Sep 15, 2025

International Day of Democracy 2025

Professor Hiroki HABUKA, Kyoto University Graduate School of Law


The European Union has positioned itself as a global leader in the comprehensive regulation of Artificial Intelligence, especially since the entry into force of the AI Act. A growing number of EUVP participants centre their study visits on this regulation to learn from best practices and discuss the risks of this rapid technological development.

On this year’s International Day of Democracy, the EUVP reached out to one of its Alumni to discuss how AI can be used to improve governance and how other countries are addressing its threats. Hiroki HABUKA is a Research Professor at the Kyoto University Graduate School of Law and a strong proponent of using AI for good. He tells us more about the potential of AI for better democracy and the differences between European and Japanese approaches to AI. 


Hiroki, what are positive examples where AI is used today to strengthen democratic participation and responsible governance?

AI is already beginning to function as a powerful tool to enhance democracy and governance. In Japan, Code for Japan applies AI to analyse public budgets and citizen feedback, creating applications that empower residents to track government spending and identify specific community needs like infrastructure repairs. A pioneering example can be seen in Taiwan with its "vTaiwan" platform, a system that uses algorithms to group diverse public opinions on specific policy issues, such as the legalisation of ridesharing, thereby visualising the landscape of public opinion and clarifying points of contention. These examples demonstrate AI's potential to help the deliberative processes of democracy keep pace with the speed of modern society.
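
To make the mechanism concrete, here is a minimal sketch of the opinion-grouping technique behind platforms like vTaiwan, which builds on the open-source Pol.is engine: participants are clustered by their voting patterns on policy statements. This is an illustration in Python, not the platform's actual code; the toy vote matrix and the choice of two clusters are assumptions made purely for demonstration.

  # Minimal sketch of opinion clustering in the spirit of vTaiwan/Pol.is.
  # NOT the platform's real code: the vote matrix and the cluster count
  # below are toy assumptions for illustration only.
  import numpy as np
  from sklearn.decomposition import PCA
  from sklearn.cluster import KMeans

  # Rows = participants, columns = statements; 1 = agree, -1 = disagree, 0 = pass.
  votes = np.array([
      [ 1,  1, -1,  0],
      [ 1,  1, -1, -1],
      [-1, -1,  1,  1],
      [-1,  0,  1,  1],
      [ 1, -1,  0,  1],
  ])

  # Project each participant's voting pattern to 2D, giving an "opinion map".
  coords = PCA(n_components=2).fit_transform(votes)

  # Group participants with similar patterns. Points of contention are the
  # statements on which the resulting groups vote differently.
  groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
  print(groups)  # e.g. [0 0 1 1 0] — two opinion camps among five participants

In real deliberations the matrix has thousands of participants and statements, but the principle of visualising opinion groups and their points of disagreement is the same.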

Beyond direct participation, AI contributes significantly to the protection of human rights, such as by managing online content. It can detect and handle hate speech and defamation on social media at a scale and speed impossible for humans. Furthermore, in the realm of administrative services, generative AI can assist citizens in preparing official documents, potentially ensuring that public support reaches those who previously struggled with bureaucratic procedures.


Looking ahead, what opportunities do you see for AI to improve democracy over the next 10 years? Think tanks such as the AI Futures Project warn about serious threats linked to the unregulated development of AI. What do you consider the biggest risks of everyday and frontier AI? How can they be managed?

Over the next 10 years, I see three major opportunities: First, the evolution of participatory democracy, where AI-powered platforms make citizen deliberation more sophisticated and accessible, truly allowing democratic values to function at the speed of contemporary society. Second, enhanced governance efficiency and transparency, with AI streamlining policy formation and risk monitoring. The knowledge gained in AI governance can even be applied to other regulatory areas, strengthening our societal systems in general. Third, improved access to justice, as AI can assist in dispute resolution by analysing past judicial tendencies and supporting online dispute resolution (ODR), making justice faster and more accessible for citizens.

However, the risks are equally significant and exist on two levels:
For everyday AI, the threats are already apparent: Generative AI can easily create convincing deepfakes and false narratives, eroding social trust and democratic cohesion. Biases embedded in training data can perpetuate and amplify discrimination in critical areas like hiring, lending, and criminal justice. In addition, the dependence on vast datasets concentrates power in a few global tech companies, stifling competition. Finally, there is the “Black Box Problem”: The inherent difficulty in explaining the reasoning behind a deep learning model's decision undermines accountability and trust.

For frontier AI, the risks could be more profound and serious. As highlighted by the Bletchley Declaration, the "unknown unknowns" of highly capable general-purpose AI models could pose catastrophic risks in areas like cybersecurity or bioterrorism. Furthermore, the possibility of AI systems operating beyond human control or imposing a monolithic set of values on a diverse human society is a serious long-term concern.

To manage these multifaceted threats, a multi-layered approach is essential. We must adopt “Agile Governance,” a framework that combines organisations, rules, technology, and processes to respond flexibly to rapid technological change. A risk-based approach is key, applying stricter oversight and rules to higher-risk AI applications. This should be achieved through a combination of hard law and soft law: binding laws should set out fundamental requirements (e.g., acceptable risk thresholds or basic requirements for risk management and transparency), while fast-changing technical details are best handled by non-binding standards and codes of conduct (soft law).
Crucially, we must ensure transparency and accountability. This means mandating appropriate disclosure of information about an AI system and establishing clear lines of responsibility. Finally, given the borderless nature of AI, international cooperation, such as the G7 Hiroshima AI Process, is indispensable for creating interoperable standards and governance frameworks.
 

You visited Brussels in the framework of the EUVP in January this year. What were your main takeaways from the discussions you had? How do you assess the European Union’s efforts, most notably the AI Act, to regulate AI in a way that protects democratic societies and fundamental rights?

My visit to Brussels coincided with a pivotal moment, just as the EU's ‘Competitiveness Compass’ report was being released and discussions around AI's role in global competitiveness were intensifying. On the surface, there appeared to be a headwind against comprehensive regulations like the AI Act, driven by concerns that such rules could stifle innovation.


However, what I found through my dialogue with EU counterparts during the visit was that this is not a simple push for 'deregulation' that would compromise fundamental values like human rights and democracy for the sake of a competitive edge.
Rather, I see it as a pragmatic effort to accelerate the implementation of responsible AI by making the comprehensive rules of the AI Act simpler and clearer, thereby giving businesses greater predictability. Ultimately, this approach is designed to better and more swiftly realise the fundamental values that the EU champions. It’s a strategy to make the protection of democracy and fundamental rights more effective in practice, not to abandon it. While this approach differs from Japan's, which leverages existing sector-specific laws, I have profound respect for the EU's unwavering commitment to its fundamental values and its continuous, pragmatic search for the best way to realise them in this new technological era.


Could you tell our readers more about the Japanese approach to AI governance? What lessons could democracies in Europe draw from the Japanese model?

Japan's approach to AI governance is fundamentally built on a pro-innovation stance, encapsulated in its goal to become "the world's most AI-friendly country". Instead of creating a single, comprehensive law to control AI, Japan's strategy is to provide maximum support for AI utilisation. This is achieved by using its existing legal framework and incorporating an agile, multi-stakeholder process. This entire approach was recently solidified by the AI Promotion Act, which went into full effect on 1 September 2025.
The AI Promotion Act is not a law designed to restrict AI, but rather one to support its development and use. It establishes a framework for a continuous PDCA (Plan-Do-Check-Act) cycle that runs across society as a whole, which is very different from the EU's comprehensive AI Act.
 

Here’s how it works:

  1. Plan: A new AI Strategic Headquarters, led by the Prime Minister, is responsible for creating a national AI Basic Plan.
  2. Do: The government's role is to implement measures to promote AI, such as funding R&D, developing shared data centres, and promoting AI education. For businesses and citizens, the Act defines responsibilities like using AI to improve business and fostering a public understanding of AI, but these are legally non-binding "duties to endeavour". Penalties only apply if an existing law, like copyright or data protection law, is violated.
  3. Check: The government is empowered to investigate domestic and international AI trends, analyse cases of rights infringement, and consider countermeasures based on its findings.
  4. Act: Based on these investigations, the government can then amend laws, update guidelines, or provide direct advice to businesses to improve practices.


A key part of Japan's strategy is actively removing outdated regulations that hinder the adoption of new technologies. A major effort has been made to eliminate so-called "analogue regulations"—rules that required human-based checks, such as mandatory visual inspections or an on-site presence. As of May 2025, nearly all these targeted regulations have been revised, clearing the way for AI systems to be used in place of humans.
To support this, Japan's AI Safety Institute is developing a safety evaluation framework to clarify what kind of AI system is acceptable as a substitute for human-based compliance and to define liability issues. This is part of the government’s policy to develop and implement AI safety evaluation methods with specific use cases in mind, particularly in healthcare and robotics.
 

Japan is a strong proponent of international collaboration in AI regulation, having launched the Hiroshima AI Process during its G7 presidency. Considering current geopolitical trends and growing concerns about a new AI “space race”, do you believe that a common international AI framework is achievable?

Considering the current geopolitical trends, creating a single, uniform set of international "rules" would be difficult, and perhaps not even desirable. At the core lie cultural differences in how the relationship between humans and AI is viewed. For instance, the EU's AI Act seems to position AI as a "human tool" and classifies AI systems that evaluate humans (e.g., in HR or credit scoring) as high-risk. Japan, on the other hand, is more tolerant of a "horizontal relationship" with AI. We should not impose a single set of rules that ignores these important cultural perspectives.
Therefore, what we should aim for is "interoperability", premised on respect for each country's values and legal systems. The goal is a cooperative international order in which diverse values and systems can coexist and interconnect, rather than the export of one specific set of values as a global standard. The G7 Hiroshima AI Process, which established guiding principles and a code of conduct, is a successful example of this approach.
Of course, while our approaches may differ, Japan and the EU share many fundamental values, such as the rule of law, fundamental human rights, and democracy. Standing on this common foundation of values, I believe building an interoperable framework is fully achievable. As Japan has long emphasised a sector-specific approach, it is realistic to start with specific fields such as healthcare, autonomous driving, and robotics. I am confident that by accumulating success stories in these key areas, we can pave the way for broader and more effective international collaboration on AI governance.