AI-PAC: the Flint digest on AI policy in Asia-Pacific

AI-PAC is Flint’s new digest providing our take on the major developments in Asia-Pacific AI policy and regulation. In a fast-moving environment, this newsletter is designed to help you stay on top of the regulatory agenda and to consider what it means for your business. Please see here and here for our first two blogs, covering January, February, and March. In each update, we will set out:

  • Major regulatory developments across Asia-Pacific regarding AI regulation;
  • Overall trends of AI regulation in Asia-Pacific;
  • What this means for business and how business might best prepare.

In April and May, key developments included: 

  • The AI Seoul Summit achieved some progress in deepening international collaboration on AI safety research, but also highlighted differences over whether to regulate AI technology itself or only its applications.
  • Japan signalled its intention to move ahead with regulation that is likely to focus on the most advanced AI models.
  • Competition authorities in South Korea and India announced studies into the implications of AI for competition and consumer protection.
  • Singapore launched its finalised non-binding Model AI Governance Framework for Generative AI, indicating its ambition to be a global leader in advocating a flexible regulatory approach based on working with industry and pursuing international alignment.
  • China held separate talks on AI with the US and Russia, highlighting ambiguity over how it will engage internationally on AI policy and the extent to which it will cooperate with Western countries.

AI Seoul Summit: deepening international collaboration on AI safety 

On 21-22 May, the Governments of the UK and the Republic of Korea co-hosted the AI Seoul Summit. Whereas the inaugural AI Safety Summit, held in the UK in November 2023, focused on frontier AI safety, the Seoul Summit broadened the agenda to more immediate concerns such as the impact of AI on employment, while also highlighting the importance of innovation and inclusivity alongside safety.

The Summit achieved some progress towards deepening international collaboration. Most notably, world leaders from ten countries (including, from the Asia-Pacific region, Australia, Japan, Korea, and Singapore) signed two documents: the Seoul Declaration, which emphasises the need for interoperability between AI governance frameworks, and the Seoul Statement of Intent toward International Cooperation on AI Safety Science, which sets out an ambition to create an international network of AI safety institutes to collaborate on AI safety research. In Asia-Pacific, Japan and Singapore have already launched their own AI safety institutes. In addition, digital ministers from 27 countries (including, from the Asia-Pacific region, Australia, India, Indonesia, Japan, Korea, New Zealand, the Philippines, and Singapore) signed the Seoul Ministerial Statement, agreeing to share best practice on defining risk thresholds and to cooperate in ensuring that the development of AI reflects “shared values”.

At the same time, the Summit highlighted important differences of opinion. One area of disagreement centred on whether to regulate AI technology itself or to focus on applications of the technology. In a keynote speech, Andrew Ng, Stanford University professor and former head of Google Brain, argued that the versatility of AI technology and the diversity of potential use cases make application-specific regulation preferable to trying to regulate the technology itself. Other experts, such as MIT professor Max Tegmark, countered that the potential existential risks of AI make it necessary to regulate the development of the most powerful AI models themselves, not just how they are applied.

Significantly, China did not sign the Seoul Ministerial Statement, despite having signed the Bletchley Declaration in November 2023. The Seoul statement’s reference to “shared values” may have been one factor in China’s decision, which was also likely influenced by geopolitical rivalry and China’s broader concerns about US export controls related to the technology.

The next AI summit is scheduled for February 2025 in France. Asia-Pacific governments will continue to engage closely with global discussions, which will also influence the development of domestic policy frameworks across the region.

Japan: cautious progress towards regulation

On 22 May, Japan’s Chief Cabinet Secretary, Yoshimasa Hayashi, said at a press conference that the government intends to advance discussions on legislation to regulate AI. This followed a white paper published in April by representatives of the ruling Liberal Democratic Party (LDP). The white paper expresses concern about bias, hallucination, and AI-enabled fraud, and calls for balancing Japan’s existing “soft-law” approach with a new emphasis on “hard law” to provide peace of mind and security. It advocates mandatory minimum legal requirements for AI models “that pose extremely high risks”, targeting obligations at the most advanced models. A legislative draft could follow as soon as this summer.

Looking ahead, Japan will continue to be a vocal advocate for global cooperation on AI safety while moving ahead with domestic regulation. Japan’s regulatory approach is likely to be more targeted than broad regimes such as the EU’s AI Act: government officials are mindful of the risk that rules that are too broad in scope and too prescriptive would deter AI-related investment in Japan. Japan will also seek to align its approach with democratic allies as the global debate on AI safety evolves.

South Korea, India: competition and consumer protection the latest frontier in AI

On 5 April, South Korea’s competition regulator, the Korea Fair Trade Commission (KFTC), launched a public tender for an in-depth study of the domestic generative AI market. The study will assess structural barriers that could stifle competition in AI, including algorithmic collusion, discriminatory treatment of content providers, restrictions on platform access and usage, and the acquisition of nascent firms. The KFTC will also examine consumer protection issues arising from generative AI.

On 22 April, the Competition Commission of India (CCI) similarly invited proposals from agencies to conduct a market study on AI and competition. The study aims to understand market dynamics, in particular who the key stakeholders are at different stages of the AI life-cycle and what the main barriers to entry are. Like the KFTC, the CCI will also consider consumer protection issues, for example how AI-driven personalised recommendations influence consumer welfare and choice.

The South Korean and Indian market studies follow on from announcements in recent months by competition regulators in the US, UK, EU, and France that they are scrutinising the competition dynamics around AI. Competition regulators are increasingly coming together to discuss their approach to AI. The KFTC recently discussed competition issues relating to AI in bilateral meetings with the US FTC and the European Commission at the 3rd Enforcers Summit in Washington DC. 

Asia-Pacific competition regulators are at an early stage in building their understanding of the market dynamics around AI. In the coming months and years, however, businesses can expect increasing scrutiny of the competition and consumer implications of AI. In some markets, regulators’ approach may have a protectionist edge. This could include, for instance, assessing whether concentration in data, compute, and AI talent prevents domestic businesses from competing with foreign players in developing and deploying AI models.

Singapore: pioneering a more flexible, cooperative model of AI governance

On 30 May, Singapore’s Deputy Prime Minister Heng Swee Keat launched the finalised Model AI Governance Framework for Generative AI. Singapore also announced a plan to work with Rwanda on a Digital Forum of Small States (FOSS) AI Governance Playbook to help small states develop AI governance frameworks. On 31 May, Singapore unveiled Project Moonshot, one of the world’s first open-source testing toolkits designed to help AI developers and deployers manage the risks of deploying Large Language Models (LLMs).

The Model AI Governance Framework provides non-binding guidance to businesses across the AI supply chain on developing and deploying AI safely. Its non-binding nature reflects the Singaporean government’s view that it is “too early” for stringent AI regulation. Rather than introducing new regulation, Singapore’s current approach is to work with sectoral regulators to assess whether existing rules need to be strengthened to account for advances in AI.

For businesses, Singapore’s partnership with Rwanda and the Forum of Small States (FOSS, an informal cross-regional group at the UN) raises hopes of further global collaboration on AI governance, in particular in bringing the global south into the discussion. It remains to be seen, however, how far the proposed Digital FOSS AI Governance Playbook will diverge from existing multilateral efforts by the UN, G7, GPAI, and others to reflect the particular concerns of smaller states.

Overall, these announcements underline Singapore’s goal of acting as a global leader in developing a balanced approach that establishes guardrails for AI while maximising space for innovation. Singapore’s approach to AI governance stands out globally for its focus on industry engagement, its emphasis on developing practical AI safety tools, and its energetic advocacy of international interoperability in AI governance.

China: ambiguity over geopolitical positioning on AI

On 14 May, China and the US convened their first intergovernmental dialogue on AI. According to a statement from China’s Ministry of Foreign Affairs, China expressed its willingness to work with the international community to establish a global AI framework. China also reiterated its opposition to US export controls and “suppression” related to China’s development of AI.

On 16-17 May, China subsequently held talks with Russia, agreeing (document in Chinese) to build a regular negotiation mechanism to strengthen cooperation on AI. The two countries also expressed their opposition to unilateral restrictive measures by third countries that hinder the development of AI.

The two developments, coupled with China’s refusal to sign the Seoul Ministerial Statement, highlight ambiguity over the extent to which China is willing to engage with perceived Western-led multilateral efforts on AI governance. On the one hand, China’s participation in the China-US dialogue suggests at least some willingness to cooperate in implementing guardrails around high-risk AI development. On the other, US export controls on AI chips are leading some in Beijing to question the value of engaging with Western nations on AI policy. China last year proposed its own Global AI Governance Initiative and has also driven the formation of an “AI study group” within the expanded BRICS group. China may seek to challenge perceived Western leadership on multilateral policy initiatives around AI, championing alternative fora for cooperation such as the UN and the BRICS group.

This blog was written by Ewan Lusty and David Skelton, who are both based in Singapore, with contributions from Charlie Jackson, based in London, and our Senior Adviser network based across the Asia-Pacific region. Ewan has been supporting Flint clients on a range of digital policy issues for over four years, prior to which he advised UK government ministers on online safety regulation. David spent seven years at Google, most recently working within the Global Strategy and Innovation team, as well as leading Google’s Policy Planning team within Asia Pacific. Charlie is one of Flint’s AI experts and previously worked in the UK’s Department for Digital, Culture, Media and Sport. For more detail on any of the developments discussed in this blog, contact us here.

© Flint Global 2024