AI-Pac: the Flint monthly digest on AI policy in Asia-Pacific

AI-Pac is Flint’s new monthly digest providing our take on the major developments in Asia-Pacific AI policy. Please see here for our first blog, covering January and February. Each month, we will set out:

  • Major AI regulatory developments across Asia-Pacific;
  • Overall trends of AI regulation in Asia-Pacific;
  • What this means for businesses and how they might best prepare.

In March, the most consequential AI policy developments took place in South Korea, China, and India:

  • The Summit for Democracy in Seoul demonstrated how the impact of AI on democracy is a key concern for several governments in the region and across the world. Looking ahead, Seoul will also host the AI Safety Summit in May, though it remains to be seen how far this will meaningfully evolve policy discussions beyond the first UK-hosted AI Safety Summit last November. 
  • Delegates at China’s Two Sessions debated how to prevent China from falling further behind the US in the development of advanced AI capabilities. This could potentially signal a shift in emphasis towards developing AI capabilities over regulation and control, but AI policy-making in China remains highly opaque.
  • In India, the government walked back an advisory requiring AI developers to seek pre-approval before launching AI models. This shows the level of unpredictability that persists around much of AI policy and how domestic political considerations can shape the trajectory of AI regulation.   

Taken together, these developments demonstrate the ongoing tension that governments face between promoting dynamic and competitive AI ecosystems and regulating the potential risks. They also show the various driving forces at play as governments consider how to best regulate AI. We expect governments to emphasise different parts of these agendas at different times, according to their wider strategic priorities.

Summit for Democracy in Seoul: the impact of AI on democracy in focus

From 18 to 20 March, South Korea hosted representatives from over thirty countries at the third Summit for Democracy. The Summit for Democracy is an initiative launched in 2021 by US President Joe Biden to convene democratic nations with the aim of strengthening democracy around the world. 

At the Summit, among Asia-Pacific leaders, both South Korea’s President, Yoon Suk Yeol, and Japan’s Prime Minister, Fumio Kishida, warned of the potential threat to democracy from the use of AI in generating and distributing disinformation. Prime Minister Kishida noted ongoing efforts to progress multilateral discussions on AI governance through the G7-led Hiroshima AI Process, and suggested that participation in this initiative could be expanded beyond the G7. He also noted that Japan’s AI Strategy Council will be discussing AI-generated misinformation and disinformation. The discussions underline that the potential for AI to undermine information integrity is a widely shared concern around the region and is likely to be a key area for regulation. 

South Korea will also host the second AI Safety Summit in May, following the Bletchley Summit held by the UK in November. Korean and UK officials have been working together to finalise the agenda, but details of the Summit have not yet been announced. The Summit is likely to consist of a virtual meeting of heads of state, an in-person meeting of digital ministers, and a ‘Global AI Forum’ that will convene a select group of leading AI model developers and government representatives. 

At the Summit, Korean officials would like to progress discussions around the Digital Bill of Rights, a concept first outlined by President Yoon in September 2023, which focuses on promoting freedom, fairness, safety, innovation, and international cooperation in the digital environment. It is also possible that the establishment of a UN-affiliated international AI regulatory organisation will be proposed at the Summit. However, with details still to be announced amid rumours of disagreement between Korean and UK officials over the agenda, it remains to be seen how far the Summit will substantially evolve discussions beyond Bletchley.

China: signs of a rebalancing 

China’s government held its annual Two Sessions political meetings - the parallel gatherings of the National People’s Congress (NPC) and the Chinese People’s Political Consultative Congress (CPPCC) - over the course of a week in early March. The Government Work Report published during the Two Sessions announced an “AI+” initiative. Whereas previous Work Reports and government initiatives have promoted the development of cutting-edge AI capabilities, AI+ appears to shift towards promoting the application of AI across the economy.

Discussions at the CPPCC - a consultative forum bringing together industry stakeholders and other experts to advise on government policy - raised concerns that China is falling further behind the US in the development of advanced AI capabilities. Zeng Yi, CEO of the Shenzhen-based China Electronics Corporation, warned that “we are at risk of seeing an even wider gap”. To address this, delegates proposed a range of initiatives, including establishing a unified market for computing power services and the coordination of academic and industrial resources to build up a “sovereign LLM”. This would be in keeping with China’s emphasis on ensuring that the training of AI models and the outputs they generate reflect socialist values.

There was also discussion of the appropriate structures for regulating AI. China already has regulations in place for specific aspects of AI development, including recommendation algorithms, deep synthesis, and generative AI. In June 2023, China’s State Council put developing a comprehensive national AI law on its agenda. In March, influential AI academics released a proposed draft version of this law. It included a greater emphasis on developing AI capabilities rather than regulation alone, divided regulatory responsibilities across existing ministries rather than creating a new agency, and targeted rules at developers of the most advanced frontier models.

It remains to be seen where these proposals lead in practice. However, the overall tone of discussions at the Two Sessions could presage a rebalancing in China’s approach, away from strict regulation and towards promoting AI development.

India: a tale of two advisories

On 1 March, India’s Ministry of Electronics and IT (MeitY) issued an advisory to social media intermediaries and AI platforms, asking them to seek a permit from the government before launching any AI products and warning that they could face legal action under India’s IT Act for failing to comply. However, following a vocal backlash from industry and civil society groups, a subsequent advisory on 15 March removed the pre-launch permit requirement, instead requiring developers to label AI models to inform users of the “possible inherent fallibility or unreliability of the output generated”.

The initial advisory followed shortly after screenshotted responses from Google's Gemini chatbot to the question, “Is [Prime Minister Narendra] Modi a fascist?”, had prompted criticism from the Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar. As early as 2 March, following concerns expressed by Indian start-ups about the impact of the 1 March advisory on AI innovation in India, Chandrasekhar had sought to clarify it was only targeted at larger “significant platforms”.

Analysts have cast doubt on the legal status of the 1 March advisory. Rather than an effort to set down lasting regulation, this could in part have been intended more as an act of political signalling. Some form of AI regulation is likely after the general election (to be held from 19 April to 1 June), though Indian ministers have previously said that they will take a light-touch approach that does not restrict rapid AI innovation. Nevertheless, March’s developments show the risk that post-election, India will take a reactive approach that focuses on specific issues and favours local players over one that is comprehensive, holistic and open. The risk that AI regulation is developed as a reaction to events, rather than as part of a coherent strategy, extends far beyond India. However, companies developing and deploying AI models in India should be particularly mindful of how those models treat politically sensitive issues.

This blog was written by Ewan Lusty and David Skelton, who are both based in Singapore, with contributions from our Senior Adviser network based across the Asia-Pacific region. Ewan has been supporting Flint clients on a range of digital policy issues for over four years, prior to which he advised UK government ministers on online safety regulation. David spent seven years at Google, most recently working within the Global Strategy and Innovation team, as well as leading Google’s Policy Planning team within Asia Pacific. For more detail on any of the developments discussed in this blog, contact us here.
