
AI-Pac June edition: the Flint digest on AI policy in Asia-Pacific

AI-Pac is Flint’s digest providing our take on the major developments in Asia-Pacific AI policy and regulation. In a fast-moving environment, this newsletter is designed to help you stay on top of the regulatory agenda and to consider what it means for your business. Please see here, here, and here for our first three blogs, covering January to May. In each update, we will set out:

  • Major regulatory developments across Asia-Pacific regarding AI regulation;
  • Overall trends of AI regulation in Asia-Pacific;
  • What this means for business and how business might best prepare.

After a burst of announcements earlier in the year, June was a quieter month. Key developments included: 

  • The Korean government opened a public consultation on the safety and ethics of AI. As Korea deliberates on its regulatory approach to AI, this move to engage the public on ethical and governance issues around technology could set a precedent for other countries to follow.
  • Thailand announced that it is moving forward with AI regulation. Policymakers are considering the appropriateness of precedents including the EU’s AI Act and Singapore’s Model Governance Framework for Generative AI to the Thai context, and there is an opportunity for business to engage policymakers on the detail.
  • Hong Kong’s Privacy Commissioner published guidance on AI and data protection. The messaging accompanying the guidance hints at uncertainty over how far, in future, Hong Kong’s policy approach to AI will hew to that of China.

Korea: engaging the public on AI safety and ethics

On 12 June, the Ministry of Science and ICT in the Republic of Korea opened a consultation on the safety, trust, and ethics of AI. The consultation aims to engage the Korean public on the opportunities and threats of AI and on the appropriate regulatory direction for safe and reliable AI. It runs alongside a “policy idea contest”, inviting the public to submit ideas for policies on issues such as gaps in the usage of AI and how to ensure AI developers are aware of ethical issues, with prize money awarded to the best ideas, as judged by the Ministry. The consultation will run until 31 July.

The consultation is an important step in the Korean government’s plan to establish a “New Digital Order” based on the Digital Bill of Rights concept outlined by Korea last September (see our earlier blog for more on the Digital Bill of Rights). Following this consultation, the government plans to launch further consultations on improving digital accessibility, combating fake news with AI, and the stable implementation of telemedicine later in the year. The consultations reflect the government’s recognition of the need to engage the public on key issues relating to the governance of technology in order to build trust. This consultative model could be emulated by other governments across the region and globally.

Korea has so far adopted a ‘develop first, regulate later’ approach to AI, with ministers emphasising the need for an agile, voluntary framework with “only the necessary minimum regulations”. The Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI remains under review in the National Assembly. It aims to support the adoption of AI technology while proposing limited specific obligations for high-risk AI. However, opposition parties, who have a majority in the National Assembly, may introduce their own stricter regulatory proposals in the coming months, creating uncertainty over Korea’s direction of travel.

One area to watch in Korea is the extent to which the government, in its ambition to make Korea a top-three country in AI technology, seeks to favour Korean AI developers at the expense of multinational companies. Multinationals seeking to develop and deploy AI in Korea may need to build a case that, in order to realise its ambitions, Korea will need to remain open to investment, innovation and talent from around the world.

Thailand: intention to regulate, but approach still to be determined

On 14 June, Thailand’s Office of the National Digital Economy and Society Commission announced that Thailand’s Electronic Transactions Development Agency (ETDA) has been directed to develop legislation for AI regulation. 

Thailand’s approach to AI is underpinned by the government’s ambition to transform Thailand into a globally and regionally competitive digital economy hub. ETDA officials drafting the AI law are wrestling with how to balance digital security with innovation and flexibility. They are particularly concerned about the risk of AI-enabled scams and have also expressed concerns about bias, citing the potential for AI-backed job application systems to eliminate qualified applicants.

As it develops regulations, the ETDA is reviewing the applicability of precedents such as the EU’s AI Act to Thailand, while taking into account differing cultural norms and values between the EU and Thailand. At the same time, senior officials at ETDA’s AI Governance Center have expressed interest in Singapore’s approach of working with industry to develop an agile, non-binding framework for responsible use of AI (see our last blog on Singapore’s Model Governance Framework for Generative AI). Thailand is participating in the ASEAN Working Group on AI Governance, which may see discussions of Singapore-like alternatives to the EU’s AI Act. One concern in Thailand is the potential burden on regulators and government of enforcing overly complex regulation.

Progress on regulation is likely to be slow. The ETDA is welcoming input from some businesses on its approach, so there may be an opportunity for businesses developing or deploying AI in Thailand to present their view on appropriate forms of governance.

Hong Kong: do references to China foreshadow more substantive alignment?

On 11 June, Hong Kong’s Privacy Commissioner’s Office published the Artificial Intelligence: Model Personal Data Protection Framework (Model Framework). The Model Framework sets out non-binding recommendations and best practices for organisations to comply with Hong Kong’s Personal Data (Privacy) Ordinance (PDPO) while deploying AI services.

The guidance in the Model Framework itself is similar in nature to documents issued by privacy regulators in Singapore and the UK, among others. It covers four areas, calling on organisations to: (i) have an internal AI governance strategy; (ii) adopt a risk-based approach in the procurement, use and management of AI systems; (iii) implement continuous monitoring and review after the adoption of an AI model; and (iv) provide appropriate transparency and openness to different types of stakeholders.

However, the Privacy Commissioner’s Office’s messaging around the report hints at broader uncertainty on Hong Kong’s future direction on AI and digital governance. The Model Framework notes that it “reflects the prevailing norms and best practices of the international community”, as well as “being supportive of the Global AI Governance Initiative promulgated by the Mainland [China] in 2023”. This is just one of several references to mainland China-led AI initiatives in the messaging around the report. 

Despite these references, the Model Framework itself is in practice considerably different from mainland Chinese initiatives such as the Interim Measures for the Management of Generative Artificial Intelligence Services issued in 2023. For example, there is no reference to upholding “Core Socialist Values”. However, there remain questions over whether, in future, digital and AI regulation in Hong Kong will start to hew more closely to that of mainland China. These questions are particularly salient in light of the recent court order to block access to Hong Kong protest songs on social media within the territory.

Singapore: pioneering a more flexible, cooperative model of AI governance

On 30 May, Singapore’s Deputy Prime Minister Heng Swee Keat launched the finalised Model AI Governance Framework for Generative AI. Singapore also announced its plan to work with Rwanda to create a Digital Forum for Small States (FOSS) AI Governance Playbook that aims to help small states develop AI governance frameworks.  On 31 May, Singapore unveiled the “Project Moonshot” initiative, one of the world’s first open-source testing toolkits designed to help AI developers and deployers manage risks in deploying Large Language Models (LLMs). 

The Model Governance Framework provides non-binding guidance to businesses across the AI supply chain on developing and deploying AI safely. Its non-binding nature reflects the Singaporean government’s view that it is “too early” for stringent AI regulation. Rather than implementing new regulations, Singapore’s current approach is to work with sectoral regulators to assess if existing regulations need to be strengthened to account for advances in AI.

For businesses, Singapore’s partnership with Rwanda and the Forum for Small States (FOSS; an informal cross-regional group at the UN) raises hopes of further global collaboration on AI governance, in particular bringing the global south into the discussion.  It remains to be seen, however, how far the proposed Digital FOSS AI Governance Playbook will diverge from existing multilateral efforts from the UN, G7, GPAI, and others, to reflect the particular concerns of smaller states. 

Overall, these announcements underline Singapore’s goal to act as a global leader in developing a balanced approach to establishing guardrails for AI while maximising space for innovation. Singapore’s approach to AI governance stands out globally for, firstly, its focus on industry engagement, secondly, its emphasis on developing practical AI safety tools, and thirdly, its energetic advocacy of international interoperability in AI governance.

China: ambiguity over geopolitical positioning on AI

On 14 May, China and the US convened their first intergovernmental dialogue on AI. According to a statement from China’s Ministry of Foreign Affairs, China expressed its willingness to work with the international community to establish a global AI framework. China also reiterated its opposition to US export controls and “suppression” related to China’s development of AI.

On 16-17 May, China subsequently held talks with Russia, agreeing (document in Chinese) to build a regular negotiation mechanism aimed at strengthening cooperation on AI. The two countries also expressed their opposition to unilateral restrictive measures by third countries that hinder the development of AI.

The two developments - coupled with China’s refusal to sign the Seoul Ministerial Statement - highlight ambiguity over the extent to which China is willing to engage with perceived Western-led multilateral efforts on AI governance. On the one hand, China’s participation in the China-US dialogue suggests at least some willingness to cooperate in implementing guardrails around the high-risk development of AI. However, US export controls on AI chips are leading some in Beijing to question the value of engaging with Western nations on AI policy. China last year proposed its own Global AI Governance Initiative and has also driven the formation of an “AI study group” within the expanded BRICS group. China may seek to challenge perceived Western leadership on multilateral policy initiatives around AI, championing alternative fora for cooperation such as the UN and the BRICS group. 


This blog was written by Ewan Lusty and David Skelton, who are both based in Singapore, with contributions from Charlie Jackson, based in London, and our Senior Adviser network based across the Asia-Pacific region. Ewan has been supporting Flint clients on a range of digital policy issues for over four years, prior to which he advised UK government ministers on online safety regulation. David spent seven years at Google, most recently working within the Global Strategy and Innovation team, as well as leading Google’s Policy Planning team within Asia Pacific. Charlie is one of Flint’s AI experts and previously worked in the UK’s Department for Digital, Culture, Media and Sport. For more details on any of the developments discussed in this blog, contact us here.
