
AI-Pac: the Flint monthly digest on AI policy in Asia-Pacific

Now that Flint has opened its second Asia-Pacific office in Singapore, with a particular focus on technology policy and regulation, we are launching a new monthly digest providing our take on the major developments in Asia-Pacific AI policy. Each month we will set out:

  • Major developments in AI regulation across Asia-Pacific;
  • Broader trends in how Asia-Pacific governments are approaching AI;
  • What this means for businesses and how they might best prepare.

This first blog is a bumper edition, covering the first two months of 2024 and drawing on insights from our expert network of Senior Advisers based across the region.

January and February were crowded months for AI policy developments. Several Asia-Pacific countries took important steps towards clarifying their approach to issues such as AI safety regulation and the implications of AI for copyright. For now, although there is no pan-Asia-Pacific consensus on AI governance, most Asia-Pacific governments’ focus is on maximising the benefits of AI. There is a clear ambition to ensure that regulation does not inhibit AI’s potential for raising productivity and living standards, but governments may struggle to realise this ambition in practice. This edition covers major developments so far this year in Japan, Australia, Singapore, ASEAN, and Indonesia, with implications not just for businesses in the technology sector but for businesses across the economy deploying AI.

Japan: a tentative step towards mandatory rules

On 16 February, a team from Japan’s governing Liberal Democratic Party (LDP) announced that they are developing a “Responsible AI Promotion Act” and produced a set of explanatory slides. The slides outline LDP law-makers’ ambition to introduce legislation regulating AI safety later in 2024. Importantly, for now, this is a document produced by the governing party rather than a government legislative proposal.

The LDP representatives propose to focus regulation on “designated AI infrastructure model developers”, and to avoid targeting small-scale models or start-ups. The slides leave as an open question the criteria for designating model developers as in scope, but suggest this will relate to their “scale” and “purpose”, including whether a model is general-purpose. In-scope developers will need to share information with the government about their compliance with a set of seven obligations, which are closely modelled on the White House Voluntary AI Commitments agreed in 2023.

The details announced so far leave important questions unanswered, but the proposal is far narrower in scope and content than the EU’s AI Act. Importantly, the LDP team is soliciting feedback from industry. The criteria and process for designating which developers are in scope will be a key area of focus for business. Japanese officials have previously expressed concern that Japan lags the US, China and Europe in AI development and uptake, and will want to avoid regulating in a way that prevents Japan from narrowing the gap. Following the LDP team’s announcement, the Ministry of Economy, Trade and Industry (METI) will need to draft and consult on legislation, providing more scope for industry to engage.

Separately, on 24 January, the expert panel on AI and copyright protection convened by Japan’s Agency for Cultural Affairs published its preliminary report, inviting public comment until 12 February. Rights-holders had hoped for changes to Japan’s copyright law, which includes broad exemptions for the use of data for training AI models. However, the expert panel sought only to clarify the parameters of existing law rather than recommending its overhaul. Nevertheless, continuing pressure from Japan’s influential creative sector, coupled with developments on this issue elsewhere, could cause Japan’s position to shift in future.

Australia: a risk-based approach and the formation of expert groups

On 17 January, Australia’s government published its interim response to its consultation on Safe and responsible AI in Australia. The government is considering introducing legislation for mandatory safety guardrails for AI in high-risk settings. The definition of “high-risk” remains unclear; the Minister for Industry and Science, Ed Husic, has said only that high-risk AI is “anything that affects the safety of people’s lives, or someone’s future prospects in work or with the law”. Australia has since formed an AI Expert Group of academics to advise on AI safety and submit a report by 30 June, with industry associations invited to ‘observe’ its work.

The government’s ambition to focus the majority of new regulatory obligations on high-risk AI deployment is similar to that of the EU’s AI Act. However, unlike the AI Act, Australia’s government so far does not plan to ban specific “unacceptable risk” AI systems, introduce new rules for foundation models, or impose transparency requirements for “limited risk” AI systems. The government has not yet committed to legislating in 2024, but it will face pressure from both the Opposition and civil society to go faster and to broaden the scope of regulation. Growing calls to develop an Australian sovereign AI capability are increasingly shaping the debate on AI safety, and companies could face increased pressure to ensure AI reflects “Australian values”.

Separately, the Attorney General has established a Copyright and AI Reference Group and Steering Committee with cross-sector industry stakeholders. Technology companies have advocated reforming Australia’s copyright law to allow for greater flexibility for training AI models, but the government may instead bow to pressure from rights-holders to implement a mandatory bargaining mechanism forcing AI developers to pay for the use of data. 

Singapore: seeking cross-border consensus on generative AI

On 16 January, Singapore’s government published a draft Model AI Governance Framework for Generative AI, welcoming public comment until 15 March. The paper is intended for an international audience and is not a consultation on Singapore’s domestic approach.

The framework does not propose introducing mandatory regulation for generative AI models, instead advocating a flexible, co-regulatory approach to ensuring generative AI is developed responsibly. It demonstrates Singapore’s ambitions to be an active player in global and regional discussions on AI governance (also seen in the next item on ASEAN), distinguishing its approach as one focused on practical tools and cooperation with industry rather than sweeping rules. Under the AI Verify Foundation, Singapore’s Infocomm Media Development Authority (IMDA) has developed open-source AI safety testing tools and aims to deepen its collaboration with industry to generate use cases for these tools. As businesses contemplate the future global landscape of AI regulation, Singapore’s approach, if adopted elsewhere, presents a potentially appealing contrast to that of the EU.

ASEAN: fragmentation likely despite deepening discussions

On 2 February, ASEAN digital ministers released the ASEAN Guide on AI Governance and Ethics, a non-binding practical guide for organisations developing or deploying AI technologies. Ministers also agreed to establish an ASEAN Working Group on AI Governance.

The Guide advises businesses on AI safety and makes national-level recommendations for ASEAN member governments. The content underlines Singapore’s importance in driving discussions at ASEAN level, with much of the guidance reflecting Singapore’s previous Model Artificial Intelligence Governance Framework published in 2020. Notably, the national-level recommendations make no mention of regulation, instead advocating policies around skills, data, and R&D that create an enabling environment for AI innovation. This suggests that, for the time being, many ASEAN countries are waiting to see how AI regulation unfolds in other jurisdictions rather than rushing to introduce rules of their own.

How far this will lead to regional convergence on AI policy remains in doubt. Obstacles include a gaping divide in the progress of digitalisation between different ASEAN members and starkly contrasting approaches to digital governance. For example, the Guide advocates facilitating cross-border data flows, but Indonesia, Vietnam, and (potentially) Cambodia have all implemented data localisation policies. Fragmentation will continue to be a challenge for businesses operating cross-border.

Indonesia: the use of AI in elections

On 14 February, Indonesia held its presidential election. The run-up to the election included several high-profile instances of AI-generated content. As more information emerges on the role of AI in the election, it could intensify concerns about other elections later in 2024 and increase the urgency of calls for regulation to protect against AI-generated disinformation and hate speech.

In addition to the more high-profile examples, political advisers and lobbyists reported the widespread use of generative AI tools. One political consultant claimed to have sold an app generating hyper-local campaign content to 700 legislative candidates, and indicated plans to market the app to candidates in India ahead of its general election, expected by May. Some Indonesian political strategists and activists argue that AI is a leveller, providing customised tools that would otherwise only be accessible to well-funded candidates, while also helping to engage a wider proportion of the population in politics. Others have expressed concern about the potential for deepfakes, for example of candidates speaking Arabic, to influence election outcomes.

The impact of AI on the election will only become clear over time. As researchers and civil society organisations gather data and publish their conclusions on the lessons learned, Indonesia’s election is certain to shape approaches to future elections and debates on AI regulation more broadly. This is likely to include calls to make tech companies liable for the use of their products to create harmful deepfakes and to require them to share data with regulators on the effectiveness of tools such as watermarking.


This blog was written by Ewan Lusty and David Skelton, who are both based in Singapore, with contributions from our Senior Adviser network based across the Asia-Pacific region. Ewan has been supporting Flint clients on a range of digital policy issues for over four years, prior to which he advised UK government ministers on online safety regulation. David spent seven years at Google, most recently working within the Global Strategy and Innovation team, as well as leading Google’s Policy Planning team within Asia Pacific. For more details on any of the developments discussed in this blog, contact us here.
