
The four key tech policy trends in Asia-Pacific for 2024

With the opening of Flint’s second Asia-Pacific office in Singapore, we have had many discussions with businesses and policy-makers on their key priorities for 2024. This blog draws on those conversations to identify four policy trends we expect to feature heavily. It focuses on the technology sector, but these trends will impact businesses across the economy, from manufacturing to financial services.

The ‘hard-law’ vs. ‘soft-law’ debate

Across many issues, from online safety to competition in digital markets to AI, there has not been the same urgency to introduce mandatory rules in Asia-Pacific as we have seen in the EU and UK. There is no single ‘Asia-Pacific model’ of digital governance in a region as diverse as this. However, until now, the confluence of political and social forces driving regulation elsewhere - including a highly charged emphasis in public debate on the risks of new technology rather than the benefits, coupled with powerful coalitions of consumer representatives and businesses in adjacent industries advocating for regulation - has not generally been as prevalent in the region. Naturally, there are exceptions, notably Australia, which is continuing to build on regulation it has already implemented in a number of areas.

Where regulation exists, it has often taken the form of a more flexible co-regulatory approach rather than mandating specific practices. After taking office in 2022, for example, South Korea’s conservative government sought to encourage major digital players such as food-delivery apps to voluntarily reach agreements over dispute resolution with their business users, rather than implementing EU-style ‘dos and don’ts’ for large platforms.

In many Asia-Pacific countries, the balance is now shifting towards a ‘hard-law’, mandatory approach, at least in certain areas. Governments across the region see digitalisation as key to raising productivity and living standards, and are often mindful that EU-style regulation risks raising excessive obstacles to digital innovation. Nevertheless, increasing public concern over issues such as fraud, child safety, and deepfakes, alongside more effective advocacy from industries adjacent to technology, is adding to the pressure to regulate. The implementation of EU and UK digital regulation is also providing Asia-Pacific policy-makers with models to draw from. South Korea itself announced its intention to develop an EU Digital Markets Act (DMA)-style ex ante regulation for digital markets at the end of 2023. The backlash to that announcement underlines how many Asia-Pacific countries remain more ambivalent about prescriptive digital regulation than their European counterparts. Nevertheless, in many countries, the pressure for mandatory rules is growing.

The ‘year of elections’ 

As we blogged in November of last year, 2024 will be a bumper year of elections, with over 2 billion people casting their ballots at some point this year. India, the world’s most populous democracy, will go to the polls this April and May. Indonesia and Taiwan have already had keenly fought elections, and South Koreans will be voting in parliamentary elections in April. Outside of Asia, the US, EU, and UK will all be holding their own elections.

This will be the biggest set of elections yet in which AI tools can be used as part of campaigns, potentially bringing political operatives closer to their ultimate goal of making campaigning as tailored and personalised as possible. We’ve already seen deepfakes play a role in this year’s elections, with the fake Joe Biden robocall in New Hampshire being the most recent and one of the best known examples. During the Indonesian election, a deepfake of the former President Suharto, who died in 2008, was created by the Golkar Party and gained almost 5 million views on X.

Tech companies are, of course, taking steps to address this trend, rolling out watermarking, “nutrition labelling” and verification tools, with election integrity a stated priority. A “tech accord” on election integrity, signed by twenty leading technology companies, was unveiled at the Munich Security Conference. But technological tools remain imperfect, and it remains to be seen whether all platforms’ approaches will be sufficiently robust to meet the challenge new technology presents. Automated and human approaches to content moderation will face particular challenges in countries, such as Indonesia and India, with large numbers of minority languages.

Past election seasons, notably that of 2016, have had a marked impact on the trajectory of tech regulation. The role of AI in this year’s elections is likely to shape how regulation unfolds in a similar way. It should also be remembered that election campaigns often act as accelerators in bringing forward discussions around regulation.

The evolution of AI regulation

Governments around the world are grappling with how to regulate AI in a way that maximises the benefits of innovation while minimising societal risks. This will impact not just the technology companies developing the most advanced models but also companies in financial services, manufacturing, and agriculture seeking to develop their own use cases. Multilaterally, governments will be looking to build on the relatively strong momentum of the Bletchley Park summit with further AI Safety Summits in both Seoul and Paris. A number of APAC governments are engaged in the AI Safety Summit process, including Australia, India, Indonesia, Japan, New Zealand, Korea, Singapore and China. The South Korean government will also be looking to the Seoul summit in early summer to showcase its AI industry and its approach to regulation.

Throughout Asia-Pacific, countries are lining up to consider their approach to AI governance. Already this year, Australia, Singapore and Japan have published documents signalling their intended approach, with other countries across the region likely to follow later this year. Beyond broad agreement on the importance of principles such as accountability, transparency, and fairness, countries are progressing with slightly differing approaches while engaging in multilateral discussions. The approach taken by each country will be determined by factors including public attitudes to the technology, broader governmental priorities, and geopolitical considerations. Japan’s governing Liberal Democratic Party (LDP), for example, last week published draft guidelines for advanced foundation model developers, seeking to align with the US’s Voluntary AI Commitments without inhibiting AI-enabled innovation. APAC countries will also be closely observing the implementation of the EU’s AI Act.

Avoiding a messy overlap between new AI-specific rules and existing legal frameworks will also continue to be a challenge. For example, the EU is having difficulty aligning its AI Act with the provisions of the GDPR. How AI impacts existing regulation around copyright, privacy, data protection and content will have implications for companies across the economy. We will be providing a more detailed update on Asia-Pacific AI developments shortly.

The tension between fragmentation and harmonisation

Recent years have seen more and more commentators point to the growth of the “splinternet”, where acts of digital protectionism, such as data localisation, have hampered the global internet and the provision of cross-border digital services. Geopolitical tensions and concerns over national security and technological sovereignty have played an important role. This has been particularly pronounced in politically and culturally more disparate regions, such as Asia-Pacific. Different regulatory regimes across the region increase the difficulty for companies that are built to operate across borders and potentially fragment the product experience for users in different countries. 

Up until now, the most extreme examples have been bans, or threatened bans, on particular services or on services from particular countries. However, proposals to prohibit particular types of technology also present an obstacle to businesses’ ambitions of regional growth. Businesses will be closely watching the Digital India Act, set for publication later in 2024, following reports that it could include powers to ban new and emerging technology that causes harm to users or poses a threat to national security. Aware of these challenges, some Asia-Pacific countries are seeking to improve regulatory convergence and reduce friction. The ASEAN group of South East Asian countries, for example, has developed a guide on AI Governance and Ethics. The Hiroshima AI Process, aimed at achieving global AI governance, was launched under Japan’s Presidency of the G7. South Korea is hosting the next AI Safety Summit, and Singapore is also taking a key role in the multilateral conversation on AI governance. Despite these efforts at harmonisation, a fragmented environment is likely to remain the global reality in 2024.


This blog was written by David Skelton and Ewan Lusty, who lead Flint’s tech offering in Singapore. David spent seven years at Google, most recently working within the Global Strategy and Innovation team, as well as leading Google’s Policy Planning team within Asia Pacific. Ewan has been supporting Flint clients on a range of digital policy issues for over four years, prior to which he advised UK government ministers on online safety regulation. For more information on how Flint's Asia Pacific team can help you navigate any points discussed in this blog, contact us here.
