Last week the European Commission presented its long-awaited proposal for a ‘Regulation laying down harmonised rules on Artificial Intelligence’ (AI Act). Alongside this proposal, it also released an updated Coordinated Plan on AI, which seeks to balance regulation with investment.
The AI Act reveals a more nuanced approach to regulation that has evolved since the AI White Paper was published in February 2020. Industry and certain Member States have pushed for greater regulatory flexibility to support innovation in Europe. As a result, only ‘high-risk’ uses of AI will be subject to specific obligations, while requirements for non-high-risk AI are very limited. However, the starting list of high-risk applications is extensive and subject to change. The Commission will review it every year, which may result in scope creep over time and makes the updating process important to follow.
All high-risk applications will have to undergo a conformity assessment before being placed on the market. Requirements for high-risk AI systems cover the quality of data sets, record-keeping, transparency, human oversight, and robustness, accuracy and cybersecurity. Providers will also have to establish a post-market monitoring system. Certain applications will be prohibited outright: systems that distort people’s behaviour through subliminal techniques in ways likely to cause harm; the social evaluation or classification of people; and certain uses of ‘real-time’ remote biometric identification in public spaces.
The AI Act is an effort to catch up with the US and China by taking a regulatory approach markedly different from that of the emerging AI superpowers. The Commission’s bet is that fostering trust in AI will encourage its wider uptake in Europe. But there are economic concerns that over-regulation in the still relatively early phases of technological development will stifle innovation and undermine Europe’s competitive position. Commissioner Thierry Breton dismissed such concerns, stating that the EU will hold the ‘largest amount of industrial data on the planet, and talent and sovereign infrastructure to support it’. Nevertheless, striking the right balance could prove hard in practice.
The urgency with which the Commission is moving to regulate a nascent industry demonstrates its willingness to establish global regulatory dominance, and links closely to its data strategy and strategic autonomy agenda. The Commission presents the Act as ‘a powerful basis to engage further with its external partners, including third countries, and international fora’. It is set to apply regardless of a company’s location, giving the EU extraterritorial reach. In specific circumstances, the rules would apply even when AI is neither placed on the market, nor put into service, nor used in the EU, but the output it produces is used in the EU.
The goal of transatlantic cooperation may yet shape the development of the AI Act (or vice versa). The Commission has invited the US to start work this spring on a Transatlantic AI Agreement to set regional and global AI standards in line with Western values. While the Biden Administration may be more inclined to consider rules for AI and has a greater appetite for coordinated action to counter China, there is currently no indication that the US will change its soft-law approach or its existing international partnerships (e.g., the Global Partnership on AI) in order to align with the EU.
Whether the Act ultimately delivers on the ambition of making the EU a global AI leader will also depend on the success of the Commission’s investment strategy. To close the investment gap with the US and China, the EU will have to both directly allocate and leverage significant funds for the development of the European AI and data economy. The Commission’s plan includes investing 1 billion EUR per year from the Digital Europe and Horizon Europe programmes to mobilise Member State and private investment of up to 20 billion EUR, with the aim of building the EU’s AI leadership in environment, health, the public sector, robotics, mobility, home affairs and agriculture.
The Commission’s political framing remains one of leveraging the opportunities of AI while addressing its risks. In addition to regulatory requirements, the Act proposes high-level measures for Member States to support innovation. It recommends that Member States establish AI regulatory sandboxes, give SMEs and start-ups ‘priority access’ to them, and take steps to support and simplify SME compliance.
The rules are laid out in a Regulation but leave space for Member State action, notably on market surveillance and penalties, as well as on innovation. This may result in competition between Member States to attract investment.
Businesses may find the focus on high-risk uses of AI a welcome development. However, broad definitions and the move to centralise regulation have produced an extensive list of very diverse products subject to similar requirements. It remains an open question how the needs of different sectors will be accommodated within the framework.
Some businesses may find that the proposed rules do not reflect the reality of AI innovation and could introduce sizeable obligations. For example, the Commission proposes a new conformity assessment whenever a change occurs that affects a system’s compliance or its intended purpose. For AI systems that continue ‘learning’ after being put into service, it establishes that changes to the algorithm and its performance must not be ‘substantial’ compared with what was assessed during certification.
There remains a big opportunity for businesses to influence the final shape of the AI Act and the wider EU AI strategy, including cementing the direction of travel towards a more risk-based approach while pushing for more tightly and precisely delimited scope and obligations. The Council and Parliament will develop their respective positions over the course of the year. The Commission is keen to maintain the momentum, but the final law is only expected in 2023. This increases the importance of engaging with key stakeholders early in the process.
Andrea Klaric is a Manager at Flint and supports clients with EU and UK political and policy issues across a range of sectors, with a focus on digital and tech. To find out more about how Flint can help you navigate the risks and opportunities of these developments, get in touch.