The EU Artificial Intelligence Act: what does it mean for the future of AI?

In April 2021, the European Commission proposed the Artificial Intelligence Act: the first of its kind in the world, it would establish new legal standards for AI technologies.

AI is being widely developed and adopted to improve operations in a range of sectors, including healthcare, transport, education, and employment. However, without firm regulation in place, AI has the potential to facilitate mass surveillance, compromise fundamental freedoms such as freedom of movement and expression, and exploit certain demographics.

The Act aims to create a framework that ensures trustworthy AI and strengthens the digital ecosystem in the EU.

“The EU now has the unique chance to promote a human-centric and trustworthy approach to AI. One that is based on fundamental rights, which manages risks while taking full advantage of the benefits AI can bring for the whole of society. We need a legal framework that leaves space for innovation, and a harmonised digital single market with clear standards.”
— Axel Voss, MEP (1)

The AI Act falls within the wider context of the European Commission’s digital transformation agenda, which includes the Digital Services Act, the Digital Markets Act, and the Data Governance Act. This agenda aims to reform and update the digital single market - i.e. unifying the rules and regulations surrounding digital rights across EU member states and ensuring users can access online services across the EU - with an emphasis on safety, fundamental rights, data protection, and liability (2,3,4).

Among other things, the Act proposes a new European Artificial Intelligence Board, which would be responsible for issuing opinions, recommendations, and guidance on the Act, and for ensuring it is properly implemented.


What does the Act propose?

The Act takes a risk-based approach to regulating AI, categorising systems into tiers of risk (a structure sketched in code after this list):

  1. AI systems considered to pose an “unacceptable risk” to people will be banned.

    These “unacceptable risk” systems include those which manipulate people through subliminal techniques (methods not consciously perceived by individuals) or exploit the vulnerabilities of specific groups, such as children and people with disabilities, in ways that cause or are likely to cause them or others psychological or physical harm (5).

    For example, such harm could include exacerbating inequality for marginalised groups, or preying on people’s vulnerabilities for commercial gain.

    Likewise, AI systems that facilitate social scoring by public authorities, and the use of real-time biometric identification (i.e. live facial recognition) in public spaces by law enforcement, also face a ban (unless certain limited exceptions apply).

  2. High-risk AI systems are those which can pose a risk to fundamental rights protected in the EU, including the rights to human dignity, privacy, protection of personal data, freedom of expression, workers’ rights, freedom of assembly, non-discrimination, and a fair trial. They fall within the following areas:

    • Biometric identification and categorisation of natural persons 

    • Management and operation of critical infrastructure  

    • Education and vocational training 

    • Employment, workers’ management, and access to self-employment

    • Access to and enjoyment of essential private services and public services and benefits 

    • Law enforcement

    • Migration, asylum, and border control management

    • Administration of justice and democratic processes

    These systems will be subject to conformity assessments, including reassessment whenever a change occurs that may affect the system’s compliance with the Act or alter its intended purpose. Providers also face obligations around dataset quality, record keeping, transparency, human oversight, robustness, accuracy, and cybersecurity.

  3. Low-risk AI systems will face only limited transparency obligations: people must be informed when these systems are used so they can make informed choices. The heaviest regulatory burdens fall on high-risk systems, with obligations including assessments before they are sold or put into use, and post-market monitoring by every provider (6).
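
To make the tiered structure concrete, here is a minimal illustrative sketch in Python. The RiskTier enum and OBLIGATIONS mapping are hypothetical names invented for this example - nothing in the Act or any real library defines them - and the obligation lists simply paraphrase the tiers described above.

```python
from enum import Enum

# Hypothetical model of the Act's risk tiers; the names are illustrative,
# not terms defined by the Act itself.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # e.g. law enforcement, critical infrastructure
    LOW = "low"                    # limited transparency obligations

# Rough mapping from each tier to the headline obligations described above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "conformity assessment before sale or use",
        "dataset quality, record keeping, transparency",
        "human oversight, robustness, accuracy, cybersecurity",
        "post-market monitoring",
    ],
    RiskTier.LOW: ["inform people that an AI system is in use"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LOW))
```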


What are some of the concerns with the Act?

As the draft of the Act has circulated over the past year, it has of course drawn heavy scrutiny and concern from industry, regulators, and legal experts.

One major concern is whether the Act will place an unnecessary financial and regulatory burden on SMEs and start-ups, which will face significant costs to achieve compliance. As drafted, compliance will be monitored by national regulators, and companies found to be in serious breach face fines of up to €20 million or 4% of their annual global turnover, whichever is higher (7).
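
As a quick illustration of how the “whichever is higher” clause works, here is a minimal sketch; the function name and the example turnover figure are hypothetical, purely for illustration.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound of the fine under the draft Act: the greater of a flat
    EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A company turning over EUR 1 billion could face up to EUR 40 million,
# since 4% of its turnover exceeds the flat EUR 20 million figure.
print(max_fine_eur(1_000_000_000))  # 40000000.0
```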

There are also questions about where responsibility lies: the Act places much of the burden on system providers, but has been criticised for not properly addressing the responsibility of those who operate AI systems, i.e. users. As a result, the chain of responsibility for these systems is only partially addressed.

Some believe the Act needs to be further refined for greater legal certainty: the current proposed definition of AI is considered too broad and could therefore lead to overregulation.

Furthermore, some of the proposed requirements would be hard to implement, particularly without harmonised standards across the EU (8). As with all legislation of this kind, there are concerns that it will become outdated quickly and fail to keep pace with technological advancement.

While these critiques are valid and worth addressing, the Act is still going through the legislative process, where it will be scrutinised and amended accordingly.


What stage of progress are we at with the Act?

This May, the European Parliament adopted the recommendations of the Special Committee on Artificial Intelligence in a Digital Age (AIDA), which argue that public debate on AI should focus on its potential to benefit society - particularly in health, fighting climate change, and enhancing quality of life - and that AI should be regulated in proportion to the risk posed by a particular use of a system (9).

As things stand, the AI Act is undergoing scrutiny by the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees, and is expected to face a parliamentary vote in September this year.


If written and implemented properly, the Act has the potential to strike the right balance: enforcing liability for high-risk systems and protecting the rights and freedoms of EU citizens, while not unnecessarily hampering innovation or stifling the benefits AI may bring.

The EU taking the lead here is welcome; like the GDPR, the Act may prove to be a legal benchmark and standard across the world.

