The EU AI Act – the reality is better than many feared


The European Union’s Artificial Intelligence Act came into force this month, with much of the text addressing high-risk AI systems deployed within the EU, whether the provider is based in the EU or in a third country such as the US.

The EU explains:

AI systems are considered high-risk whenever they profile individuals, i.e. when personal data is automatically processed to evaluate various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behavior, location or movement.

Under the new law, developers of all high-risk systems must: establish a risk management system; implement data management; prepare technical documentation to demonstrate compliance; design the AI to enable record-keeping and achieve appropriate levels of accuracy, robustness and cybersecurity; establish a quality management system to ensure compliance; and design their systems to allow operators to implement human oversight.
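To make the record-keeping and human-oversight duties a little more concrete, here is a minimal, purely illustrative sketch in Python – all names, thresholds and the scoring function are hypothetical, and the Act prescribes no particular implementation – of how a provider might log every prediction and route low-confidence cases to a human reviewer:

    import json
    import logging
    from datetime import datetime, timezone

    # Illustrative only: wrap a scoring function so every call is logged
    # (record-keeping) and low-confidence results are flagged for a person
    # to review (human oversight). Names and thresholds are hypothetical.
    logging.basicConfig(filename="predictions.log", level=logging.INFO)

    REVIEW_THRESHOLD = 0.7  # below this confidence, defer to a human reviewer

    def predict_with_audit_trail(predict_fn, model_version, features):
        score = float(predict_fn(features))  # assumed to return a probability 0..1
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input": features,
            "score": score,
            "needs_human_review": score < REVIEW_THRESHOLD,
        }
        logging.info(json.dumps(record))  # durable entry an auditor could inspect
        return record

    # Toy usage with a stand-in scoring function
    result = predict_with_audit_trail(lambda f: 0.42, "v1.3", {"age": 54, "tenure": 7})
    print(result["needs_human_review"])  # True -> route the decision to a person

The point is not the specific code but the pattern: decisions are traceable after the fact, and a human can intervene before a low-confidence output affects someone.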

Meanwhile, developers and operators of “lower-risk AI systems” must ensure that users are aware that they are interacting with an AI (e.g. chatbots and deepfakes). This is a sensible regulation, intended to prevent the deliberate deception and manipulation of end users.

Importantly, all providers of general-purpose AI (GPAI) models must comply with the EU Copyright Directive and publish a summary of the content used for training.

IP-centric companies will welcome this provision, as unlicensed scraping of copyrighted data to train large language models and generative systems is the subject of ongoing litigation in the US.

These practices were also denounced by the UK House of Lords Communications and Digital Committee in its LLM inquiry last year and in its subsequent report this year (see diginomica, passim). Unfortunately, the government has sat on the fence on the issue, the committee found, so the EU law may prompt UK politicians to look at it again.

Protecting citizens

In other areas, the law prioritizes the protection of citizens. The following types of AI systems are explicitly prohibited:

  • Those that use subliminal, manipulative or deceptive techniques to distort behaviour and impair informed decision-making, thereby causing “significant harm”.

– This is an interesting provision as it could potentially be applied to AI-powered social platforms where algorithms could encourage divisive content or push people towards fake content or scams. I expect there will be test cases against companies like Meta Platforms and X Corp to see if these clauses can be used in this way.

– This broad definition also includes risks such as ‘voice-controlled toys that encourage dangerous behaviour in children’.

  • Those that exploit vulnerabilities related to age, disability or socioeconomic circumstances to distort behaviour and thus cause “significant harm”.
  • Biometric categorisation systems that infer characteristics such as race, political opinions, trade union membership, religious beliefs or sexual orientation from a person’s appearance – inferences that can lead to someone being treated inaccurately, unfairly or with bias. Such a system could draw completely wrong conclusions about a person based on biased human assumptions.
  • Social scoring, such as the evaluation or classification of individuals or groups based on their social behaviour or personal characteristics, which may result in disadvantageous or unfavourable treatment of some compared to others.
  • Systems that assess the risk of a person committing a crime based solely on profiling or personality traits. Exceptions apply, however, when AI is used to supplement human assessments based on objective, verifiable facts directly related to criminal activity.

– This provision is timely: some countries have announced plans to use AI to predict crime – Argentina did so only last week – risking the exposure of individuals or groups to erroneous assessments based on biased human data.

– So, in essence, the EU wants to prevent the adoption of the concept of so-called ‘thoughtcrime’: the prediction that a crime will be committed by someone who looks similar to a criminal or suspect.

– An early example of automated bias in the criminal justice system was COMPAS, a recidivism-prediction algorithm used in some US courtrooms to inform sentencing and parole decisions. A 2016 ProPublica investigation found that COMPAS was far more likely to wrongly flag black defendants as future reoffenders than white ones – a disparity rooted in historical systemic bias in parts of the US legal system, and one that risked harsher treatment of black Americans (a sketch of the kind of error-rate comparison behind that finding appears after this list).

  • Systems that create facial recognition databases by extracting facial images from the Internet or from video surveillance recordings.
  • Those that infer emotions in the workplace or in educational settings, except for medical or safety reasons.

– This is also a reasonable provision: mood- and emotion-reading systems are crude and rarely have sufficiently diverse training data, so they can produce wildly inaccurate results for some individuals, for neurodiverse people, and for workers from different racial or cultural backgrounds.

  • Real-time remote biometric identification – for example through facial recognition – in public spaces for law enforcement purposes, except when searching for missing persons, kidnapping victims and other vulnerable people or victims of human trafficking.

– Study after study has found that facial recognition produces less accurate results for women and for black and other ethnic-minority people, creating a high risk of misidentification. These problems are rooted in a lack of diversity in the training data – systems trained largely on white men rather than other groups – and, in some cases, in imaging systems whose sensors have been calibrated for white skin tones.

  • However, the EU states that this must be balanced against the need to prevent a significant and imminent threat to life or a terrorist attack, or to locate suspects of serious crimes.
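As flagged in the COMPAS note above, the disparity ProPublica documented is essentially a gap in error rates between groups. Below is a minimal, illustrative sketch in Python – the data is invented, not ProPublica’s – of the kind of false-positive-rate comparison involved: how often people who did not go on to reoffend were nonetheless flagged as high risk, broken down by group.

    # Illustrative only: false positive rate by group, i.e. the share of
    # non-reoffenders who were wrongly flagged as high risk. Data is made up.
    records = [
        # (group, flagged_high_risk, actually_reoffended)
        ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
        ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
    ]

    def false_positive_rate(rows):
        non_reoffenders = [r for r in rows if not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

    for group in ("A", "B"):
        group_rows = [r for r in records if r[0] == group]
        print(group, round(false_positive_rate(group_rows), 2))  # A: 0.67, B: 0.33

A large gap of this kind between groups is exactly the sort of discriminatory outcome the Act’s prohibitions are aimed at.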

Big goals

Overall, the European Parliament says its aim is to ensure that AI systems deployed in the EU are safe, transparent, accountable, non-discriminatory and environmentally friendly. AI systems should be overseen by people rather than by automation, it continues.

In response to the fears of some developers that regulations would hinder innovation, the EU states:

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to create better conditions for the development and use of this innovative technology. AI can bring many benefits, such as better healthcare, safer and cleaner transport, more efficient production and cheaper and more sustainable energy.

As with the General Data Protection Regulation (GDPR) – which, while flawed, has become the gold standard for preventing the misuse of citizens’ data – the fines are steep. The law provides for fines of up to seven percent of annual global turnover for breaches involving prohibited applications, up to three percent for violations of the law’s other obligations, and up to 1.5 percent for supplying incorrect information to regulators.

My take

At first glance, this is a serious step to protect citizens from the worst excesses and dangers of a technology that could have unpredictable social consequences.

In fact, the provisions make it difficult for vendors to claim that the EU simply wants to rein in US companies and innovation in general. The law obliges providers of high-risk systems – and many developers of lower-risk and general-purpose AI – to be diligent and transparent.

Fine-tuning and stress-testing will follow, along with a phased introduction of compliance measures and standards.

By Olivia
