AI Act: The European Parliament votes for safer AI and an end to Mass Surveillance

This week, the European Parliament adopted its position on the Artificial Intelligence Act (AI Act). As Renew’s lead negotiator on the AI Act in the JURI committee, I fought hard to protect citizens’ fundamental rights and give businesses the space and certainty to innovate. I’m very glad to announce that many of the things I fought for have made it into Parliament’s final position. Let’s take a look at what we achieved!

The JURI Opinion

The Legal Affairs Committee (JURI), where I was lead negotiator, finished its position on the Artificial Intelligence Act in September of last year. Since then I have been working to ensure the brilliant work done in our committee makes it into the final report. I’m glad to announce that almost all of the key points from our report made it into Parliament’s position.

A better deal for Open Source developers

From reducing fertilizer use and diagnosing illnesses to detecting wildfires and planning our time, AI has the potential to transform the European economy, helping citizens work smarter, not harder. But it will only do so if we leave space for companies to innovate.

Open Source software has been a motor of innovation in the digital sector, and it will continue to be so with Artificial Intelligence. But open source developers shouldn’t have to bear the burden of compliance, especially when they are enthusiasts or individuals. Our changes mean Open Source developers will only have to comply with the regulation when they provide their AI as a service or offer commercial support for their code. They also guarantee that when third parties use code outside one of these agreements, that third party becomes responsible for compliance.

Room to innovate

AI is complicated, but complying with EU law shouldn’t be. In many areas, the Commission’s proposal simply wasn’t clear enough, and would have created uncertainty for businesses. By defining central AI concepts, such as General Purpose AI, our report gives businesses clarity: they know what rules apply to them and how they need to comply.

Another problem was paperwork: some AI systems will already be regulated by other EU laws, like the GDPR. To save businesses time, we combined these obligations, so companies don’t get déjà vu from repeat reporting.

We also added a partial exemption for research applications, so long as those applications don’t pose a risk to the fundamental rights or socio-economic well-being of citizens.

A right to an explanation and to recourse

While innovation is vital, protecting citizens was and always will be my highest priority. The AI Act will be the world’s first transversal law on Artificial Intelligence, and it will likely inspire similar legislation around the world. The choices we make will define how AI is used, and whether it benefits or harms citizens.

That’s why I fought hard to create new rights for citizens: a right to an explanation of AI-powered decisions, a right to be informed about the use of AI, and a right to complain and seek recourse, either as an individual or as a group.

Fighting for rights beyond the JURI report

The rights we introduced in the JURI report are a step in the right direction, but I wanted to go further. Ever since the AI Act was announced, I have been warning against dangerous applications of AI, such as Orwellian biometric mass surveillance and facial recognition databases, which would track us everywhere we go; intrusive and pseudo-scientific emotion recognition and AI-powered lie detectors; and dangerous, discriminatory biometric categorisation, social scoring and predictive policing. I also tabled amendments to ban these practices. In this week’s vote, MEPs voted in favour of these bans.

A full ban on Biometric Mass Surveillance

The Commission’s original proposal did contain a ban on biometric mass surveillance, but it was full of loopholes that let governments do as they please. In this week’s historic vote, we closed these loopholes, preventing governments from spying on their citizens!

An end to pseudo-scientific emotional recognition

For years, research has been underway to develop tools that claim to allow AI to “read minds”, detect emotions, and even tell whether a person is lying. The EU-funded iBorderCtrl project intended to take these technologies and test them on migrants at the EU’s external border, but it faced fury from the European Parliament as well as a case at the European Court of Justice. The project, which has since been labeled “pseudoscientific” and “Orwellian” by academics in forensic psychology and data science, shut down in 2018, but the idea never died. Today’s decision in Parliament ensures citizens never face surveillance from pseudoscientific “emotion recognition”.

Canceling the remake of Minority Report: a ban on predictive policing!

The original proposal from the European Commission allowed the use of AI to predict whether a citizen is likely to commit a criminal offence. This technology, which is already in use in the United States, has been shown to produce inaccurate and discriminatory predictions on the basis of skin colour. I don’t want to live in an Orwellian European remake of Minority Report, which is why I’m glad that Parliament’s final position is that predictive policing doesn’t belong in Europe!

What’s missing?

Still, in spite of all the progress, not everything is perfect: there are some practices I would have liked to see prohibited that weren’t, namely Orwellian pseudoscientific behavioural recognition technologies and heartless automated migration decision-making. I co-signed amendments to ban these practices, but unfortunately they did not reach the majority required to pass. You can find out more about the amendments on biometric recognition and automated migration decision-making here.

What’s next?

The next step is trilogue negotiations, where the European Parliament (representing the European people) and the Council of the European Union (representing the governments of each EU country) have to agree on a final version of the AI Act. These negotiations won’t be easy, particularly when it comes to biometric mass surveillance, but my colleagues have my full backing and support in the negotiations to come.
