ChatControl is illegal, so what can we do to protect kids?
We don’t have to spy on our children to keep them safe! Digital filters and content flagging are not magic wands that make our societies’ problems disappear. Instead of a paternalistic approach built on flawed technology that won’t protect kids but will make it harder to catch abusers, let us use technology to empower our children to identify risks and reach out to trusted adults. Let us build an internet that lets kids explore independently while learning about risks, in an environment built to offer them safety, agency, and privacy.
Introduction
Last week, the European Parliamentary Research Service’s Impact Assessment on the CSAM regulation (ChatControl) was leaked. Unsurprisingly, it is highly critical of ChatControl, especially when it comes to scanning for unknown CSAM and grooming. The report also echoes the concerns I raised in my whitepaper on the technical flaws of ChatControl, and how these flaws will make it harder for law enforcement to catch abusers.
It concludes that the Commission’s proposal would violate the EU Charter of Fundamental Rights, and that the protection the proposal could provide does not justify that violation. As of today, there is no denying that ChatControl would be illegal. And yet, in spite of this, many – including the Commission – will double down, claiming there is no other way to prevent abuse. We must avoid the paternalistic approach of using a flawed technology that won’t protect kids and will make it harder to catch abusers.
That doesn’t mean that technology is without merit, but we need to look at how and when we use it. In this article, I want to share my ideas for an alternative to ChatControl that lets kids explore the internet independently while learning about risks, in an environment built to offer them agency, privacy, and safety.
Fighting unknown CSAM
Today, there are two primary categories of new CSAM: CSAM created by abusers, and self-generated CSAM, which is either created by victims under coercion, or leaked, stolen or re-shared after consensual sharing.
Unfortunately, much of the new CSAM created by abusers initially circulates only on the darknet, where it is impossible to detect, so even if we did introduce ChatControl, a lot of abuse could go unnoticed for a long time. The only real solution to this problem is education: we need to empower young people to identify abuse early by introducing key concepts like consent into education.
Self-generated CSAM has grown rapidly since the pandemic and now represents about half of all CSAM. It is easier to combat, and although there are many technological solutions that could help, the Commission chose the most intrusive one: mass surveillance.
The Commission’s approach
The Commission’s proposal to use artificial intelligence to detect abuse material would massively infringe on citizens’ privacy and make it harder to catch abusers. AI cannot tell the difference between consensual nudes sent between a teenage couple and abuse material, so law enforcement would have to look at, investigate and punish every consensual nude shared between teens, as sexting between teens is illegal in 21 of the 27 member states. According to a recent poll, 1 in 3 European teenagers have sent nudes, meaning there would be several million false positives every year. In addition to being an enormous waste of police resources, leaving fewer resources to investigate actual abuse, this is also stigmatising, humiliating and intrusive for teenagers discovering their sexuality.
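To get a feel for the scale, here is a rough back-of-envelope calculation. Every number in it is an assumption for illustration only: the teenager count and per-teen image count are not figures from this article or any official source, and only the “1 in 3” share comes from the poll mentioned above.

```kotlin
// Illustrative back-of-envelope only – all inputs below are assumptions.
fun main() {
    val euTeenagers = 25_000_000.0      // assumed order of magnitude for 13–17-year-olds in the EU
    val shareWhoSext = 1.0 / 3.0        // the "1 in 3" polling figure cited above
    val imagesPerTeenPerYear = 1.0      // assumed: at least one intimate image per year

    // If the scanner cannot distinguish consensual teen nudes from abuse material,
    // every one of these images becomes a report for police to sift through.
    val falsePositives = euTeenagers * shareWhoSext * imagesPerTeenPerYear
    println("Consensual images flagged per year: %,.0f".format(falsePositives))
    // ≈ 8,300,000 – several million false positives per year, before even
    // counting genuine model errors on entirely unrelated images.
}
```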
So what is the alternative? Rather than stigmatising teens who share their nudes by calling the police on them, we can use nudity detection to help teens make informed decisions about sharing intimate pictures, while providing tools to help them when things go wrong, including tools to prevent their pictures from ending up on social media.
Helping teenagers make better decisions about sharing intimate images
In this demo, for users under 18, the chat app checks outgoing images for nudity. This check is conducted entirely on the device, and does not send any data or reports.
Instead, it explains the risks of sharing, helping them to make informed decisions. It also makes it harder for the recipient to share the image by preventing forwarding, saving and screenshots. While this isn’t foolproof, it does reduce the risk of teenagers’ nudes ending up online.
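As a rough illustration of how such a check could be wired into a chat app, here is a minimal sketch. The classifier interface, the threshold and the send options are all hypothetical, not an existing API; the important properties are that the check runs locally, nothing is reported, and the result only changes what the sender sees and how the message can be re-shared.

```kotlin
// Hypothetical sketch only: every name here is an assumption.
// The check runs on the device and never produces a report.

data class SendOptions(
    val allowForwarding: Boolean = true,
    val allowSaving: Boolean = true,
    val allowScreenshots: Boolean = true,
)

interface NudityClassifier {
    /** Returns a score in 0.0..1.0; assumed to run entirely on the device. */
    fun nudityScore(imageBytes: ByteArray): Double
}

sealed class SendDecision {
    /** Send normally, nothing shown to the user. */
    data class Send(val options: SendOptions) : SendDecision()
    /** Explain the risks first; if the teen still sends, restrict re-sharing. */
    data class WarnFirst(val restrictedOptions: SendOptions) : SendDecision()
}

class OutgoingImageGuard(
    private val classifier: NudityClassifier,
    private val threshold: Double = 0.8, // assumed cut-off, would be tuned per model
) {
    fun evaluate(senderIsMinor: Boolean, imageBytes: ByteArray): SendDecision {
        // Adults are never scanned; the check only assists users under 18.
        if (!senderIsMinor) return SendDecision.Send(SendOptions())

        val score = classifier.nudityScore(imageBytes)
        if (score < threshold) return SendDecision.Send(SendOptions())

        // Likely an intimate image: show the risks to the sender and, if they
        // still choose to send, disable forwarding, saving and screenshots on
        // the recipient side. Nothing is sent to a server or to the police.
        return SendDecision.WarnFirst(
            SendOptions(
                allowForwarding = false,
                allowSaving = false,
                allowScreenshots = false,
            )
        )
    }
}
```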
Finally, if something goes wrong, the app provides simple options, including reaching out to helplines and hotlines, or submitting the image fingerprint to a takedown service. We hope to introduce an EU-wide service similar to the US Take It Down service, which would prevent the resharing of intimate images on participating social media sites.
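The key property of a fingerprint-based takedown service is that only a hash ever leaves the device, never the image itself. The sketch below illustrates that idea; the endpoint, the payload format, and the use of SHA-256 (a real service would use a perceptual hash that survives resizing and re-encoding) are all assumptions.

```kotlin
// Hypothetical sketch in the spirit of the US "Take It Down" service:
// only a fingerprint of the image leaves the device, never the image.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.security.MessageDigest

object TakedownClient {
    // Placeholder endpoint for an assumed EU-wide takedown service.
    private const val ENDPOINT = "https://takedown.example.eu/api/fingerprints"

    /** Compute the fingerprint locally, on the device. SHA-256 stands in
     *  for the perceptual hash a real service would use. */
    fun fingerprint(imageBytes: ByteArray): String =
        MessageDigest.getInstance("SHA-256")
            .digest(imageBytes)
            .joinToString("") { "%02x".format(it) }

    /** Submit only the fingerprint, so participating platforms can block
     *  re-uploads of the matching image. Returns the HTTP status code. */
    fun submit(imageBytes: ByteArray): Int {
        val body = """{"fingerprint": "${fingerprint(imageBytes)}"}"""
        val request = HttpRequest.newBuilder(URI.create(ENDPOINT))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build()
        return HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString())
            .statusCode()
    }
}
```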
Fighting Grooming
Grooming is another area where the Commission proposed extremely intrusive detection measures, which would effectively result in the scanning of all our personal messages. This sort of scanning would be even less reliable than AI scanning for new CSAM, and would result in massive numbers of false positives. Even organisations that are highly supportive of the Commission’s proposals have expressed concerns about the risks of generalised scanning, in particular for the LGBTQI+ community.
We didn’t have to wait long for an example of this to appear. In our first FEMM Committee meeting on the proposal, a French far-right MEP, Annika Bruna, unexpectedly took the floor to express her support for the proposal, but her comments slowly descended into something sinister: according to Ms Bruna, action also needs to be taken against drag queens, who, in her mind, are “intimately linked to child abuse”. This intervention proves precisely why plans to scan our communications set a dangerous precedent: today, the Commission is pitching mass surveillance as a solution to child abuse; tomorrow, if the Annika Brunas and Viktor Orbáns of Europe get their way, it will be used to silence the LGBTQI+ community.
Again, rather than using grooming detection to flag and report, thereby overloading law enforcement, we could use the technology to warn kids about suspicious messages so they can identify and move away from danger.
Helping young people avoid the dangers of grooming online
In this demo, for users under 18, the chat app checks the conversation for common patterns relating to grooming. This check is conducted entirely on the device, and does not send any data or reports.
Instead, it warns the young person when it notices something suspicious, helping them to identify and avoid danger, and giving them options to report the conversation or get a second opinion if they are unsure.
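Here is a minimal sketch of that flow, under the same assumptions as the earlier examples: the detector interface, the signals and the prompt are hypothetical, the assessment runs entirely on the device, and the only output is a warning that leaves the decision with the young person.

```kotlin
// Hypothetical sketch: nothing is reported automatically; the result is
// only ever shown to the young person, on the device.

data class ChatMessage(val fromContact: String, val text: String)

enum class GroomingSignal { NONE, SUSPICIOUS }

interface GroomingDetector {
    /** Scores the recent conversation locally for common grooming patterns,
     *  e.g. requests for secrecy, pressure to move platforms, requests for images. */
    fun assess(conversation: List<ChatMessage>): GroomingSignal
}

sealed class Prompt {
    object None : Prompt()
    /** In-app warning with options the teen controls. */
    data class Warn(
        val explanation: String,
        val canBlockContact: Boolean = true,
        val canReportConversation: Boolean = true,
        val canAskTrustedAdultOrHelpline: Boolean = true,
    ) : Prompt()
}

class ConversationGuard(private val detector: GroomingDetector) {
    fun onNewMessage(userIsMinor: Boolean, conversation: List<ChatMessage>): Prompt {
        if (!userIsMinor) return Prompt.None // adults are never scanned
        return when (detector.assess(conversation)) {
            GroomingSignal.NONE -> Prompt.None
            GroomingSignal.SUSPICIOUS -> Prompt.Warn(
                explanation = "Some messages in this chat look like patterns often " +
                    "used to pressure young people. You decide what happens next."
            )
        }
    }
}
```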
One of our other key goals is to build a relationship of trust between parents and children, but this requires change from both parents and children: far too often, a parent’s reaction is to punish the child by taking away their access to the app. We believe apps should offer guidance to parents and help them talk about online risks with their children.
Conclusion
These proposals are just some initial ideas on alternatives to ChatControl’s mass surveillance: we don’t have to spy on our children to keep them safe. Instead, we can use technology to empower our children to identify risks and to reach out to trusted adults.
Many adults today have been victims of online fraud, been lured into assaults, or have had intimate photos leaked. Instead of isolating young people from the world and surveilling them, this approach helps them safely confront the risks they are exposed to, educating the adults of the future to be more aware online.