New EU rules would require chat apps to scan private messages for child abuse

The European Commission has proposed a controversial new regulation that would require chat apps like WhatsApp and Facebook Messenger to selectively scan users’ private messages for child sexual abuse material (CSAM) and “grooming” behavior. The proposal is similar to plans mooted by Apple last year but, say critics, much more invasive.

After a draft of the regulation leaked earlier this week, privacy experts condemned it in the strongest terms. “This document is the most terrifying thing I’ve ever seen,” tweeted cryptography professor Matthew Green. “It describes the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR. Not an exaggeration.”

Jan Penfrat of digital advocacy group European Digital Rights (EDRi) echoed the concern, saying, “This looks like a shameful general #surveillance law entirely unfitting for any free democracy.” (A comparison of the PDFs shows differences between the leaked draft and final proposal are cosmetic only.)

The regulation would establish a number of new obligations for “online service providers” — a broad category that includes app stores, hosting companies, and any provider of “interpersonal communications service.”

The most extreme obligations would apply to communications services like WhatsApp, Signal, and Facebook Messenger. If a company in this group receives a “detection order” from the EU, it would be required to scan select users’ messages to look for known child sexual abuse material as well as previously unseen CSAM and any messages that may constitute “grooming” or the “solicitation of children.” These last two categories of content would require the use of machine vision tools and AI systems to analyze the context of pictures and text messages.

(In contrast, Apple’s proposal last year to scan messages to find child abuse material would only have looked for known examples of CSAM, which limits the scope for error. After facing widespread criticism that its proposal would damage the privacy of users, Apple removed references to the feature from its site and indefinitely postponed its rollout.)

“Detection orders” would be issued by individual EU nations, and the Commission claims these would be “targeted and specified” to reduce privacy infringements. However, the regulation is not clear about how these orders would be targeted — whether they would be limited to individuals and groups, for example, or applied to much broader categories.

Critics of the regulation say such detection orders could be used in a broad and invasive fashion to target large swaths of users. “The proposal creates the possibility for [the orders] to be targeted but doesn’t require it,” Ella Jakubowska, a policy advisor at EDRi, told The Verge. “It completely leaves the door open for much more generalized surveillance.”

Privacy experts say the proposal could also seriously undermine (and perhaps even break) end-to-end encryption. The proposal does not explicitly call for an end to encrypted services, but experts say that requiring companies to install in their systems any software the EU deems necessary to detect CSAM would make robust end-to-end encryption effectively impossible. Because of the EU’s influence on digital policy elsewhere in the world, these same measures could also spread around the globe, including to authoritarian states.

“There’s no way to do what the EU proposal seeks to do, other than for governments to read and scan user messages on a massive scale,” Joe Mullin, senior policy analyst at the digital rights group Electronic Frontier Foundation, told CNBC. “If it becomes law, the proposal would be a disaster for user privacy not just in the EU but throughout the world.”

In addition to problems with encryption, the Commission’s decision to target previously unknown examples of CSAM as well as “grooming” behavior has also been criticized. Finding this content would require the use of algorithmic scanners, which the Commission says would preserve the anonymity of targeted users. But experts say such tools are prone to error and would lead to innocent individuals being surveilled by their government.

“There was uproar when Apple was suggesting something similar for finding known [CSAM] content. But if you introduce ambiguity and these context-dependent scenarios, in which AI-based tools [are used] which are notoriously unreliable, the challenges are much greater,” said EDRi’s Jakubowska. “You only have to look at how dodgy spam filters are. They’ve been around in our email for 20 years, but how many of us still get spam in our inboxes and miss legitimate emails? That really shows the limitation of these technologies.”

Said Jakubowska, “This whole proposal is based around mandating technically infeasible — if not impossible — things.”




