Combatting child sexual abuse imagery: a proposed exemption to the EU's ePrivacy rules


The European Commission has published a proposal for a regulation which, if passed, would provide a time-limited and tightly constrained exemption from the rules set out in the ePrivacy framework, which limit the ways in which service providers can process communications data.

What's this about?

The rationale for the proposal is that, when the new European Electronic Communications Code comes into force on 21 December 2020, some service providers' attempts to detect and block the transmission of child sex abuse imagery may no longer comply with the rules relating to communications privacy.

To help them continue to do this, the Regulation would give providers a narrow and specific exemption from the general privacy safeguards afforded by the ePrivacy rules.

Only to providers of "number-independent interpersonal communications services"

The proposed exemption applies only to providers of "number-independent interpersonal communications services".

These are services which do not make use of numbers from national telephone numbering plans — for example, email and many over-the-top communications services.

What's the scope of the derogation?

Very, very narrow (and rightly so).

The list of conditions which providers must meet is significant, and the ones which caught my eye are these:

First, the processing must be "proportionate" (fair enough). However, it must also be:

"limited to well-established technologies regularly used by providers of number-independent interpersonal communications services for that purpose before the entry into force of this Regulation".

In other words, here we have a Regulation which by law forces providers to stick with the status quo. They can't come up with better ways of doing things.

This seems odd to me, especially since the Regulation would have a five-year life span, and limiting providers for five years to the tech which exists in 2019/2020 seems of questionable value.

Second, each provider must:

annually [publish] a report on its related processing, including on the type and volumes of data processed, number of cases identified, measures applied to select and improve key indicators, numbers and ratios of errors (false positives) of the different technologies deployed, measures applied to limit the error rate and the error rate achieved, the retention policy and the data protection safeguards applied

This is a good thing: if providers are going to scan and monitor communications, being transparent about what they are doing, and how they are doing it, is important.
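
To make the reporting requirement concrete, here is a minimal sketch of what such a report might look like as a data structure, including one of the required figures (a false positive ratio). This is purely illustrative: the field names and figures are my own invention, and the Regulation prescribes the content of the report, not any particular format.

```python
from dataclasses import dataclass

# A purely illustrative sketch of the annual report the Regulation would
# require. Field names are hypothetical; the Regulation prescribes the
# content of the report, not a format.
@dataclass
class TransparencyReport:
    data_types_processed: list[str]  # "type and volumes of data processed"
    messages_scanned: int
    cases_identified: int            # "number of cases identified"
    false_positives: int             # "numbers and ratios of errors"
    retention_policy: str            # "the retention policy ... applied"

    @property
    def false_positive_ratio(self) -> float:
        """Share of flagged items which turned out to be errors."""
        flagged = self.cases_identified + self.false_positives
        return self.false_positives / flagged if flagged else 0.0


# Hypothetical figures, for illustration only.
report = TransparencyReport(
    data_types_processed=["image attachments"],
    messages_scanned=1_000_000,
    cases_identified=120,
    false_positives=30,
    retention_policy="hashes retained for 90 days",
)
print(f"False positive ratio: {report.false_positive_ratio:.0%}")  # 20%
```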

Can a provider scan everything?

Seemingly not.

According to the recitals — the bits which set out the context of an EU legal instrument, and which are not, in themselves, binding — the systems used:

should not include systematic filtering and scanning of communications containing text but only look into specific communications in case of concrete elements of suspicion of child sexual abuse.

This does not appear as a specific requirement in Article 3 itself, and the term "concrete elements of suspicion" hardly seems legally precise. But, if I understand it correctly, it means that a provider must have good evidence of the transmission of child sex abuse imagery before it applies the filter.

From a privacy of communications point of view, preventing scanning of all communications is likely to be welcomed. Practically, I am not sure how this will work.

I also note that a recent leaked draft of a report on tackling dissemination of child sex abuse imagery via end-to-end encrypted communications services, prepared for the European Commission, includes options which look to me like scanning of all communications (by hashing them and sending the hash to a centralised database for analysis). That doesn't feel like "specific communications in case of concrete elements of suspicion" to me.
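
For illustration, here is a minimal sketch of how hash-based scanning of this kind might work, assuming a simple exact cryptographic hash match against a database of known material. The hash value and names below are made up, and deployed systems typically use perceptual hashing (such as Microsoft's PhotoDNA) so that resized or re-encoded images still match. But the basic point stands either way: every attachment has to be hashed and checked.

```python
import hashlib

# Hypothetical database of hashes of known child sex abuse images, of the
# kind a centralised matching service might hold. The value below is a
# made-up placeholder, not a real hash.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def should_block(attachment: bytes) -> bool:
    """Hash a single attachment and check it against the database.

    Note that this check has to run against every attachment sent via
    the service, which is what makes it look like scanning of all
    communications rather than only "specific communications in case
    of concrete elements of suspicion".
    """
    digest = hashlib.sha256(attachment).hexdigest()
    return digest in KNOWN_IMAGE_HASHES
```

Comparing hashes rather than raw content limits what the database operator sees, but it does not change the fact that every communication is inspected.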

Is this a permanent thing?

No. The proposal is that it will apply "only" until 31 December 2025.

Which, in terms of the life of online services, is basically forever.

Why does it keep referring to "child pornography"? Isn't that a greatly disliked term?

Yes, it is greatly disliked and, IMHO, rightly so. We're not talking here about pornography, but heinous depictions of the sexual abuse of children.

The reason for referring to it is, sadly, legislative history. "Child pornography" is the term used in Directive 2011/93/EU, and so it continues to be used here for consistency.

