The EU's "terrorist content" regulation and what it means for UK hosting providers

[Screenshot: front page of the Regulation]

The European Parliament has adopted (deemed approved) a regulation addressing the dissemination of terrorist content online. The stated aim of the Regulation is to "address the misuse of hosting services for terrorist purposes and contribute to public security in European societies".

If you provide hosting services (which is defined broadly, and includes social media services, sites with public comments sections, eCommerce sites with free-text review facilities, as well as "hosting providers" in the more typical sense) to people in the EU, you'll want to read this.

It's a bit long and complicated, so do get in touch if you need advice on how it applies to your specific services.

Important: Even if you are in scope, you do not need to comply yet: it will apply 12 months and 20 days after it is published in the Official Journal of the European Union. I'll update this page when the date is crystallised.

It applies to hosting service providers offering services within the EU which disseminate information to the public

The Regulation imposes obligations on hosting service providers offering services within the EU which disseminate information to the public.

"Hosting service providers"

Hosting service providers are defined as:

a provider of information society services consisting in the storage of information provided by and at the request of the content provider

According to the Recitals (the explanatory text at the front of the Regulation):

The concept of "storage" should be understood as holding data in the memory of a physical or virtual server.

This is very broad, but the Recitals then go on to narrow it:

Providers of "mere conduit" or "caching" services as well as of other services provided in other layers of the internet infrastructure, which do not involve such storage, such as registries and registrars as well as providers of domain name systems (DNS), payment or distributed denial of service (DdoS) protection services therefore fall outside the scope of this Regulation.

(The latter bit looking quite a lot like a Cloudflare-shaped exception.)

Email and messaging services are also out of scope:

Interpersonal communication services, as defined in the Directive 2018/1972 establishing the European Electronic Communications Code such as emails or private messaging services, fall outside the scope of this Regulation.

As are providers of "cloud infrastructure":

providers of services such as cloud infrastructure, which are provided at the request of other parties than the content providers and only indirectly benefit the latter, should not be covered by this Regulation

Some services are expressly included:

providers of social media, video, image and audio-sharing, as well as file-sharing and other cloud services, in as far as those services are used to make the stored information available to the public at the direct request of the content provider

Providers of web hosting services would amount to "hosting service providers", as would every blog with a comments section and any eCommerce site which offers a commenting / review facility.

I expect argument / case law as to which services are in scope and which are not.

Offering services within the EU

Hosting service providers are covered by the Regulation if they "offer services within the EU".

The main repercussion of this is extraterritoriality: that providers established outside the EU can still be in scope of the Regulation.

The definition in Article 2 is this:

enabling legal or natural persons in one or more Member States to use the services of the hosting service provider which has a substantial connection to that Member State or Member States. Such a substantial connection shall be deemed to exist where the hosting service provider has an establishment in the Union. In the absence of such an establishment, the assessment of a substantial connection shall be based on specific factual criteria, such as (a) significant number of users in one or more Member States; (b) or targeting of activities towards one or more Member States

Recital 10b says that:

the mere accessibility of a service provider’s website or of an email address or of other contact details in one or more Member States, taken in isolation, should not be a sufficient condition for the application of this Regulation

Establishment of jurisdiction based on "targeting" is not new — for example, organisations outside the EU can still find themselves within the scope of the GDPR if they direct the offering of the provision of the services to people in the EU, or monitor the behaviour of people in the EU. See also the UK's "online harms" proposals.

However, as far as I know, the idea that you can be in scope simply because you have "a significant number of users" in the EU is new.

If your service becomes popular in the EU (and there's no definition or guidance in the Regulation as to what constitutes "significant"), then, even though you may have never intended to offer services to people in the EU, it appears that you can be in scope.

Dissemination to the public

Only hosting services which "disseminate information to the public" are caught.

This is defined as:

the making available of information, at the request of the content provider, to a potentially unlimited number of persons

Recital 10a says that:

The concept of "dissemination to the public" should entail the making available of information to a potentially unlimited number of persons that is, making the information easily accessible to users in general without further action by the content provider being required, irrespective of whether those persons actually access the information in question. Accordingly, where access to information requires registration or admittance to a group of users, it should be considered to be disseminated to the public only where users seeking to access the information are automatically registered or admitted without a human decision or selection of whom to grant access. [my emphasis]

In other words, if you can only access content by means of a registration or admission protocol which requires human interaction (such as a mod approving a request for access), then those services are out of scope of this Regulation.
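
If it helps to see the test laid out mechanically, here is the recital's logic expressed as a small Python predicate. This is entirely my own framing of the test, not anything from the Regulation itself:

```python
# The recital's test, sketched as a predicate: gated content counts as
# "disseminated to the public" only where admission is automatic, with
# no human deciding whom to let in.
def disseminates_to_the_public(requires_admission: bool,
                               admission_is_automatic: bool) -> bool:
    if not requires_admission:
        return True  # openly accessible content is clearly in scope
    return admission_is_automatic

# A forum where a moderator approves each join request:
print(disseminates_to_the_public(True, False))  # False: out of scope
```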

What you have to do if you are in scope

If you are in scope, you have a number of obligations, some more onerous and risky than others.

Establish a point of contact for removal orders

Removal orders are orders from a competent authority (not necessarily, and probably not, a court) to remove, or disable access to, content deemed to be terrorist content. More on the specifics of removal orders below.

You are required to establish a point of contact allowing for the receipt of removal orders and referrals by electronic means and ensure their expeditious processing.

You must make this information publicly available.

This seems relatively easy to do, but it's still another formality / hurdle to trip up the unwary.
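
The Regulation does not prescribe how you publish the point of contact, so presumably anything from a contact page to a machine-readable endpoint would do. By way of illustration only (the path and field names below are my invention, not the Regulation's):

```python
# A minimal sketch of publishing point-of-contact details, using Flask
# purely as an example framework. The /removal-orders-contact path and
# the JSON fields are hypothetical: the Regulation mandates no format.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/removal-orders-contact")
def removal_orders_contact():
    return jsonify({
        "email": "removal-orders@example.com",
        "languages": ["en"],     # language(s) in which you accept orders
        "availability": "24/7",  # the removal deadline runs round the clock
    })
```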

Appoint a legal representative in the EU, if you are not established in the EU

If you are not established in the EU, you need to appoint ("in writing", no less) a legal representative which resides in the EU. This representative will be sent removal orders, and other correspondence.

The designated legal representative can be held liable for non-compliance with obligations under this Regulation, so I'm not sure who'd want to offer that service, which may mean prices for the service are high.

I wonder just how many providers — other than perhaps the "big" players, or those who target services to users in the EU — are going to do this.

Respond to removal orders, removing or disabling access to content within 60 minutes, 24/7

You can be ordered to remove or disable access to specific content notified to you by a "competent authority". This is a "removal order".

This is limited to removing or disabling access to it for people in the EU, so if you operate a global service, this Regulation does not demand global takedown. (If, however, other local laws impose obligations or liability on you once you have become aware of problematic content, you'd need to consider if receipt of one of these orders was sufficient to trigger those obligations.)

If this is your first removal order, the competent authority must give you at least 12 hours' advance notice, setting out what you have to do, and reminding you of the deadlines. But they do not have to do this in "duly justified emergency cases".

You have to remove or disable access to the content "as soon as possible and in any event within one hour from receipt of the removal order".

Removal orders must be sent to your designated point of contact, or your appointed legal representative, and must be sent by a mechanism which produces a written record. So no telephone calls.

The one hour period applies 24/7, but, if you cannot comply "because of force majeure or of de facto impossibility not attributable to the hosting service provider including for objectively justifiable technical and operational reasons", you are required to notify the competent authority without undue delay, explain why you can't do it, and do it as soon as you can.
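
In practical terms, the clock runs from receipt, so whatever intake tooling you use should timestamp the order the moment it arrives and compute the deadline from that. A rough sketch, with illustrative names of my own:

```python
# Sketch: track the one-hour window running from receipt of a removal
# order. The structure and field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

COMPLIANCE_WINDOW = timedelta(hours=1)  # "within one hour from receipt"

@dataclass
class RemovalOrder:
    order_id: str
    content_url: str
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def deadline(self) -> datetime:
        return self.received_at + COMPLIANCE_WINDOW

order = RemovalOrder("order-001", "https://example.com/post/42")
print(f"Remove or disable access by {order.deadline.isoformat()}")
```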

It's not a requirement, but Recital 27 says that you might:

replace content which is considered terrorist content, with a message that it has been removed or disabled in accordance with this Regulation

Perhaps an opportunity for HTTP status code 451?
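
For what it's worth, returning 451 ("Unavailable For Legal Reasons") is trivial in most web frameworks. A sketch in Flask, with a hypothetical lookup of removed content:

```python
# Sketch: serve HTTP 451 for content removed or disabled under a
# removal order. The path scheme and lookup set are illustrative.
from flask import Flask

app = Flask(__name__)

REMOVED = {"post/42"}  # IDs of content taken down under the Regulation

@app.route("/<path:content_id>")
def serve(content_id: str):
    if content_id in REMOVED:
        # Recital 27's suggested notice, in place of the content.
        return ("This content has been removed or disabled in "
                "accordance with this Regulation.", 451)
    return f"Content {content_id} would be served here."
```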

Include provisions to address the misuse of your services in your terms

If you are "exposed to terrorist content", you have to include in your terms and conditions, and apply, provisions to address the misuse of your service for the dissemination to the public of terrorist content online.

There is a definition of "exposed to terrorist content", but the drafting of the Regulation does not make it clear if it applies in this situation or not. If it does apply, then you are only deemed to be "exposed to terrorist content" if you are told by the competent authority that you are. They have to do this on the basis of objective factors, such as two or more removal orders in a year.

If the definition does not apply, then it's unclear to which providers this obligation applies. It seems not all hosting providers, since otherwise it would just be "hosting service providers" and not "hosting service providers exposed to terrorist content". My initial reaction is that the definition applies, and that you do not need to make any changes to your terms unless you are notified that you are deemed to be exposed.

Time to commission that review of your terms which you've been putting off for so long?

Take specific measures to protect your services against dissemination to the public of terrorist content

Unlike the requirement about your terms, this provision definitely applies only if you have been notified by the competent authority that you are deemed to be exposed to terrorist content.

If you are so notified, you need to take "specific measures" to protect your services against dissemination to the public of terrorist content.

In earlier drafts, this was described as a requirement to take "proactive measures", but this language has been changed.

The Regulation gives an indicative list of the type of things you might do, but says that "[t]he decision as to the choice of specific measures shall remain with the hosting service provider".

The "specific measures" listed in the Regulation are:

  • appropriate technical and operational measures or capacities such as appropriate staffing or technical means to identify and expeditiously remove or disable access to terrorist content;
  • easily accessible and user-friendly mechanisms for users to report or flag to the hosting service provider alleged terrorist content;
  • any other mechanisms to increase the awareness of terrorist content on its services such as mechanisms for user moderation;
  • any other measure that the hosting service provider considers appropriate to address the availability of terrorist content on its services.
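
On the second of those, a user-facing flagging mechanism needs little more than an endpoint which records the report and queues it for human review. A bare-bones sketch (the route and field names are mine, not the Regulation's):

```python
# Sketch: a minimal "report this content" endpoint, queuing flags for
# review. The route and field names are illustrative only.
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
review_queue = []  # in production, a durable store, not a list

@app.route("/flag", methods=["POST"])
def flag_content():
    review_queue.append({
        "content_url": request.form["content_url"],
        "reason": request.form.get("reason", ""),
        "flagged_at": datetime.now(timezone.utc).isoformat(),
    })
    return jsonify({"status": "received"}), 202
```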

There is also a list of non-functional requirements for your specific measures.

You are not required, and cannot be compelled, to use automated tools, although Recital 19 says hosting service providers:

may however decide to use automated tools if they consider this appropriate and necessary to effectively address the misuse of their services for the dissemination of terrorist content

(If you do, you'll need to comply with the GDPR's rules on automated decision-taking, if you determine that whatever your automated tools do might have a legal or similarly significant effect on someone.)

The requirement does not extend to a general obligation to monitor the information which you store, nor a general obligation to actively seek facts or circumstances indicating illegal activity.

If you are obliged to take specific measures, you are required to notify the competent authority, within three months of receiving the determination that you are "exposed to terrorist content", of the measures you have taken, and then to report on an annual basis.

Although the onus is on you to determine what specific measures you take, if the competent authority thinks they do not meet the requirements of the Regulation, they can require you:

to take the necessary measures so as to ensure that those requirements are met. The decision as to the choice of measures remains with the hosting service provider.

Content preservation

If you remove or disable access to content because of a removal order or your own specific measures, you must preserve the content for six months, or longer if requested by a competent authority or court.

In addition to the content, you are required to retain "related data". This is not defined in the Regulation itself, but the recitals say that this covers:

data such as ‘subscriber data’, including in particular data pertaining to the identity of the content provider, as well as ‘access data’, including for instance data about the date and time of use by the content provider, or the log-in to and log-off from the service, together with the IP address allocated by the internet access service provider to the content provider.
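
So a preserved record needs to capture rather more than the content itself. If the recitals' description maps onto your logs, the shape might look something like this (field names are mine, purely illustrative):

```python
# Sketch: the shape of a preserved record, following the recitals'
# description of "related data". Field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=183)  # roughly six months; extendable on request

@dataclass
class PreservedRemoval:
    content: bytes                  # the removed content itself
    subscriber_identity: str        # who the content provider is
    upload_time: datetime           # date and time of use
    login_events: list[tuple[datetime, datetime]]  # log-in / log-off
    uploader_ip: str                # IP allocated by their access provider
    removed_at: datetime

    def delete_after(self) -> datetime:
        return self.removed_at + RETENTION
```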

Transparency obligations

Update your terms and conditions

Seemingly whether you are "exposed to terrorist content" or not, you are required to:

set out clearly in [your] terms and conditions [your] policy to address the dissemination of terrorist content, including, where appropriate, a meaningful explanation of the functioning of specific proactive measures including, where applicable, the use of automated tools.

Does this really mean that every blog with a significant number of readers in the EU needs to change its terms to mention terrorist content? That seems entirely unnecessary. (And note it does not compel you to have terms and conditions, only to set things out in your terms and conditions.)

Publicly-available transparency reports

If, in any given year, you have taken action (i.e. on a voluntary basis) against terrorist content or have been compelled to do so under this Regulation, you have to make publicly available a transparency report on what you've done. You have to do this within two months of the end of the year.

The Regulation sets out what has to go into the report.

Hopefully updating your terms to comply with this Regulation does not amount to "taking action against terrorist content", else everyone who does that would also have to issue a transparency report, which would be silly.

Tell content providers if you remove or disable access to their content

Unless the competent authority demands secrecy (which can last for a maximum of 12 weeks), if you remove or disable access to someone's content, you have to "make available" to that person "information on the removal or disabling of access to terrorist content".

If they ask you for it, you must inform the content provider of the reasons for the removal or disabling of access and the possibilities to contest the decision, or provide them with a copy of the removal order.

Establish a complaints mechanism

You are required to establish an "effective and accessible" mechanism, allowing people who upload content to complain about the fact you've removed it or disabled access to it.

You are required to:

"promptly examine every complaint that they receive and reinstate the content without undue delay where the removal or disabling of access was unjustified".

If you have received a removal order, are you — a technical services provider, or even a small blog operator — really in a position to determine whether the removal was "justified" or not?

If you decide not to reinstate the content, you must tell the complainant within two weeks.
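
In workflow terms, that's a second clock to track alongside the removal one. A final illustrative sketch (the Regulation prescribes no particular mechanism):

```python
# Sketch: the complaint-handling decision, with its two-week window.
from datetime import datetime, timedelta, timezone

RESPONSE_WINDOW = timedelta(weeks=2)

def handle_complaint(received_at: datetime, removal_justified: bool) -> str:
    if not removal_justified:
        # Reinstate "without undue delay".
        return "reinstate the content"
    deadline = received_at + RESPONSE_WINDOW
    return f"notify the complainant of the refusal by {deadline.date()}"

print(handle_complaint(datetime.now(timezone.utc), removal_justified=True))
```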

Apparently, this procedure is "a necessary safeguard against erroneous removal of content protected under the freedom of expression and information". A cynic might say that requiring competent authorities to get a judicial order that content is terrorist content before imposing a removal order would be a better safeguard.

Ominously, the Regulation says that "[a] reinstatement of content shall not preclude administrative or judicial measures against the decision of the hosting service provider or of the competent authority." So it would probably be wise to update your terms to attempt to limit or exclude liability for content which you remove following a removal order or because of your specific measures.

Who gets to determine what is "terrorist content"?

There is a complex definition of "terrorist content" in the Regulation, which I'm not going to include here.

I'm very much hoping that you don't need to dump the whole definition in your terms, as it's difficult enough for a lawyer to follow, let alone your average person who (let's pretend) reads terms and conditions.

In terms of a removal order, the determination that the content in question is "terrorist content" is taken by the competent authority.

In terms of your own specific measures, the onus is on you to ensure that they are effective in tackling terrorist content.

Penalties

Of course there are penalties for non-compliance.

These are laid down by Member States, so the penalties which will apply may vary, but they must:

ensure that a systematic or persistent failure to comply with obligations pursuant to Article 4(2) is subject to financial penalties of up to 4% of the hosting service provider's global turnover of the last business year

In other words, you are between a rock and a hard place. Again.

Whether anyone is going to look to apply penalties, or take enforcement action, against organisations in the UK who fall within scope but do not meet the obligations is a different matter. Perhaps not. We do not have enough information yet to offer a firm view on that.