Internet service providers already comply with police removal requests on a large scale on a voluntary basis, but a legislative proposal would make their cooperation mandatory. An agreement could still be reached under the German Presidency of the Council.
Negotiations on an EU regulation against the distribution of terrorist content online could be successfully concluded in the coming weeks. Following the recent attacks in France and Vienna, the Parliament and the Member States of the European Union have made concessions on key points. This emerges from a draft of 9 November on the trilogue negotiations, in which the Commission is also involved, put online by the British civil rights organisation Statewatch.
With the legislative proposal for a “Regulation on Preventing the Dissemination of Terrorist Content Online”, presented by the Commission two years ago, the EU is pressing for “enhanced action” against terrorist activities. A whole chapter of the draft is devoted to measures intended to “effectively tackle” the uploading and sharing of text, images, sound recordings and videos, including a one-hour time limit between the issuing of a removal order and its execution, as well as technical means to prevent re-uploads. Critics had understood this to mean the introduction of upload filters even for small providers.
Reporting obligation on “specific measures”
The Member States represented in the Council now want to avoid mandatory “proactive measures” for Internet service providers. This had been one of the main demands of the Parliament. The wording will now be changed to “specific measures”. While “automated tools” for the detection of incriminating content will continue to be mentioned, their possible introduction is now preceded by “where appropriate”.
However, service providers will have to report on which “specific measures” they apply. A competent authority in the Member State in which they are established will then assess whether these are effective and proportionate. Where upload filters are used, the removed content should in any case be subject to human review. The companies should also publish statistics on removal orders issued and their execution, which will then be set in relation to the platform’s number of users. It therefore remains possible that providers will be forced to exercise more control over their content.
The Council and Parliament also agreed on the definition of “terrorist content”, the scope of which was even extended to cover the “glorification of terrorist acts”. It also includes postings that provide instruction in the manufacture of explosives, firearms “or other weapons”. The paragraph additionally contains the unspecific wording “other specific methods or techniques”.
Consent through silence?
Controversy remains over cross-border removal orders, under which an authority in one Member State may demand that content be deleted in another Member State. At the beginning of November, under the German Presidency, the Council discussed a compromise on which the Parliament is now to give its opinion. Under the proposal, Internet service providers would have to act on a removal order within one hour, but would initially only block the content. The Member State on whose territory the provider is established would then have 24 hours to review the order.
However, the Member States have not been able to agree on a definitive solution for confirmation by the Member State in which the provider is established. Many governments support the principle of “silence means consent”, while others demand an active confirmation. Small countries in particular have little capacity to carry out such checks within one day; as the country hosting Facebook’s European representation, Ireland would be particularly affected. The “silence rule” may, moreover, not be compatible with the EU treaties.
Proposal for extension
Member States are required to designate competent authorities which may issue removal orders. These would be listed on an Internet portal and notified to the providers once, before any order is issued. Europol’s “SIRIUS” project could be used for this purpose. Only in emergencies may a removal order be issued without prior notification, for example in the event of imminent danger to the life or physical integrity of a person. For their part, companies should designate a body to receive such orders.
So while the two-year negotiations on a regulation against the distribution of terrorist Internet content may be nearing completion, the EU anti-terrorism coordinator Gilles de Kerchove has caused a surprise with a proposal to extend its scope. According to de Kerchove, gaming platforms “play a growing role in the context of radicalization”. He therefore calls on EU leaders to encourage internet gaming providers to cooperate.
Voluntary measures and crisis protocol
It is unclear why a regulation against terrorist propaganda is actually needed. In 2015, the Commission, Europol and numerous large and small internet service providers joined forces in the “EU Internet Forum” and have since taken voluntary measures. Many companies participate in a shared hash database in which removed material is stored for comparison. A year ago, the “EU Internet Forum” adopted a crisis protocol against the real-time streaming of terrorist attacks.
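In principle, such a hash comparison works by fingerprinting removed material and checking every new upload against those fingerprints. The following sketch illustrates the idea under simplifying assumptions: all names are hypothetical, and plain SHA-256 stands in for the perceptual hashing such databases reportedly use so that slightly altered copies are also caught.

```python
import hashlib

# Illustrative sketch only: a registry of fingerprints of removed material,
# consulted on every new upload. Real systems reportedly use perceptual
# hashes rather than SHA-256, which only matches byte-identical copies.

known_removed_hashes: set[str] = set()

def register_removed_content(data: bytes) -> None:
    """Store the fingerprint of content that has already been removed."""
    known_removed_hashes.add(hashlib.sha256(data).hexdigest())

def is_known_reupload(data: bytes) -> bool:
    """Check an incoming upload against the shared hash database."""
    return hashlib.sha256(data).hexdigest() in known_removed_hashes

# Usage: a platform removes a video, registers it, and later flags a re-upload.
removed_video = b"...video bytes..."
register_removed_content(removed_video)
assert is_known_reupload(removed_video)
```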
Member States’ police forces and Europol already send out referrals for the deletion of terrorist (and also migration-related) content which, although not binding, are complied with by the platforms in around 90 per cent of cases. To this end, Europol has set up an “EU Internet Referral Unit” (EU IRU) and an Internet platform (IRMA) where requests are collected and pooled. This enables a Member State to know whether a piece of content has already been reported for removal, or whether it should remain online so that intelligence services can monitor activity around it. After the adoption of the contested Regulation, this Europol platform would be retained, but a request for removal would then become a binding order.
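The pooling described here can be pictured as a shared registry that is consulted before a new referral is filed, so that Member States do not report content another authority has deliberately left online. The sketch below is purely illustrative and assumes a simple in-memory lookup; none of the names reflect IRMA’s actual interface.

```python
from enum import Enum

# Illustrative sketch of the pooling logic: before filing a new referral,
# a Member State checks whether the content is already known to the registry.

class Status(Enum):
    REPORTED_FOR_REMOVAL = "reported for removal"
    MONITORED = "kept online for intelligence monitoring"

referral_registry: dict[str, Status] = {}

def submit_referral(url: str) -> Status:
    """File a referral unless the content is already flagged or monitored."""
    existing = referral_registry.get(url)
    if existing is not None:
        return existing  # another authority has already decided on this content
    referral_registry[url] = Status.REPORTED_FOR_REMOVAL
    return Status.REPORTED_FOR_REMOVAL

# Usage: content marked for monitoring is not reported again for removal.
referral_registry["https://example.org/post/123"] = Status.MONITORED
print(submit_referral("https://example.org/post/123"))  # Status.MONITORED
```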