By Julia Tar | Euractiv | 19-03-2024

A new approach by the Belgian Presidency of the EU Council to the draft law to detect and remove online child sexual abuse material puts the focus on the Coordinating Authority's roles, such as risk categorisation and detection orders.

The Coordinating Authority is a designated body in each EU country responsible for receiving risk assessments, implementing mitigation measures, and coordinating efforts to detect, report, and remove online child sexual abuse material (CSAM).

The regulation seeks to create a permanent framework for detecting and reporting online CSAM. It has attracted criticism because, in its original form, it would empower judicial authorities to ask communication platforms such as WhatsApp or Gmail to scan users' private messages for suspected content.

The Presidency's new approach, dated 13 March and seen by Euractiv, refers to another document presented during the meeting of the Law Enforcement Working Party (LEWP) on 1 March 2024. The LEWP is responsible for legislation and operational issues associated with cross-border policing. The new approach was on the agenda of a LEWP meeting on Tuesday (19 March).

The approach consists of more targeted detection orders, improved risk assessment and categorisation, and safeguards for cybersecurity and encrypted data, while still keeping services using end-to-end encryption within the scope of detection orders.

End-to-end encryption (E2EE) is a method of secure communication that prevents third parties, including the messaging service itself, from accessing data sent from one user to another.
E2EE has been a much-debated part of the file, with discussion revolving around whether compromising it is necessary to detect CSAM. Some have advocated this approach, while others argue for methods that preserve encryption and emphasise the importance of data privacy.

In the document, risk assessment consists of risk categorisation, mitigation measures, and detection orders, which would be issued to give the green light to the detection of CSAM on platforms. The latest text puts the spotlight on these.

Risk categorisation

The Presidency suggests a methodology to evaluate the risk associated with services or their components, categorising them into three risk levels: high, medium, and low. These categories are determined by factors such as the nature of the service, its core architecture, provider policies, safety features, and user behaviour, and are intended to help service providers assess the risk of CSAM on their services. The categorisation also provides criteria for the Coordinating Authority to determine the measures needed to mitigate these risks.

Service providers will be required to facilitate the risk categorisation process by the Coordinating Authority, using a template, after determining the risk of CSAM on their services. They must report the risk assessment results, mitigation measures, and self-assessment to the Coordinating Authority, which verifies the categorisation, requests more information if needed, and confirms or assigns a different category based on its assessment.

Providers may flag that they have identified a risk in their service requiring a detection order, but this does not automatically trigger one.
Only the Coordinating Authority can decide whether to request a judicial or independent administrative authority to issue an order.

Depending on the risk level, providers will be subject to different levels of safeguards and obligations. All providers must implement mitigation measures and, based on the service's category, additional tailored measures specific to the service's characteristics also apply.

Depending on the category, the service or its component will undergo re-categorisation after a certain period. The regulation could set maximum terms for each category: 12 months for high risk, 24 months for medium risk, and 36 months for low risk. These maximum terms give Coordinating Authorities flexibility to set specific durations, which can be shared with service providers when the categorisation is issued. The Authority can initiate re-categorisation at any time, which could result in mandatory mitigation measures if the service is reclassified as high-risk or medium-risk.

For high-risk and medium-risk services, additional mitigation measures are mandatory. The Coordinating Authority will explain why these measures are necessary when issuing the categorisation decision. Non-compliance will lead to penalties. For low-risk services, additional mitigation measures are recommended, to help the provider identify potential service improvements, and non-compliance incurs no penalties.

Detection orders

The Presidency suggests restricting detection orders to high-risk services as a last resort. The Coordinating Authority can customise the order based on specific risks, prioritising the least intrusive method.
According to an annex, for high-risk services detection orders can cover services using E2EE, while this is not the case for medium- and low-risk ones.

The criteria for customisation will be verified by the competent authorities issuing the order, upon request of the Coordinating Authority. This request covers the duration of the detection order, the technologies employed, the impact on the protection of interpersonal communication, possible scope limitations, and other safeguards.

The document introduces the term 'user of interest', referring to a potential sender or receiver of CSAM or grooming attempts. Grooming denotes manipulative practices aimed at exploiting and abusing people.

In this case, detection happens automatically, with no one, including the provider, being aware until a certain number of incidents in a user's account hints at potential CSAM sharing or grooming attempts. When a provider receives a detection order for CSAM and/or grooming, they set up mechanisms to detect possible 'interest' by looking for connections to known or new CSAM and/or grooming.

An adult user is flagged as a user of interest after a specific number of automatically detected hits, depending on the type of online CSAM and the accuracy rates: once for known CSAM and twice for new CSAM or grooming.

For child users, when a potential hit is detected, the child is immediately alerted without the provider knowing. The child can then report to the provider, who will only find out after the child's report. Providers become aware of possible abuse only when a user is identified as a user of interest.
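As a rough illustration, the flagging rule described in the document can be sketched as follows. The threshold numbers (one hit for known CSAM, two for new CSAM or grooming) come from the document itself; every function and field name below is hypothetical and not taken from the draft regulation.

```python
# Hypothetical sketch of the "user of interest" thresholds described in
# the Presidency's document: an adult account is flagged after one
# automatically detected hit for known CSAM, or two hits for new CSAM
# or grooming. All names here are illustrative, not from the draft text.

THRESHOLDS = {"known_csam": 1, "new_csam": 2, "grooming": 2}

def is_user_of_interest(hit_counts: dict[str, int]) -> bool:
    """Return True once any hit type reaches its threshold."""
    return any(hit_counts.get(hit_type, 0) >= threshold
               for hit_type, threshold in THRESHOLDS.items())

# One hit on known material suffices; a single hit on new material
# or grooming does not.
print(is_user_of_interest({"known_csam": 1}))  # True
print(is_user_of_interest({"new_csam": 1}))    # False
print(is_user_of_interest({"grooming": 2}))    # True
```

The sketch only captures the counting rule; the document's actual mechanism also covers the separate handling of child users described above, where the child, not the provider, is alerted first.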
Only these users are reported to the EU Centre, a new central hub of expertise to help fight CSAM. This, according to the document, reduces the error rate for detecting new CSAM and grooming.

Next steps

The approach suggests a focus on understanding and addressing the potential risks associated with the technologies concerned. Delegations are therefore asked to raise technical concerns about detection technologies, enabling the inclusion of extra safeguards. The EU Centre's role in technology verification and its collaboration with other EU cybersecurity agencies also remains to be defined.

[Edited by Zoran Radosavljevic]