
The European Commission first proposed the “Chat Control” regulation in May 2022. Its full title is the Proposal for a Regulation of the European Parliament and of the Council laying down rules to prevent and combat child sexual abuse. Since then, the Council of the European Union (often referred to as the EU Council) and the European Parliament have debated it extensively, while privacy advocates and European bodies have called for its withdrawal. So why does a regulation aimed at protecting children spark such widespread alarm?
The origin and scope of the initial “Chat Control” proposal
In 2022, the European Commission brought forward a proposal for a regulation aimed at tackling child abuse online, covering the solicitation and proliferation of Child Sexual Abuse Material (CSAM) as well as the prevention of grooming through online platforms and communication channels. The idea was not entirely new: it was intended to replace an existing temporary derogation, adopted in 2021, from the ePrivacy Directive, which tackled the same issue by allowing providers of certain interpersonal communication services to use technology to detect CSAM in communications that would otherwise be private. As of April 2026, this derogation is no longer in place; deliberation over the regulation intended to replace it is still ongoing.

Understanding the first draft
What made the first proposal of the regulation such a cause for concern, and what led to it being dubbed “Chat Control”, was the substantial expansion of its scope: from a temporary derogation applying to only a limited number of providers to an obligation on any and every information society service, hosting provider, internet service provider and search engine to comply, if mandated by the relevant authorities, with so-called “detection orders.” Article 10 of the proposal defines compliance with these orders as: “installing and operating technologies to detect the dissemination of known or new child sexual abuse material or the solicitation of children as applicable”.
Privacy experts, including EU bodies such as the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), immediately raised concerns. They interpreted this requirement as effectively obliging providers to scan all communications by implementing technology that undermines security, either by bypassing end-to-end encryption or by performing client-side scanning. These risks were compounded by the Commission’s own prior acknowledgement that detection methodologies can be inaccurate.
Main issues of the proposal, and what has changed
Since the proposal, a wealth of opinions has been published by European bodies, questioning whether the first version of the proposal could even be considered, given its violations of key fundamental rights.
For example, the EU Council has noted the following limitations and risks:

- Screening of communications following detection orders affects the right to respect for private life and could have a deterrent effect on the right to freedom of expression.
- Additionally, screening all communications necessarily means processing vast amounts of personal data, partly by automated means, which would affect the right to privacy and the fundamental GDPR principles of data minimization and purpose limitation.
- The scanning would need to work in encrypted environments as well, which would imply that providers would have to consider:
  - (i) abandoning effective end-to-end encryption, or
  - (ii) introducing some form of “back-door” to access encrypted content, or
  - (iii) accessing the content on the device of the user before it is encrypted, which also affects the right to privacy and the fundamental principle of integrity and confidentiality under the GDPR.
- Lastly, detection orders are issued to providers and cover their service as a whole, rather than targeting specific individuals suspected of criminal activity. The resulting automated mass-processing of personal data could have severe legal repercussions for all individuals, including those with no connection to, or suspicion of, criminal behaviour, thereby undermining the general presumption of innocence.
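To make option (iii) above concrete, the sketch below illustrates, in purely hypothetical terms, where client-side scanning sits in a messaging pipeline: content is checked against a list of known-material hashes on the user’s device before it is encrypted. Real deployments use perceptual hashing that tolerates re-encoding rather than the exact SHA-256 matching shown here, and all names and hash values below are placeholders, not any actual system’s API.

```python
import hashlib

# Placeholder database of hashes of known illegal material, as would be
# supplied by an authority. The entry below is the SHA-256 of b"test",
# used purely so the example is runnable.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_before_encryption(payload: bytes) -> bool:
    """Return True if the payload matches the known-content list.

    This check runs on the user's device *before* end-to-end encryption,
    which is precisely why critics argue client-side scanning undermines
    the confidentiality guarantee that E2E encryption is meant to provide.
    """
    digest = hashlib.sha256(payload).hexdigest()
    return digest in KNOWN_HASHES

def send_message(payload: bytes) -> str:
    """Hypothetical send path: flagged content is reported, not delivered."""
    if flag_before_encryption(payload):
        return "reported"          # forwarded to the provider/authority
    return "encrypted-and-sent"    # normal end-to-end encrypted delivery
```

The sketch also makes the critics’ point visible: the matching logic must see every message in plaintext, for every user, regardless of any individual suspicion.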
A joint opinion
The EDPB and EDPS reiterated similar sentiments in a joint opinion, and included additional criticism on the resulting necessity of age assurance mechanisms, which hold their own limitations:
- In order to scan communications and determine whether child abuse is taking place, one would first have to accurately determine the age of all users of a communication platform or service. This means that all users would have to be mandated to provide their age upon signing up for such services.
- As a result, this will require additional processing of personal data. Moreover, age assurance mechanisms themselves involve a trade-off between accuracy and invasiveness: AI-powered mechanisms, whether relying on image or language analysis or on questionnaires, are not 100% accurate, while relying on official identity documents is neither privacy-preserving nor accessible to all individuals, especially children.
Overall, the EU bodies’ opinions agreed that the initial version of the proposal did not meet the necessity and proportionality test required to balance it against other fundamental rights granted by the Charter and other EU legislation. Consequently, the proposal was changed: as of 2025, the detection order provision has been eliminated in favour of a risk-based approach, whereby scanning of communications is only a possible mitigation measure based on the level of risk, and no longer an obligation.
Where we are now
Discussion remains open, and the updated version of the proposal is still subject to criticism. Provisions have been added to ensure that the use and effectiveness of end-to-end encryption are not affected by the regulation, and additional language seeks to ensure that whatever measures providers implement are in line with fundamental rights. However, privacy bodies and advocates remain skeptical.
The main issue with age assurance implementation remains: the trade-off between accuracy and privacy. Jurisdictions that have already implemented more stringent age assurance rules, such as the United Kingdom with its Online Safety Act, demonstrate that the general public does have privacy concerns, particularly about providing “hard identifiers” such as IDs or passports as opposed to self-declaration. Similarly, parents and children have reported a lack of trust and discomfort with the idea of sharing more data with third-party providers just to access platforms and communication channels. This discomfort is stronger when users are asked to verify their age on platforms they had previously accessed without it. Privacy advocates also warn of the surveillance systems behind age assurance and the risk of censorship.

Communication scanning

However, communication scanning is not entirely off the table in the new version of the proposal. Article 4.1.h includes “voluntary activities under Regulation (EU) 2021/1232” (the regulation that granted the initial derogation allowing communication scanning) as a potential risk mitigation measure. The catch is that this is framed as a “voluntary” measure; yet if implementing risk mitigation measures is mandatory, one can question whether such scanning is really voluntary, or merely a roundabout way to continue allowing wide scanning practices.
The vagueness of the language used in the proposed regulation, particularly regarding age assurance and key risk mitigation measures, still raises concerns among privacy advocates and bodies. Complete accuracy of results still cannot be ensured, and questions of accountability for the issues bound to arise from extensive data collection remain unanswered. As a result, it remains likely that the regulation will infringe on privacy rights in its attempt to protect minors online.
Conclusion
The most controversial elements of the initial proposal, i.e. the “chat control” provisions, have now been removed from the Regulation’s proposed text. However, issues concerning the privacy and security of individuals’ private communications, and the freedom and accessibility of the internet, remain.
While nobody argues against the necessity of measures to protect children online, there is still a point to be made about the willingness of governmental bodies to repeatedly sacrifice privacy under the guise of security. This pattern can be seen in national security legislation, anti-terrorism legislation, public health measures, and now the fight against the gathering and dissemination of CSAM. European and international governmental bodies could demonstrate a stronger commitment to the right to privacy by being more intentional in the language used in the proposal, drawing on standardization bodies to ensure that no vagueness remains in decisions that could affect the fundamental rights of millions.