EU age verification app
The European Commission has announced that a new age verification app designed to protect children online is ‘technically ready’ and will soon be available for citizens to use. The app, which can be set up with a passport or ID card, will allow users to prove their age when accessing online platforms and services, helping protect children from harmful or inappropriate content.
Reportedly, the app is ‘completely anonymous’, works on any device, and is fully open source. Cyber and privacy experts, however, immediately examined the source code on the GitHub software platform and reported several issues with the app’s design, including low cybersecurity standards and the possibility of bypassing the app’s biometric authentication features.
Unjustified parking fines through automated means

The deployment of scanning vehicles to check parked cars has resulted in an estimated 500,000 unjustified fines, according to a new thematic study by the Dutch Data Protection Authority (AP). Municipalities carry out an estimated 250 to 375 million scans yearly, resulting in 3 to 5 million parking fines per year. According to the AP’s calculations, more than 10 per cent of these are unjustified, and people who object to a fine succeed in 40 to 62 per cent of cases.
A scanning vehicle only takes a snapshot, and the algorithms in the monitoring system cannot assess the circumstances. A scanning vehicle cannot, for example, determine that someone is loading or unloading, a situation in which an exception may apply. A disabled parking permit, which by default is not registered to the license plate but placed behind the windshield, is also not ‘seen’ by the scanning vehicle. If payment has not been made, the systems are unforgiving, and a fine follows automatically.
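The orders of magnitude in the study are easy to sanity-check. A minimal sketch, using only the AP’s published estimates as inputs:

```python
# Sanity check of the AP's estimates: 3-5 million fines per year,
# of which more than 10 per cent are unjustified.
fines_low, fines_high = 3_000_000, 5_000_000
unjustified_rate = 0.10  # lower bound given in the study ("more than 10 per cent")

low = fines_low * unjustified_rate
high = fines_high * unjustified_rate
print(f"Unjustified fines per year: {low:,.0f} to {high:,.0f}")
# In line with the reported half-million figure.
```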
Other legal updates
Alabama comprehensive privacy law: The Alabama Personal Data Protection Act (APDPA) was enacted on April 16. It sets one of the lowest applicability thresholds in the US, covering businesses that:
- handle personal data of more than 25,000 consumers (excluding data processed solely for completing a payment transaction), or
- derive more than 25% of gross revenue from selling personal data.
From 1 May 2027, it will empower consumers to confirm whether a controller is processing their personal data, correct inaccuracies, request deletion, obtain a copy, and opt out of certain processing of their data. Controllers will be required to respond to consumer requests within 45 days, with a possible 45-day extension, and to provide a secure and reliable method for consumers to exercise their rights, as the analysis from vitallaw.com sums up.
Scientific research in the EU: Meanwhile, the EDPB has adopted Guidelines on the processing of personal data for scientific research purposes. Many areas of scientific research rely on the processing of individuals’ personal data. In the guidelines, the EU data protection regulator provides clarifications on the:
- concept of ‘scientific research’
- further processing for scientific research purposes
- reliance on “broad consent” where the purposes of research are not fully known
- rights of individuals to erasure and objection when their personal data are processed for scientific purposes
- qualification as data controller, joint controller or processor.
The guidelines will be subject to public consultation until 25 June.
DPIA template
The EDPB has also adopted a template for Data Protection Impact Assessments (DPIA). The template will help organisations structure, harmonise and substantiate their DPIA reporting processes. It is complemented by an explainer document that provides concise guidance on completing the template effectively, breaking down key concepts in simple language and addressing questions and knowledge gaps controllers might have.
A DPIA is a process required where processing is likely to result in a high risk: it describes how personal data will be processed, assesses whether the processing is necessary and proportionate, and identifies and reduces risks to individuals’ rights and freedoms. Controllers can conduct their risk analysis and management as they prefer, using the DPIA methodology of their choice.
Frontier AI systems

According to the Guardian, British banks will be given access in the coming week to Anthropic’s latest AI tool, which is highly skilled at cybersecurity and hacking tasks and was deemed too dangerous to be released to the public. Advances in the capabilities of the Claude Mythos model have been accompanied by concerns that hackers could use such tools to figure out passwords or crack encryption meant to keep data safe.
Anthropic, which has so far limited the release of the new model to a small clutch of primarily US businesses, including Amazon, Apple and Microsoft, said it would expand that to UK financial institutions. UK regulators are due to raise the issue of Mythos’s risks with bank bosses and government officials in the coming weeks.
According to the presented results, Mythos can detect vulnerabilities faster and link them into complete exploits and attack chains. This can strengthen defences, but can also accelerate digital attacks. Defenders can deploy AI to detect vulnerabilities earlier and remedy them faster. But attackers with access to similar models will scale up investigation, identification, and exploitation as well. To that end, the Dutch National Cyber Security Centre suggests practical steps to adopt:
- Explicitly incorporate AI developments into your security measures, particularly patch management; delaying action by days or weeks no longer fits the current threat landscape.
- Anticipate attacks that occur faster, more automatically, and in larger numbers, for example, in the detection of anomalous behaviour in networks.
- Maintain solid basic security and supplement it with appropriate additional measures, as attackers already use AI to improve and automate existing techniques.
More official guidance
Secure database configurations: The German Federal Office for Information Security (BSI) has published a collection of secure configurations for database systems. It provides recommendations for optimally configuring encryption, authentication, authorisation, and other security-relevant aspects. It serves as a template for securely operating the database management systems MariaDB, MongoDB, and Weaviate. The repository is continuously being developed and will be expanded to include support for other database management systems.

Healthcare institutions’ data security audit: The Lithuanian State Data Protection Inspectorate VDAI carried out 10 scheduled audits of the security measures of healthcare institutions, assessing access control, backup management, and event log management. Several areas for improvement were identified:
- Only 11% of institutions use multi-factor authentication (MFA).
- Only 56% of institutions centrally store and encrypt log entries.
- 67% of institutions have implemented automated alerts for suspicious events.
- 78% of institutions have a log entry management policy and review it regularly.
- 78% of institutions document backup and recovery procedures.
Pixel tracking: The French data protection authority CNIL has published the final version of its recommendations on tracking pixels in emails (in French). The tracking pixel is an alternative tracking method to cookies, usually implemented as a tiny image (1 pixel by 1 pixel). Loading this image, whose address contains a user ID, reveals that the user has visited a page or read an email. This technique is used for personalising communication according to users’ interests, measuring the audience, improving email deliverability, etc.
The recommendation specifies the cases in which consent will be required for the use of tracking pixels in emails and those which are exempt. It also specifies the procedures for withdrawing consent.
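For illustration, a tracking pixel of the kind described above can be sketched as follows; the domain, path, and parameter names are hypothetical:

```python
# Minimal sketch of how an email tracking pixel is assembled (hypothetical
# tracker domain and parameter names). The 1x1 image URL carries a user ID;
# when the mail client loads the image, the tracker's server logs that this
# user opened the email.
from urllib.parse import urlencode

def tracking_pixel_html(user_id: str, campaign: str) -> str:
    query = urlencode({"uid": user_id, "c": campaign})
    pixel_url = f"https://tracker.example.com/open.gif?{query}"
    # A 1x1 transparent image embedded in the email body.
    return f'<img src="{pixel_url}" width="1" height="1" alt="">'

print(tracking_pixel_html("42", "spring-sale"))
```

Blocking remote images in the mail client, as the CNIL’s recommendation context implies, prevents the request and thus the tracking.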
In other news

Data breaches on the rise: The Estonian data protection agency provides an analysis of the data breach notifications received in Q1 2026. One of the most insidious threats in today’s cyber landscape is data-stealing malware (eg, RedLine, Vidar). It is often downloaded onto personal devices unintentionally – through illegal software, malicious ads, or fraudulent links generated using artificial intelligence. Data thieves don’t limit themselves to passwords: they also steal session cookies, which allow attackers to bypass even multi-factor authentication by ‘hijacking’ the active logged-in session.
If employees use personal devices to check work emails or access SaaS platforms like Slack or Salesforce, a single infected home computer can compromise the entire corporate network.
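Session-cookie theft works because most servers treat possession of the cookie as full proof of identity. One common mitigation is to bind the session to attributes of the client observed at login, so a cookie replayed from a different machine is rejected. A minimal sketch with illustrative names (real deployments must also tolerate legitimate user-agent and network changes):

```python
# Sketch of server-side session binding: each session is tied to a hash of
# client attributes captured at login. A stolen cookie replayed from a
# different client no longer matches the stored fingerprint and is rejected.
# All names here are illustrative, not a specific product's API.
import hashlib

SESSIONS: dict[str, str] = {}  # session_id -> client fingerprint

def fingerprint(user_agent: str, ip_prefix: str) -> str:
    return hashlib.sha256(f"{user_agent}|{ip_prefix}".encode()).hexdigest()

def create_session(session_id: str, user_agent: str, ip_prefix: str) -> None:
    SESSIONS[session_id] = fingerprint(user_agent, ip_prefix)

def validate(session_id: str, user_agent: str, ip_prefix: str) -> bool:
    stored = SESSIONS.get(session_id)
    return stored is not None and stored == fingerprint(user_agent, ip_prefix)

create_session("sess-1", "Mozilla/5.0 (Windows)", "198.51.100")
print(validate("sess-1", "Mozilla/5.0 (Windows)", "198.51.100"))  # True: same client
print(validate("sess-1", "curl/8.0", "203.0.113"))  # False: replayed cookie
```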
Illegal GPS tracking: The Slovenian Information Commissioner found that one of the providers of public utilities was continuously and indiscriminately collecting location data of employees, obtained through GPS transmitters installed in company vehicles, without clearly defining the purpose of the data processing. Employees were not properly informed about the scope and purpose of such tracking. Besides, the objectives could be achieved with less stringent measures (eg, manual entries, use of vehicle odometer data).
Employee computer monitoring: In a similar inspection procedure, the Slovenian regulator found that another employer’s covert surveillance (via Spyrix Employee Monitoring software) was carried out without a legal basis, without informing employees, and to an extent that exceeded the permissible limits of interference with privacy in the workplace, as it targeted the content of employees’ communication via private e-mail and completely private conversations. The regulator imposed a fine of 71,474 euros for the violations found.
Amazon multimillion fine annulled
The Administrative Court of Luxembourg has annulled a 746 million euro GDPR fine imposed on Amazon, citing procedural failings by the national regulator. Judges ruled that authorities did not properly assess the company’s level of fault before setting the penalty, DigWatch News platform reports. The sanction was issued in 2021 by the national data protection commission over Amazon’s targeted ad system and appealed in March 2025. While the violations were upheld, the court found the regulator failed to determine whether the conduct was intentional or negligent.
Other enforcement decisions
Access to an employee’s email after the end of employment: An employee can access messages on their company email account and documents stored on their computer after the end of their employment. Any restrictions must be justified by specific and proven reasons, such as protecting company secrets. This is what Italy’s ‘Garante’ established in accepting the complaint of a former employee of an insurance company who had requested a copy of his company email messages and documents saved on his computer.
The company had accessed the former employee’s email and, after examining the contents, provided only the messages deemed “strictly personal,” excluding those related to work. According to the regulator, the right of access applies to all personal data, including communications exchanged through an individualised company account. It is therefore unlawful to pre-select the content to be provided, or to limit or obscure it based on the distinction between personal and professional contexts. For the violations identified, a fine of 50,000 euros was imposed.
Face recognition in the airport: The Garante also declared unlawful the processing of passengers’ biometric data at Milan Linate Airport via the facial recognition system “FaceBoarding”. The system allowed passengers to access the security-restricted area and board at the gate after registering at special kiosks or via an app and associating their face with their identification document and boarding pass. The acquired biometric data are stored entirely on central servers, preventing passengers from exercising exclusive control over their data.
And Finally

AI awareness: While almost half of internet users in Germany feel capable of recognising AI-generated content, in reality hardly anyone looks closely: only a minority have ever searched for inconsistencies in an image or checked the source (28 and 19 per cent, respectively). Knowledge about potential fraud scenarios is also limited. Only 38 per cent believe it is possible that cybercriminals could, for example, manipulate an AI program into transmitting sensitive data. Similarly, only 40 per cent consider it conceivable that criminals could insert invisible instructions for AI systems into documents.
In fact, both scenarios are technically possible.
Police data reach: US police have access to a wide range of databases that they can use to look up and misuse information about people. This can result in humiliating and harmful decisions, sometimes causing long-term damage to people’s lives. In-depth research by Rights & Security International and Privacy International reveals the impact of this and argues for more effective limits on what kinds of personal information police can view, when, and why. The US is not alone in this trend: the UK and the EU are also expanding law enforcement’s data-access powers, introducing facial-recognition surveillance and proposing the scanning of private messages, PI concludes.