‘Reject all’ button
The State Commissioner for Data Protection of Lower Saxony has ruled that a “Reject all” button is a must on the first level of a cookie consent banner whenever an “Accept all” option is offered there. Consent banners may not nudge users towards consenting and away from rejecting cookies; otherwise the consents obtained are invalid, in violation of Germany’s Telecommunications Digital Services Data Protection Act and the GDPR. The background to the proceedings was an order issued by the Commissioner (recently confirmed by the Hanover Administrative Court) against a Lower Saxony media company, based on findings that:
- Rejecting cookies was much more complicated than accepting them
- Users were pressured to consent by constantly repeating banners
- The “optimal user experience” and “accept and close” labels were misleading
- The number of partners and third-party services involved was not apparent
- References to the right to withdraw consent and to data processing in third countries outside the EU were only visible after additional scrolling on the page.
Stay up to date! Sign up to receive our fortnightly digest via email.
GDPR simplification
The European Commission has published its final proposal to simplify and clarify the derogation from the record-keeping obligation under Art. 30 of the GDPR. The scope of the derogation in the amending regulation will be broadened to cover small mid-cap companies (SMCs) and organisations with fewer than 750 employees.
The proposal also clarifies that SMCs are exempt from the record-keeping obligation unless their processing is likely to result in a ‘high risk’ to data subjects, as defined in Art. 35 of the GDPR, and that processing special categories of personal data under Art. 9(2)(b) does not, as such, trigger the obligation to maintain records.
Meta AI training in the EU will proceed
Concerning Meta’s training of its AI models on social network user data, the Hamburg data protection regulator, in agreement with the other German data protection authorities, has decided against being the only EU supervisory body to issue a national provisional injunction against Meta’s AI training. Given the planned evaluation of Meta’s approach by the EU supervisory authorities, and following the decision of the Cologne Higher Regional Court (which held that the use of such data for AI training is lawful under Art. 6(1)(f) of the GDPR without requiring user consent, citing Meta’s legitimate interest), an isolated emergency procedure for Germany is not the appropriate instrument to resolve the existing differences in assessment across Europe.
More legal updates
CJEU decision on Meta’s “Pay or Ok” model: The European Court of Justice (CJEU) has ruled in the case of Meta Platforms Ireland Ltd v. European Data Protection Board (EDPB). The case concerned the Board’s opinion on the circumstances under which so-called “pay or consent” models – where users of large online platforms are invited either to consent to the processing of personal data for behavioural advertising or to pay for the service to avoid such processing – can meet the conditions for valid consent under the GDPR.
The EDPB considered that, in most cases, large online platforms were unlikely to be able to obtain valid consent when users were given only two options: consent to the processing of all their data for marketing purposes, or pay. The EU’s top court rejected Meta’s claim, holding that since the opinion was advisory, it had no legally binding effect on third parties and could therefore neither be annulled nor give rise to a claim for damages.
China facial recognition: According to digitalpolicyalert.org, the Cyberspace Administration of China’s rules on the secure use of facial recognition technology go into effect on 1 June. The rules cover organisations that process this data in China, except for research and algorithm training. They require express consent, transparency, impact assessments, security measures, and purpose limitation. Additionally, facial recognition may not be the only verification technique when other options exist, and its application in public areas is restricted to public safety purposes, excluding private areas.
Personal data breach handling
Under the GDPR, data controllers have a general obligation to report personal data breaches to a supervisory authority, unless a breach is unlikely to result in a risk to the rights and freedoms of natural persons. In addition, data controllers must notify the data subjects themselves if a breach is likely to result in a high risk to their rights and freedoms. This reporting obligation has several advantages: among other things, it is a tool that contributes to the ongoing improvement of data protection.
If a controller fails to report an incident, the authority may make use of its corrective powers. Against this background, the Danish data protection authority has just updated the remaining parts of its guidance on handling personal data breaches (in Danish).
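The notification logic described above (Art. 33 and 34 of the GDPR) can be sketched as a simple decision function. This is a hypothetical simplification for illustration only; a real breach assessment weighs many more factors than a single risk level.

```python
# Simplified sketch of the GDPR breach-notification duties described
# above. The three-step risk scale is an assumption for illustration.

def notification_duties(risk: str) -> dict:
    """Map a breach risk level to notification duties.

    risk: 'unlikely', 'risk', or 'high' (simplified scale).
    """
    return {
        # Art. 33: report to the supervisory authority unless the
        # breach is unlikely to result in a risk to data subjects.
        "notify_authority": risk in ("risk", "high"),
        # Art. 34: notify the data subjects themselves only where
        # the breach is likely to result in a high risk.
        "notify_data_subjects": risk == "high",
    }

print(notification_duties("unlikely"))
print(notification_duties("high"))
```

The asymmetry is the point: the threshold for telling the authority ("a risk") is deliberately lower than the threshold for telling the affected individuals ("a high risk").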
More from supervisory authorities
Employer obligations: The IDPC of Malta published a useful set of FAQs relating to the employment sector. These FAQs seek to address common questions employers may have about their data protection obligations under the GDPR, particularly how to handle the personal data of their employees. The FAQs cover biometric data processing, police conduct certificates, pre-employment medical checks, employee monitoring, management of employee email accounts, and data retention. The FAQs are available in English.
AI impact assessment standard: The International Organization for Standardization (ISO) has published ISO/IEC 42005, guidance for organisations conducting AI system impact assessments. These assessments focus on understanding how AI systems — and their foreseeable applications — may affect individuals, groups, or society at large. The standard supports transparency, accountability and trust in AI by helping organisations identify, evaluate and document potential impacts throughout the AI system lifecycle.
Age assurance online: The Vermont Legislature passed the Vermont Age-Appropriate Design Code (AADC). The Vermont AADC joins several other states’ efforts in protecting kids’ privacy, autonomy, and online safety by prohibiting abusive data and design practices. The bill now awaits the Governor’s approval. According to EPIC legal analysis, significant provisions in it include:
- Requiring covered businesses to configure minors’ default privacy settings to the highest level of privacy.
- Providing minors with the ability to limit unwanted adult contact.
- Regulating how minors’ data is used to ensure that personalised feeds are not driven by surveillance data, but instead by minors’ expressed preferences.
- Requiring companies to be transparent about how they use minors’ data.
- Requiring the Attorney General to update rules prohibiting abusive data processing or design practices that “lead to compulsive use or subvert or impair user autonomy, decision making, or choice”, etc.
Email security
Germany’s Federal Office for Information Security (BSI) issued a cybersecurity recommendation on upgrading email security. The guide is aimed at all companies that send and receive emails within their domain. Using concrete, practical examples, such as Microsoft Exchange Online and Google Workspace with Gmail, it demonstrates how the cybersecurity of email communication with customers, other companies, or third parties can be improved. Often, the regulator notes, this requires only a few steps, such as adjusting the configuration of the groupware the company uses or implementing the SPF, DKIM, and DMARC standards more carefully.
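As a rough illustration of the kind of checks such guidance implies, the sketch below inspects SPF and DMARC TXT record strings for a sufficiently strict policy. The record strings are made-up examples, not BSI-supplied ones, and real deployments should validate records against the full RFC 7208/7489 grammars.

```python
# Minimal, assumption-laden checks on SPF and DMARC TXT record strings.
# Record values below are hypothetical examples for illustration.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record ('tag=value; tag=value; ...') into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def spf_is_strict(record: str) -> bool:
    """A strict SPF record ends by failing (or soft-failing) unlisted senders."""
    return record.startswith("v=spf1") and record.split()[-1] in ("-all", "~all")

def dmarc_is_strict(record: str) -> bool:
    """A strict DMARC policy at least quarantines mail that fails checks."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

print(spf_is_strict("v=spf1 include:_spf.example.com -all"))
print(dmarc_is_strict("v=DMARC1; p=reject; rua=mailto:dmarc@example.com"))
print(dmarc_is_strict("v=DMARC1; p=none"))
```

A `p=none` DMARC policy only monitors and does not instruct receivers to act on failures, which is why the last check returns False.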
Legitimate interest
The Estonian data protection agency, meanwhile, answers questions on legitimate interest: when and how can it be relied on for data processing? While other legal bases, such as consent, a contract or contract negotiations, require the data subject’s own will or initiative (eg, consenting to receive campaign offers, or submitting a CV for a job), legitimate interest is always the data controller’s initiative, whether for its own benefit or that of a third party. To rely on legitimate interest, however, a legitimate interest analysis must first be carried out; it should be in writing, verifiable and traceable, detailing how the result was reached. Three conditions must be met simultaneously:
- The controller or the third party, or third parties receiving the data, have a lawful legitimate interest in the processing.
- The processing of personal data is necessary for the exercise of a legitimate interest.
- The fundamental rights and freedoms of the data subject do not override that legitimate interest.
Additionally, the public sector cannot rely on legitimate interest except for activities unrelated to its main tasks laid down by law. Nor can legitimate interest be relied on when processing special categories of data (eg, health data).
AI and personal data
Finland’s privacy regulator published guidelines on taking data protection into account in the development and use of artificial intelligence systems (in Finnish). An organisation must choose a suitable legal basis for processing personal data; this is equally required when personal data is used to train an AI system. The guidance describes the applicability of the different legal bases in more detail. An organisation must also assess the AI system’s data protection risks before any personal data is processed, from the perspective of the people whose data it is. Based on that risk assessment, the organisation must decide, for example, on the necessary security measures. The guidance also explains how to comply with the data protection principles set out in the GDPR, such as data minimisation, purpose limitation and the information obligation.
IT systems’ new security measures
The Danish data protection agency is adding two new measures to its catalogue of measures, with a focus on preventing security breaches through hacking. The two new measures are titled: a) Security management and maintenance of software, and b) Network segmentation. The regulator notes that there is nothing revolutionary about them, but many of the breach cases it receives could have been avoided by following what they describe. For instance, it has seen several breaches related to IoT, where software in surveillance cameras does not seem to be handled with the same attention as other IT equipment, even though that very equipment can provide an easy access route into the internal network.
Lufthansa data breach
The Hungarian data protection agency announced a data breach involving the Lufthansa Group. Unauthorised access occurred in a system operated by an external service provider that handles hotel accommodation for passengers on cancelled flights. As a result, unauthorised persons had access to data such as the passenger’s name, gender, mobile phone number, flight number, a reference to travelling with a small child, and the date of the hotel reservation. Lufthansa said no payment details were affected and there was no evidence of any data being publicly disclosed.
The incident may affect those who received hotel vouchers for cancelled flights between November 2, 2019 and January 22, 2024. The company has since taken the necessary security measures and notified data protection authorities. Passengers are advised to be cautious, especially when receiving calls and messages from unknown sources.
Aggressive real estate brokerage
The Italian regulator Garante spotlighted a new and worrying phenomenon of aggressive telemarketing in the real estate brokerage sector. Thousands of potential sellers and buyers were contacted via phone calls and WhatsApp messages, without having given valid consent to receive promotional communications, by real estate agencies that used highly detailed lists provided by a service company. The lists amounted to a mass mapping of the territory, “enriched” with landline and mobile telephone numbers and with cadastral information. Every owner residing in an area of commercial interest to the agencies was, in effect, individually filed.
Similar investigations were concluded by the French CNIL, resulting in fines against CALOGA and SOLOCAL MARKETING SERVICES for canvassing prospects without their consent and transmitting their data to partners, again without consent. The companies acquired prospects’ data mainly from other data brokers and from publishers of competition and product-testing sites (so-called ‘first-time collectors’). They used this data to canvass people by email on behalf of their advertising clients, and could also transmit some of it to their customers so that the latter could carry out prospecting themselves.
In other news
Excel spreadsheet: The UK ICO reprimanded the London Borough of Hammersmith and Fulham (the local council) after it left the personal information of 6,528 people exposed for almost two years. The breach occurred when the council responded to a freedom of information request made via the WhatDoTheyKnow.com (WDTK) website in 2021: its response included an Excel spreadsheet containing 35 hidden worksheets. Once the breach was discovered, the information was immediately removed. Of the 6,528 people affected, 2,342 were children, and the information relating to them was classed as sensitive as it included details of children in care and unaccompanied asylum-seeking children.
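Incidents like this one are detectable before publication: an .xlsx file is a ZIP archive whose `xl/workbook.xml` part lists every sheet, and hidden sheets carry a `state="hidden"` or `state="veryHidden"` attribute. The sketch below, using a hypothetical workbook.xml fragment, shows how such sheets can be flagged with only the standard library.

```python
# Minimal sketch: flag worksheets that are not visible in the Excel UI
# by inspecting the workbook.xml part of an .xlsx file. The sample XML
# and sheet names below are hypothetical.
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

def hidden_sheets(workbook_xml: str) -> list:
    """Return names of sheets marked hidden or veryHidden."""
    root = ET.fromstring(workbook_xml)
    return [
        sheet.get("name")
        for sheet in root.iter(f"{NS}sheet")
        if sheet.get("state") in ("hidden", "veryHidden")
    ]

sample = """<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
  <sheets>
    <sheet name="FOI response" sheetId="1"/>
    <sheet name="Case data" sheetId="2" state="hidden"/>
    <sheet name="Internal notes" sheetId="3" state="veryHidden"/>
  </sheets>
</workbook>"""

print(hidden_sheets(sample))  # ['Case data', 'Internal notes']
```

In practice the same check would be run against `zipfile.ZipFile(path).read("xl/workbook.xml")` for each outgoing spreadsheet; "veryHidden" sheets are particularly risky because they cannot be unhidden from the normal Excel menu at all.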
Dutch municipalities: The Dutch data protection authority AP will be visiting municipalities on a random basis in the coming months. These inspections aim to check how municipalities deal with the personal data and privacy of citizens and to guide municipalities in the right direction, where necessary. During the visits, the AP will be looking at:
- Do municipalities have a complete and up-to-date overview of everything they do with personal data?
- Do municipalities properly identify potential privacy risks before they use personal data for something?
- Do municipalities have their internal privacy supervision properly arranged?
- Do municipalities have a data protection officer who can act freely and independently?
Spanish fines statistics: The Spanish AEPD received 19,000 complaints in 2024, with AI, data spaces and neurodata among its priority challenges. The most frequent complaints related to video surveillance, internet services, commerce, transportation and hospitality. The sectors attracting the highest fines were energy/water companies, financial institutions/creditors, internet services, telecommunications, and fraudulent contracting. The agency also led 22 cross-border cases as lead authority and cooperated as a stakeholder in 348 others. The year closed with almost 120,000 data protection officers registered with the agency.
In case you missed it
Bank data: The Swedish data protection authority, together with SEB, Nordea, Swedbank and Handelsbanken, has looked at some of the legal conditions for increasing information sharing between banks to combat money laundering, terrorist financing and fraud. The project has, among other things, investigated whether there is a legal basis for a bank to share information about customers within the framework of another bank’s customer due diligence process and risk assessment.
The regulator concluded that legislative amendments were likely needed to enable the sharing of personal data that the banks wish to implement within the framework of the current project.
Replika AI fine: The Italian regulator Garante imposed a 5 million euro fine on the US-based company Luka Inc., which operates the chatbot Replika, and launched an independent investigation to assess whether personal data is being properly processed by the generative AI system behind the service. The chatbot features both a written and a voice interface, allowing users to ‘generate a virtual companion’ that can take on the role of a confidant, therapist, romantic partner, or mentor. The authority also found that the company had not implemented any age verification mechanisms — either at registration or during use of the service — despite having declared that minors were excluded from potential users.
Corporate digital responsibility: Germany’s Federal Office for Information Security (BSI) has published a white paper on “Corporate Responsibility in Digital Consumer Protection” (in German). A central component of the white paper is information security in consumers’ everyday use of digital offerings. Various fields of action are highlighted, including education, awareness-raising, product safety throughout the entire life cycle, communication in the event of a crisis or incident, and ecological sustainability. Interested parties are invited to actively participate in the discussion and provide feedback.