Data protection digest 3 – 17 Apr 2025: Meta AI training restarts in Europe, virtual assistants vs data privacy

Meta AI training in EEA

According to the Norwegian regulator Datatilsynet, Meta will start training its AI service on photos, posts and comments from Facebook and Instagram users in the EEA at the end of May 2025. The purpose of the training is to develop and improve Meta’s generative AI services, based on users’ content and interactions with those services. The training will only include content that is published publicly, and Meta will only use photos and posts published by users over the age of 18 to train the AI model. The training covers both historical and future information that is shared publicly. If you do not want your posts and photos to be used to develop Meta’s AI, you can object. If you have both a Facebook and an Instagram account, or multiple accounts, the objection applies to all accounts added to the same ‘Account Center’. You do not need to justify your objection, and Meta has stated that it accepts all objections.

Stay up to date! Sign up to receive our fortnightly digest via email.

GDPR supervision in Germany to be eased?

According to a DLA Piper analysis, the future German government plans to centralise the country’s data protection supervisory authority structure and to ease the regulatory burden for small and medium-sized companies. Responsibilities and competencies for the private sector in all 16 states are to be bundled into one authority, the Federal Commissioner for Data Protection and Freedom of Information (BfDI).

As a result, data security breaches would no longer need to be reported to multiple state supervisory authorities where impacted data subjects reside, and data controllers and processors would only need to collaborate with one national supervisory authority. The German plan coincides with the Commission’s recently announced plans to amend or simplify certain GDPR obligations for small and medium-sized companies, among other measures.

More legal updates

Cloud computing and data sharing in the EU: Before the Data Act becomes applicable on 12 September 2025, the Commission is providing guidelines on non-binding Model Contractual Terms (MCTs) for data sharing and Standard Contractual Clauses (SCCs) for cloud computing contracts. These business-to-business (B2B) models are intended to help especially small and medium-sized companies and other organisations which may lack the resources to draft and negotiate fair contractual clauses. The Commission also seeks feedback on the preparatory work for the Cloud and AI Development Act and the single EU-wide cloud policy for public administrations and public procurement. The Commission would like to gather different stakeholders’ views on the EU’s capacity in cloud and edge computing infrastructure, especially in light of increasing data volumes and demand for computing resources, both fuelled by the rise of compute-intensive AI services. Submissions are open from 9 April to 4 June.

EU cybersecurity: To strengthen the EU’s resilience against rising cyber threats, the Commission seeks input to evaluate and revise the 2019 Cybersecurity Act. This initiative reflects the Commission’s ongoing commitment to simplifying the rules and facilitating their implementation. Interested parties, including Member State competent authorities, cybersecurity authorities, industry and trade associations, researchers and academia, consumer organisations, and citizens, are invited to give their views on the Have Your Say portal until 20 June. In parallel, following the publication of the Action Plan in January, the Commission seeks contributions to enhance cybersecurity for hospitals and healthcare providers, as well as for the implementation of the European Health Data Space. Citizens, healthcare professionals, healthcare authorities, patients, compliance and data privacy professionals, cybersecurity professionals, organisations, and academia, among others, are invited to share their views. The deadline for contributions is 30 June.

EDPB on blockchain technology

The EDPB has adopted long-awaited guidelines on the processing of personal data through blockchain technologies. A blockchain is a distributed digital ledger system that can confirm transactions and establish who owns a digital asset (such as cryptocurrency) at any given time. Blockchains can also support the secure handling and transfer of data, ensuring its integrity and traceability. Depending on the purpose of processing for which blockchain technology is used, different categories of personal data may be processed.

The guidelines highlight, among other things, the need for Data Protection by Design and by Default and adequate organisational and technical measures. As a general rule, storing personal data on a blockchain should be avoided if this conflicts with the GDPR (eg, in fulfilling data subjects’ rights to rectification and erasure). The guidelines provide examples of different techniques for data minimisation and for handling and storing personal data.
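One minimisation technique often discussed in this context is to keep personal data off-chain and record only a salted cryptographic commitment on the ledger. The sketch below is a minimal Python illustration of that idea, not an implementation drawn from the guidelines themselves; the in-memory store, the function names and the choice of SHA-256 are assumptions made for demonstration.

```python
import hashlib
import secrets

# Off-chain store: the only place the personal data itself lives.
off_chain_store = {}

def commit(personal_data: str) -> str:
    """Store data off-chain; return a salted hash (commitment) for the ledger."""
    salt = secrets.token_bytes(32)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    off_chain_store[digest] = (salt, personal_data)
    return digest  # only this opaque value would go on the immutable chain

def erase(digest: str) -> None:
    """Approximate erasure: delete the off-chain record and its salt.
    The on-chain hash remains, but can no longer be linked to a person."""
    off_chain_store.pop(digest, None)

def verify(digest: str, claimed: str) -> bool:
    """Check a claimed value against the commitment (requires the stored salt)."""
    record = off_chain_store.get(digest)
    if record is None:
        return False
    salt, _ = record
    return hashlib.sha256(salt + claimed.encode()).hexdigest() == digest
```

Deleting the off-chain record and its salt leaves the immutable on-chain digest practically unlinkable to the data subject, which is one way practitioners approximate the right to erasure on append-only ledgers.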

Consent management

Germany’s Consent Management Ordinance came into effect on 1 April. It regulates obligations for trusted consent management service providers, mandating that certain recognised services store user settings and allowing voluntary integration by digital service providers. In addition, it protects users’ data portability rights and restricts consent management services from processing personal data beyond the purpose for which it was originally collected and stored.

Data breach statistics

The Estonian data protection regulator estimates that breach reports in the first quarter of 2025 increased by 48% compared to the same period in 2024. In January, February and March, organisations notified the agency of a total of 65 data breaches; in 30 cases, the breach involved the public sector or an agency it manages. The most common causes since the start of the year are negligence and human error, technical errors in information systems, and unlawful access to personal data caused by cyberattacks. In particular:

  • In several cases, employees abused the access rights granted to them to perform their duties, viewing personal data out of curiosity or in order to distribute it on social networks or leak it to the press.
  • An employee who left an educational institution, and who was the sole administrator of the school’s Facebook group, refused to transfer the group’s administration rights to the school; he renamed the group and used it to disparage his former employer.
  • A popular e-learning platform used in schools was targeted by a cyberattack in which an attacker, likely using credentials obtained from previous data leaks unrelated to the platform, attempted to hijack user accounts. The platform did not require multi-factor authentication.

More from supervisory authorities

AI Privacy Risks and Mitigation: To help developers and users of large language model-based systems handle privacy issues, the EDPB has published a new practical guide. The paper offers organisational and technical measures to maintain data protection in line with GDPR Art. 25 (data protection by design and by default) and Art. 32 (security of processing). The guide is not meant to replace a Data Protection Impact Assessment (DPIA) under GDPR Art. 35; rather, by addressing privacy issues unique to LLM systems, it enhances the DPIA process.

Mobile apps: The French CNIL has published a revised version of its 2024 recommendations for better protecting privacy in mobile applications (in French). It is aimed at professionals working in the mobile application sector as data controllers and processors, namely: a) app publishers; b) app developers; c) software development kit (SDK) providers; d) operating system providers; e) app store providers. The recommendation covers all types of applications, which can be:

  • “native”: developed in the programming language specific to the operating system on which they run;
  • “hybrid”: developed with web programming languages and technologies, then packaged into an application using dedicated tools;
  • “progressive web apps” (PWAs): dynamic web pages presented to the user in the form of apps.

AI public sandbox: The CNIL has also published the results of its “sandbox” personalised support programme for organisations seeking advice on how to deploy an innovative project:

  • France Travail (the French unemployment agency): a tool that helps its advisors offer personalised training courses adapted to the needs of job seekers;
  • Nantes Métropole’s Ekonom’IA project: raising awareness among residents about their water consumption levels through an AI program; and
  • The RATP’s (the Paris public transport operator) PRIV-IA project: studying algorithmic processing of images from new video capture technologies (so-called time-of-flight cameras).

Emotion recognition under the AI Act

A recent analysis by DLA Piper examines two real-world uses of emotion recognition in AI-supported work environments to highlight the effects of the recently adopted EU AI Act. The first case study concerns emotion analysis of sales conversations: the global company’s chief revenue officer, who is based in the US, wants to implement new software that would give staff members worldwide consistent sales training by comparing the calls made by top performers with those of the lowest performers.

In the second case study, a busy consulting business wants to use a remote application and onboarding process to broaden its pool of candidates to include people who want to apply for wholly remote positions. The company is eager to implement software that enables interview scheduling through a platform with cutting-edge AI-powered capabilities. One element of the system analyses applicants’ speech tones, facial expressions, and other non-verbal indicators.


In other news

Brute force attack: The UK’s Information Commissioner’s Office has issued the law firm DPP Law a £60,000 fine following a cyber-attack which resulted in highly sensitive and confidential personal information being published on the dark web. The brute force attempts targeted an administrator account for a legacy case management system that was only available online sporadically. At the time of the incident, DPP had multi-factor authentication in place for connecting to its network via a VPN; however, the administrator account did not have MFA because it operated as a service account.

Search services: Sweden’s IMY has received a large number of complaints against search services that publish personal data about the Swedish population. Many of these complaints concern search services that publish information about violations of the law, such as criminal convictions. IMY is now initiating inspections of two of these search services, Lexbase.se and krimfup.se. In a 2024 legal opinion, IMY concluded that it is competent to review search services that hold a so-called certificate of publication, and the Supreme Court recently ruled that releasing large numbers of criminal convictions online is not compatible with EU law.

Unwanted insurance: The Romanian data protection agency fined the operator Banca Transilvania SA the equivalent of 5,000 euros. Following a complaint, the data subject claimed that their data had been processed without consent in connection with an insurance policy arranged by the bank. It was found that, although the petitioner had terminated his real estate loan contract, he was erroneously issued a new natural disaster insurance policy ancillary to the terminated loan contract.

Employee email accounts

The Maltese regulator IDPC published a set of FAQs on the management of employee email accounts once an employee leaves an organisation. While employers have a legitimate interest in maintaining business continuity following an employee’s departure, their operational concerns must be balanced against the data protection rights of outgoing employees and any other individuals involved, as set out in the GDPR. This includes handling work email accounts in a manner that is proportionate, transparent, and respectful of the confidentiality of any personal correspondence in the account. The most common real-life questions include:

  • Can an employer set up automatic email forwarding following an employee’s departure?
  • Can an employer set up an automatic reply message following an employee’s departure?
  • As an employer, what are some general practical steps I can take to manage employee email accounts in a manner that complies with the GDPR?

In case you missed it 

AI assistants: Privacy International questions whether we can trust the developers of AI assistants to protect our privacy and security. AI assistants need to access apps, data and device services to deliver on their promise to operate as agents capable of doing work for us. This is a significant change from existing apps and voice assistants, which request access for specific features: the messaging app Signal asks to access your contacts to identify which of them already have a Signal account, and a navigation app requires access to your phone’s location services and hardware to guide you.

What makes an AI assistant different from apps is the level of access it constantly requires to function. Prioritising automation as one of the main goals of AI assistants means that developers will be tempted to allow processing of your data with as little friction as possible.

Opt out from Tesla processing your data: Lastly, a piece from The Guardian examines how Tesla owners may safeguard their data and privacy. Any connected car must track and gather a lot of information about you in order to deliver its capabilities, and a detailed picture of your life and movements may be created from these data, sent via GPS trackers, sensors and other devices. The Guardian studied Tesla’s privacy policy, talked to privacy experts, and even asked the company’s AI chatbot how to share as little data as possible with Tesla. There are safety measures you can, and in many situations ought to, take if you own a Tesla; however, adjusting these settings to share the least possible amount of data will shut off access to many of your car’s functions.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
