Data protection digest 3-17 July 2025: AI-generated voice and visuals’ potential to violate people’s rights and freedoms

A recent Guardian article caused a stir when it reported that an AI-generated band had reached one million plays on Spotify within a couple of weeks. Only after releasing two albums did the group, called “The Velvet Sundown”, admit that its music, images and backstory were created by AI. The story has triggered a debate on authenticity and on the absence of any legal obligation to label music created by AI-generated artists so that consumers can make informed choices.

For data protection professionals, the story opens an even broader discussion of the risks that voice and image generation technologies pose to the rights and freedoms of individuals.

AI-generated speech and images

In a recent opinion, the Latvian data protection regulator DVI presumed that, when an image is created with the help of AI from scratch (eg, by entering the keywords “children playing”), no personal data is processed, as the image does not refer to a specific real person. However, in many cases an image is created using a photograph or a visual description of a specific person. If such an image is later associated with an identifiable person, its generation and publication may be considered processing of personal data. Although the use of synthetic images can raise doubts about the veracity of the content, the regulator notes that AI-generated visual materials still allow the necessary information to be conveyed to the audience while respecting people’s privacy (eg, in fundraising campaigns for children in distress).

Similarly, voice generation technology is becoming part of everyday life. The Liechtenstein data protection commissioner, in a recent interview, reminds us that cloned voices can be deceptively similar to genuine ones and can therefore easily be used to mislead third parties, for example in fraudulent calls or fake audio recordings of politicians, celebrities or even colleagues. Anyone who makes their voice publicly available, or works with language professionally, is providing potentially valuable training material for AI systems. It is therefore recommended to provide clear copyright notices and, where necessary, to agree contractually on use by third parties. General or tacit consent to processing is not sufficient; explicit, informed consent is required. The data controller may also be obliged to conduct a data protection impact assessment (DPIA) if the processing is expected to pose a high risk to the rights and freedoms of natural persons.

EU AI Code of Practice

On 10 July 2025, the European Commission published the final version of the General-Purpose Artificial Intelligence Code of Practice. The document helps industry comply with the AI Act’s legal obligations on the safety, transparency and copyright of general-purpose AI models. In the following weeks, Member States and the Commission will assess its adequacy. Additionally, the code will be complemented by Commission guidelines on key concepts related to general-purpose AI models, to be published later in the month. More information on the code is available in this dedicated Q&A.

US child privacy updates

On 1 July, Connecticut’s Act concerning Social Media Platforms and Online Services, Products and Features entered into force. According to a digitalpolicyalert.org analysis, the act expands the Connecticut Data Privacy Act, defining “heightened risk of harm to minors” to include risks such as anxiety disorders, compulsive use, physical violence, harassment, sexual exploitation, unlawful distribution of restricted substances, and unlawful gambling. The act requires owners of social media platforms to incorporate an online safety methodology by 1 January 2026. Data controllers must use reasonable care to avoid such risks, conduct data protection assessments, and implement mitigation plans. Processing of minors’ personal data for targeted advertising, sales, or profiling is prohibited, and the collection of precise geolocation data requires safeguards. Impact assessments are mandated for profiling-based services, detailing purpose, risks, data categories, and transparency measures.

In parallel, Oregon will begin to regulate the use of minors’ information and the sale of users’ location data (regardless of age) with an update to its Oregon Consumer Privacy Act. The revisions take effect on 1 January 2026. As amended, those subject to the law will not be able to profile, or serve targeted advertising to, anyone under 16. Maryland will impose a similar prohibition on the same date, but for the information of those under 18, the eyeonprivacy.com law blog reports.

Anonymisation

The Asia Pacific Privacy Authorities (APPA) have published an overview of basic anonymisation concepts and practical steps that can be put in place to enable organisations to kickstart their anonymisation journey. Proper anonymisation requires both good knowledge of the data context and competency with the technicalities of anonymisation. Where the data controller does not have the necessary level of skills, they should consider engaging an expert to perform the anonymisation.

It is also recommended to refer to the ISO standard titled ‘Information Security, Cybersecurity and Privacy Protection – Privacy Enhancing Data De-identification Framework’ (ISO/IEC 27559:2022). This standard recognises that anonymisation involves not only the data itself but also the context in which data is shared and used, as well as the governance practices in place.  
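
To make these concepts concrete, below is a minimal, hypothetical sketch (in Python, with invented column names and data) of two basic techniques such overviews typically describe: suppressing direct identifiers and generalising quasi-identifiers. It is an illustration only, not a complete anonymisation pipeline.

```python
# Hypothetical illustration of two basic anonymisation techniques:
# suppression of direct identifiers and generalisation of quasi-identifiers.
# Column names and values are invented for this example.
import pandas as pd

records = pd.DataFrame({
    "name":      ["Alice Tan", "Boris Lim", "Carla Ong"],  # direct identifier
    "postcode":  ["238823", "238857", "049315"],           # quasi-identifier
    "age":       [34, 37, 58],                             # quasi-identifier
    "diagnosis": ["asthma", "asthma", "diabetes"],         # sensitive attribute
})

# 1. Suppress direct identifiers entirely.
anonymised = records.drop(columns=["name"])

# 2. Generalise quasi-identifiers: coarsen the postcode to its first two
#    digits and replace the exact age with a ten-year band.
anonymised["postcode"] = anonymised["postcode"].str[:2] + "****"
decade = anonymised["age"] // 10 * 10
anonymised["age"] = decade.astype(str) + "-" + (decade + 9).astype(str)

print(anonymised)
```

Techniques like these reduce, but do not eliminate, re-identification risk; assessing the residual risk against the data context is exactly where the competency (or the external expert) mentioned above comes in.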

Audience consent exemption

The management of a website or mobile application generally requires the use of traffic or performance statistics, which are often essential for the provision of the service. Cookies placed for this purpose may be exempt from consent under certain conditions, states the French CNIL. In order to limit themselves to what is strictly necessary for the provision of the service and thus be exempt from consent, these trackers must:

  • be used for a purpose strictly limited to measuring the audience of the site or application (performance measurement, detection of navigation problems, optimisation of technical performance or ergonomics, estimation of the required server capacity, analysis of the content consulted);
  • be used to produce anonymous statistical data only.

Conversely, to be exempt from consent, these trackers must not:

  • lead to data being cross-referenced with other processing operations or to non-anonymous data being transmitted to third parties;
  • allow the tracking of individuals’ browsing across different applications or websites. Any solution that uses the same identifier across multiple sites (for example, via cookies placed on a third-party domain loaded by multiple sites) to cross-reference or deduplicate data, or to measure a unified reach rate, is excluded. A minimal sketch of what a compliant measurement tool could look like follows this list.
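
To illustrate the spirit of these conditions, here is a minimal, hypothetical Python sketch of a first-party measurement endpoint that produces anonymous aggregate statistics only, keeps no per-visitor identifier, and transmits nothing to third parties. It illustrates the principle; it is not a statement of what the CNIL would deem exempt in a given case.

```python
# Hypothetical sketch of consent-exempt audience measurement: events are
# reduced to anonymous aggregates, no identifier is reused across sites,
# and nothing identifying is stored or shared with third parties.
from collections import Counter
from datetime import date

page_view_counts: Counter[str] = Counter()

def record_page_view(path: str, client_ip: str) -> None:
    """Count a page view without retaining anything about the visitor."""
    # The client IP could be used transiently (e.g. for bot filtering),
    # but it is deliberately discarded and never persisted.
    del client_ip
    page_view_counts[path] += 1

def daily_report() -> dict:
    """Export anonymous statistics only: per-path view totals."""
    return {"date": date.today().isoformat(), "views": dict(page_view_counts)}

record_page_view("/pricing", "203.0.113.7")
record_page_view("/pricing", "198.51.100.23")
print(daily_report())  # {'date': ..., 'views': {'/pricing': 2}}
```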

AI system data quality

The Federal Office for Information Security in Germany presented a methodological guide called QUAIDAL (in German), aimed primarily at providers of high-risk AI systems, for which the AI Act defines detailed requirements regarding documentation, data management, and continuous quality assurance. The modular design of the guideline allows project managers and development teams to select appropriate measures to ensure data quality at an early stage and systematically demonstrate their implementation. Furthermore, this modular concept can be flexibly expanded in the future to accommodate new technological developments. 
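
The guide itself is in German and tied to the AI Act’s documentation duties, but its underlying idea of small, selectable, demonstrable quality measures can be sketched generically. Below is a hypothetical Python illustration (not taken from QUAIDAL) of two such checks, each producing a result that could be logged as evidence of implementation.

```python
# Hypothetical sketch of early, documentable data quality checks for an AI
# training set. The checks and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def check_completeness(rows: list[dict], required: list[str]) -> CheckResult:
    """Flag records with missing or empty required fields."""
    missing = sum(1 for r in rows for f in required if r.get(f) in (None, ""))
    return CheckResult("completeness", missing == 0, f"{missing} missing values")

def check_label_balance(rows: list[dict], label: str, min_share: float) -> CheckResult:
    """Warn if the smallest class falls below a minimum share of the data."""
    counts: dict[str, int] = {}
    for r in rows:
        counts[r[label]] = counts.get(r[label], 0) + 1
    share = min(counts.values()) / len(rows)
    return CheckResult("label balance", share >= min_share,
                       f"smallest class share: {share:.2%}")

data = [
    {"text": "ok", "label": "pos"},
    {"text": "bad", "label": "neg"},
    {"text": "fine", "label": "pos"},
]
for result in (check_completeness(data, ["text", "label"]),
               check_label_balance(data, "label", min_share=0.25)):
    print(f"[{'PASS' if result.passed else 'FAIL'}] {result.name}: {result.detail}")
```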

More from supervisory authorities

Emotion recognition: The Dutch data protection regulator AP notes that organisations are increasingly using AI to recognise emotions in people: your voice can be analysed to gauge your emotional state during a customer service call; your smartwatch can measure your stress; or a chatbot can recognise your emotions and respond more empathetically.

However, emotion recognition is based on controversial assumptions about emotions and their measurability. It is not always clear how AI systems recognise emotions, nor whether the results are reliable. People are also not always aware that emotion recognition is being used, nor of the data involved. Finally, in education and the workplace, the use of AI systems for emotion recognition is already prohibited under the EU AI Act.

LLMs and data subject rights: The German Federal Data Protection Commissioner has launched a consultation, running until 10 August, on processing personal data in large language models in a way that complies with data protection law. The main topics include the limits of anonymisation, the memorisation of personal information, the dangers of data extraction, and the protection of GDPR data subject rights in AI systems. The results will aid in the creation of compliant methods for handling personal data memorised by AI, as summarised in a digitalpolicyalert.org legal blog post.

EU minors data: The European Commission has published guidelines on the protection of minors under the Digital Services Act. The guidelines aim to ensure a safe online experience for children and young people and apply to online platforms accessible to minors (excluding micro and small enterprises). Suggested measures include setting minors’ accounts to private by default, so that their personal information, data and social media content are hidden from users they are not connected with, reducing the risk of unsolicited contact by strangers; effective age assurance methods; prohibiting the downloading or screenshotting of minors’ content; measures to improve moderation and reporting tools; and much more.

Data-driven pricing

The Future of Privacy Forum reports that US state lawmakers (eg, through a new New York bill) are seeking to regulate various pricing strategies that fall under the umbrella of “data-driven pricing” (often algorithm-based): practices that process user data to continuously inform decisions about the prices and products offered to consumers. These practices fall into one of the following categories (a simplified sketch of the dynamic pricing category follows the list):

  • Reward or loyalty program: A company offers a discount, reward, or other incentive to repeat customers who sign up for the program. 
  • Dynamic pricing: Rapidly changing the price of a particular product or service based on real-time analysis of market conditions and consumer behavior.
  • Consumer segmentation or profiling: A profile is created for a customer based on their personal data, including behavior and/or characteristics, and they are placed within a particular audience segment. 
  • Search or product ranking: Altering the order in which search results or products appear, to give more prominence to certain results, based on general consumer data or specific customer behavioral data. 
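
To make the dynamic pricing category concrete, here is a deliberately simplified, hypothetical sketch of such a rule; the function, parameters and thresholds are invented for illustration and mirror no specific vendor’s system.

```python
# Hypothetical dynamic pricing rule: the price reacts to a real-time demand
# signal within fixed bounds and is capped relative to a competitor's price.
def dynamic_price(base_price: float, demand_ratio: float,
                  competitor_price: float) -> float:
    """demand_ratio > 1.0 means demand currently exceeds its typical level."""
    surge = min(max(demand_ratio, 0.8), 1.5)          # bound the adjustment
    adjusted = base_price * surge
    return round(min(adjusted, competitor_price * 1.05), 2)  # stay competitive

print(dynamic_price(base_price=100.0, demand_ratio=1.3,
                    competitor_price=120.0))  # 126.0
```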

Age-verification in shops

The French CNIL also considers that the use of “augmented” cameras to estimate the age of tobacco shop customers, in order to control the sale of prohibited products to minors, is neither necessary nor proportionate. Currently deployed devices are enabled by default and scan the faces of all people in their field of vision. They then indicate, by a green or red light, whether or not the estimated age of the person exceeds a predetermined threshold (18, 21 or another age). The law requires tobacconists to check that their customers are of legal age before selling tobacco or alcohol. However, these devices can only estimate a person’s age, without certainty, and they carry a risk of error, like any artificial intelligence system.

To fulfil their age control obligations, tobacconists must therefore resort to other solutions, such as verification of an identity document or any official document containing the person’s date of birth.

Prohibited AI practices facing privacy enforcement

The Spanish privacy regulator AEPD stated that it can already act against prohibited AI systems that process personal data, regardless of the staged entry into force of the AI Act. A number of the AI Act’s provisions take effect on 2 August 2025, even though the Spanish draft AI law has not yet been approved and the AEPD has not yet been formally designated as a market surveillance authority. However, the agency’s status as the competent authority for personal data protection remains unchanged. Therefore, although this is not a direct application of the AI Act, the regulator may supervise and act against processing of personal data carried out using prohibited systems.

In other news

Insurance agency data leak: The personal data protection agency in Croatia has imposed eight new administrative fines totalling 350,500 euros. In particular, following an anonymous report that the personal data of more than a million vehicle owners had been “leaked” from the state register, the regulator conducted supervisory procedures at several related entities: the Croatian Insurance Bureau, the Croatian Vehicle Center, the Ministry of the Interior of the Republic of Croatia, as well as other legal entities associated with the incident.

It was established that the leaked data submitted to the regulator on a USB stick (vehicle owner data, vehicle data, insurance data and data on reductions (bonuses/minimums)) matched the database of the Croatian Insurance Bureau. As the data controller, the bureau did not take appropriate organisational and technical measures to protect the personal data of the data subjects. Additionally, it did not separately prescribe maximum retention periods for the personal data contained in the register.

Biometric identification fine: The Spanish AEPD fined sports centre operator SIDECU 160,000 euros for offences including unlawful biometric data processing; the amount was eventually lowered to 96,000 euros, according to Data Guidance. Without offering any alternatives, SIDECU used facial recognition technology as the only way to enter its sports facilities, in violation of GDPR Art. 9. In violation of Art. 13, it also failed to properly inform members about the data processing, and it did not conduct a data protection impact assessment as mandated by Art. 35. SIDECU was given ten working days to halt the processing.

Political party fine

The Romanian data protection regulator fined the Alliance for the Unity of Romanians Party, AUR (a right-wing populist political party in Romania and Moldova), approximately 25,000 euros following a data leak. One of the notified security breaches concerned the aur.mobi application used and managed by the party, a vulnerability in which was exploited by a third party who accessed the application’s source code. Due to a configuration error, at the time of the incident the following categories of personal data of its users (supporters/members, ie, individuals who provided personal data in the operator’s application) could be viewed within the application:

  • first and last name, 
  • telephone number, e-mail address, residence address, personal ID number, 
  • date of birth, nationality, citizenship, gender, religion, 
  • profession, occupation, field of activity, experience in other fields, studies (institution, specialisation, start and end dates), 
  • political experience (party, position, start date, end date), 
  • administrative experience (institution, position, start date, end date), 
  • foreign languages spoken (language, level).

The investigation found that personal data were processed by the controller for the purpose of informing data subjects about an AUR campaign and for statistical purposes, and that the data processed were not adequate, relevant and limited to what was necessary in relation to the declared purposes.

DPO’s conflict of interest

In Estonia, a county court overturned a decision of the Data Protection Inspectorate, which had imposed a fine of 85,000 euros on Asper Biogene for violating data protection requirements. In the misdemeanour proceedings, the inspectorate accused Asper of two significant violations. Firstly, the company appointed its sole board member as data protection specialist, although he lacked both the independence and the competence necessary to perform this role. Secondly, Asper Biogene had not implemented sufficient security measures, which allowed unauthorised persons to access the company’s database during a cyber attack in 2023. A large volume of data was downloaded, including special categories of data.

The county court agreed that a member of the board, who manages the company’s activities and decides on the purposes and means of data processing, cannot at the same time independently perform the duties of a data protection specialist. However, the court found that the violation was committed through negligence and took into account the fact that the company had later appointed a competent specialist and implemented additional security measures. The court decided that the fault of the person subject to the proceedings was minor and that there was no public interest in the proceedings. The regulator disagrees with these findings and is preparing an appeal.

In case you missed it 

Swimming pool surveillance: It is the height of summer, and concerns about theft, break-ins, and swimming accidents are increasing. Facilities are therefore increasingly turning to video surveillance and AI. However, not everything that is technically possible is compatible with data protection, explains the North Rhine-Westphalia data protection regulator.

For example, burglaries at swimming pools regularly occur outside business hours, so recording must be limited to those times. To prevent unauthorised access during normal business hours, only the entrance area or access barrier may be recorded. Locker break-ins also occur frequently; in these cases, video surveillance may be permitted in a limited capacity. However, changing areas must never be included. Areas subject to video surveillance should be specially marked, for example by colour-coded flooring.

At the same time, operators are increasingly turning to artificial intelligence to prevent swimming accidents. However, such systems should not replace existing supervisory measures; at best they can complement them, because AI systems still have a significant error rate.

Travelling with data privacy in mind: Online activity on board trains requires a few simple precautions to travel with peace of mind, states the French CNIL. A password written on a piece of paper stuck to your computer, a screen visible to other passengers or an unlocked computer left at your seat are small, seemingly innocuous mistakes that can expose your personal data and your private and professional life, and compromise the security of your devices. Essential safeguards include:

  • Always lock your devices when you’re away.
  • Reduce your screen’s visibility to other passengers and use a privacy filter.
  • Pay attention while using public Wi-Fi.
  • Do not save your credentials or other data in the browser.
  • Protect your passwords with dedicated tools.
  • Stay vigilant against phishing attempts, etc.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.
