Legitimate Interests

Data protection digest 4-18 Jan 2026: Legitimate Interests Assessment, AWS Europe Sovereign Cloud, Google settlement over child data

Legitimate Interests Assessment (LIA)

The Hamburg Data Protection Commissioner has published a comprehensive questionnaire for assessing the legitimate interests legal basis for processing. It helps controllers examine and document precisely what their interest in the data processing is and whether the rights and interests of data subjects are adequately considered. It guides users step by step through the most important checkpoints:

  • Determination: What objectives are pursued with the data processing, and are these legally permissible?
  • Necessity: Is the processing necessary, and is only the required personal data collected?
  • Balancing: Are the rights and interests of the individuals concerned sufficiently considered and protected?
  • Documentation and compliance: Are the audit procedures recorded and regularly updated?
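The four checkpoints above lend themselves to a structured, regularly reviewed record. As a rough sketch (the field names below are illustrative assumptions, not taken from the Hamburg questionnaire itself), an LIA entry might be modelled like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an LIA documentation record covering the four
# checkpoints: determination, necessity, balancing, and documentation.
@dataclass
class LIARecord:
    purpose: str                  # Determination: objective pursued
    lawful: bool                  # Determination: legally permissible?
    necessity_justification: str  # Necessity: why processing is needed
    data_minimised: bool          # Necessity: only required data collected?
    balancing_notes: str          # Balancing: data subjects' rights considered
    safeguards: list = field(default_factory=list)
    last_reviewed: str = ""       # Documentation: date of last review

    def is_complete(self) -> bool:
        """A record is reviewable only if every checkpoint is documented."""
        return bool(self.purpose and self.necessity_justification
                    and self.balancing_notes and self.last_reviewed)

lia = LIARecord(
    purpose="Fraud prevention on customer accounts",
    lawful=True,
    necessity_justification="No less intrusive means identified",
    data_minimised=True,
    balancing_notes="Limited privacy impact; objection mechanism offered",
    safeguards=["pseudonymisation", "strict retention limits"],
    last_reviewed="2026-01-10",
)
print(lia.is_complete())  # → True
```

Keeping the assessment in a structured form like this makes the "regularly updated" part of the documentation checkpoint straightforward to audit.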

You can download the LIA questionnaire in German or in English.

Stay up to date! Sign up to receive our fortnightly digest via email.

EDPB updates

The European Data Protection Board welcomes comments on its recommendations on the elements and principles to be found in Processor Binding Corporate Rules (BCR-P); comments should be sent by 2 March. BCRs are a tool for providing appropriate safeguards for transfers of personal data to third countries that do not ensure an adequate level of protection pursuant to the GDPR, made by a group of undertakings engaged in a joint economic activity. The recommendations clarify when BCR-P can be used: only for intra-group transfers between processors, where the controller is not part of the group. Read more about the scope of BCR-P and its interplay with data processing agreements here.

Other developments


AWS Europe Sovereign Cloud: The German Federal Office for Information Security (BSI) has announced its support for US cloud provider Amazon Web Services in designing the security and sovereignty features of its new European Sovereign Cloud (ESC): an independent cloud infrastructure located entirely within the EU, whose operation will be technically and organisationally independent of the global AWS instance.

Later this year, the BSI will publish general sovereignty criteria for cloud computing solutions based on the new framework. These criteria will serve as a basis for assessing the degree of autonomy of cloud solutions and can also be used in procurement processes.

HIPAA Security Rule: In the US, for HIPAA-covered entities and business associates, the HIPAA Security Rule requires ensuring the confidentiality, integrity, and availability of all electronic protected health information (ePHI) that the regulated entity creates, receives, maintains, or transmits. To that end, the US Department of Health and Human Services has published the latest recommendations on System Hardening and Protecting ePHI. The measures include: 

  • patching known vulnerabilities
  • removing or disabling unneeded software and services
  • enabling and configuring security measures that overlap with several of the technical safeguard standards and implementation specifications of the HIPAA Security Rule, such as access controls, encryption, audit controls, and authentication.
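The "remove or disable unneeded software and services" step above is essentially a comparison of what is actually running against an approved baseline. A minimal sketch, assuming a hypothetical per-host baseline and an already-collected service inventory (the service names below are illustrative, not from the HHS guidance):

```python
# Hypothetical hardening check: flag enabled services that are not in the
# approved baseline so they can be reviewed and removed or disabled.
APPROVED_BASELINE = {"sshd", "auditd", "chronyd"}

def audit_services(enabled_services: set) -> set:
    """Return services that are enabled but not in the approved baseline."""
    return enabled_services - APPROVED_BASELINE

# e.g. the output of a service inventory on one host
enabled = {"sshd", "auditd", "chronyd", "telnet", "ftp"}
print(sorted(audit_services(enabled)))  # → ['ftp', 'telnet']
```

In practice the baseline would be maintained per system role and the inventory gathered by configuration-management tooling, but the review loop is the same: anything outside the baseline needs a justification or gets disabled.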

GDPR certifications and codes of conduct

France’s CNIL has mapped the deployment of GDPR compliance tools across Europe. Two maps list the certifications and codes of conduct approved by national supervisory authorities or by the European Data Protection Board since the GDPR entered into force. These instruments may operate at either the national or the European level. Certification (Art. 42 GDPR) makes it possible to demonstrate that a product, service, or data processing activity meets data protection criteria set out in an approved certification scheme (referential). A code of conduct (Art. 40 GDPR) translates the Regulation’s obligations into concrete, sector-specific rules and becomes binding on its members.

UK international transfers

The UK Information Commissioner has published updated guidance on international transfers of personal data, making it quicker for businesses to understand and comply with the transfer rules under the UK GDPR. It sets out a clear ‘three-step test’ organisations can use to identify whether they are making restricted transfers. New content also clarifies areas where organisations have questions, such as roles and responsibilities, reflecting the complexity of multi-layered transfer scenarios.

Multi-device consent

The French regulator also published its recommendations (in French) on collecting cross-device consent. When a user accesses a website or mobile app while logged into their account, they express their choices about the use of cookies and other trackers on one device; those choices are then automatically applied to all devices connected to that account, including their phone, tablet, computer or connected TV, and whichever browser or app they are using. Users must therefore be clearly informed about this account-based mechanism.
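Mechanically, cross-device consent means the choice is recorded once against the account and looked up by every logged-in device, rather than stored per device. A minimal sketch of that idea (the store and function names are illustrative assumptions, not prescribed by the CNIL):

```python
# Hypothetical account-level consent store: one record per account,
# consulted by every device the user logs in from.
consent_store = {}

def record_consent(account_id: str, choices: dict) -> None:
    """Store the user's tracker choices against their account."""
    consent_store[account_id] = dict(choices)

def consent_for_device(account_id: str, purpose: str) -> bool:
    """Any device logged into the account sees the same stored choice;
    default to refusal when no choice has been recorded."""
    return consent_store.get(account_id, {}).get(purpose, False)

# Choice made once on the phone...
record_consent("user-42", {"analytics": True, "ads": False})
# ...applies identically on the tablet, computer or connected TV.
print(consent_for_device("user-42", "analytics"))  # → True
print(consent_for_device("user-42", "ads"))        # → False
```

Defaulting to refusal for unrecorded purposes mirrors the general rule that trackers may not be set before a choice has been expressed.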

More from supervisory authorities

Remote job interviews: According to the Latvian regulator DVI, an employer may collect the content of a remote job interview using AI tools if an appropriate legal basis can be applied. Such data processing may be carried out based on the candidate’s consent or the legitimate interests of the company. Consent must be freely given, specific, unambiguous and informed. If the processing is carried out based on legitimate interests, a balancing test of the interests of both parties must be carried out before such processing is initiated.

Regardless of the chosen legal basis, the data controller is obliged to inform the candidate before the interview about the planned data processing during the interview, including the use of AI tools, the purposes of processing, the data retention period and the candidate’s rights. The candidate has the right to object, and such objections must be taken into account; in the event of potential harm, the processing must be stopped.

Cybersecurity guide: The Australian Cyber Security Centre published guidance with a checklist on managing the cybersecurity risks of artificial intelligence for small businesses adopting cloud-based AI technologies. Reportedly, more small businesses are using AI through applications, websites and enterprise systems hosted in the public cloud, such as OpenAI’s ChatGPT, Google Gemini, Anthropic’s Claude, and Microsoft Copilot. Before adopting AI tools, small businesses should understand the related risks and ways to mitigate them, including:

  • data leaks and privacy breaches
  • reliability and manipulation of AI outputs
  • supply chain vulnerabilities.

Data subject rights in the event of a bankruptcy

The Norwegian data protection authority has imposed a fine on Timegrip AS. The case concerns a retail chain that went bankrupt, whose employees needed to document the hours they had worked. Timegrip had been the retail chain’s data processor until the bankruptcy and stored this data, but would not provide it to either the bankruptcy estate or the employees themselves.

Timegrip argued that the company did not have the right to provide the complainant with a copy because a data processor can only process personal data on the basis of an instruction from the controller. Since the controller retail chain had gone bankrupt, Timegrip claimed that no one could give them such an instruction. At the same time, Timegrip refused access requests from 80 different individuals, despite the company being aware that they were in a vulnerable situation and dependent on the timesheets to document their salary claims. 

In addition, it was Timegrip that made decisions about essential aspects of the processing, such as what the data could be used for, the storage period and who could have access to the personal data. In other words, it was clear that it was Timegrip that exercised the real control over the personal data.


Google multimillion-dollar settlement over child data

In the US, a federal judge granted final approval for a 30-million-dollar class action settlement against Google, after six years of litigation with parents who claimed the tech giant violated children’s privacy by collecting data while they watched YouTube videos. Although Google doesn’t charge for access to YouTube, the company does use it as a revenue source. It collaborates with advertisers and the owners of popular YouTube channels to advertise on specific videos, with Google and the channel owners splitting the payments received from advertisers.

In other news 

Free Mobile fine: The French CNIL issued two sanctions against the companies FREE MOBILE and FREE, imposing fines of 27 and 15 million euros respectively, over the inadequacy of the measures taken to ensure the security of their subscribers’ data. In October 2024, an attacker infiltrated the companies’ information systems and accessed personal data from 24 million subscriber contracts, including IBANs where individuals were customers of both companies.

The investigation showed that the authentication procedure for connecting to both companies’ VPN, used in particular by employees working remotely, was not sufficiently robust. In addition, the measures the companies had deployed to detect abnormal behaviour on their information systems were ineffective.

Major school data breach: In Australia, a cyberattack compromised the personal information of students from all Victorian government schools. An unauthorised external third party accessed a database containing information about current and past school student accounts, including student names, school-issued email addresses, and encrypted passwords. In the opinion of an Australian legal expert from Moores, who analysed the breach, certain factors tend to correlate with such incidents. These include:

  • adoption of new CRMs and platforms (including leaving administrator access open and incorrect privacy settings that make online forms publicly searchable)
  • keeping old information that is no longer required
  • a spike in emails sent to incorrect recipients on Fridays and in the lead-up to school holidays
  • spreadsheets sent via email (instead of SharePoint, for example).

Business email compromise

Business Email Compromise (BEC) is currently one of the fastest-growing forms of digital fraud, according to the Dutch National Cybersecurity Centre. In BEC, criminals pose as trusted individuals within an organisation, often a director or manager, but also a colleague, supplier, or customer.

The criminals’ goals can vary, such as changing account numbers, obtaining login credentials, stealing sensitive information, or using compromised accounts for new phishing campaigns. The power of BEC lies not in its technical complexity but in exploiting the principles of social influence. BEC fraudsters cleverly utilise subtle social pressure, for example, by capitalising on scarcity by creating a sense of urgency, exploiting reciprocity by first building trust or asking for small favours, or relying on an authority figure. 

And finally 

AI prompting guide: IAB Europe has published its AI Prompting Guide. It provides practical, reusable techniques you can apply immediately, including managing risks such as hallucinations, sensitive data exposure, bias, and prompt injection. Some of these risks can be addressed through careful prompting, review, and user judgement, while others require more structural safeguards such as validation, monitoring, and clear boundaries around how models are used.

For instance, sensitive data exposure occurs when confidential, personal, or proprietary information is inappropriately included in prompts or generated in outputs. This can involve personal data, commercial secrets, or information subject to legal or contractual restrictions. Mitigation strategies include:

  • removing or anonymising sensitive information before including it in prompts 
  • limiting the amount of context shared to what is strictly necessary for the task 
  • following organisational guidance on approved tools and data handling, and 
  • applying access controls where models are integrated into workflows. 

For sensitive use cases, ensure outputs are reviewed before being stored, shared, or acted upon.
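The first mitigation in the list, removing or anonymising sensitive information before it reaches a prompt, can be sketched with simple pattern-based redaction. The patterns below (email addresses and phone-like numbers) are deliberately crude illustrations; real deployments need broader detection, and the function name is a hypothetical, not from the IAB guide:

```python
import re

# Hypothetical pre-prompt redaction: replace obvious personal identifiers
# with placeholders before the text is included in a prompt.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Substitute matched identifiers with neutral placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

context = "Contact jane.doe@example.com or +44 20 7946 0958 about the claim."
print(redact(context))
# → Contact [EMAIL] or [PHONE] about the claim.
```

Redaction of this kind only covers the "remove or anonymise" step; limiting the context shared and applying access controls remain separate safeguards.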

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
