Data protection digest 20 Jul – 2 Aug 2024: ‘legitimate interest’ criteria, surveillance pricing, Olympics and AI

This edition includes: the CJEU expands on the 'legitimate interest' criteria; a summary of the most common mistakes made by data controllers; AI tools enter Olympic venues in Paris; and the US FTC voices concern that user monitoring now enables AI-facilitated individualised pricing.

Stay up to date! Sign up to receive our fortnightly digest via email.

Legitimate interest criteria

A CJEU advocate general has clarified the data controller's obligations when relying on the legitimate interest legal ground. A mere reference to a 'legitimate interest', without any indication of precisely what that interest is, cannot satisfy the GDPR requirements. Such a legitimate interest could exist, for example, where there is a relevant relationship between the data subject and the controller (eg, where the data subject is a client of the controller).

The legitimate interest criteria need careful assessment, including whether a data subject can reasonably expect, at the time and in the context of the collection of the personal data, that processing for that purpose may take place. Fraud prevention and even direct marketing can also constitute legitimate interests. However, it is for the controller to demonstrate that its legitimate interest is not overridden by the interests or the fundamental rights and freedoms of the data subject.

AI Act entered into force on 1 August

EU data protection regulators have begun to examine the supervisory powers the new law vests in them. A large share of high-risk AI systems falls within their scope: not just the organisations that use these systems, but the whole value chain, including the software, cloud and security firms that provide AI systems, whether by selling them or by integrating them into existing products. The regulatory sandboxes ('real-world laboratories') that the AI Act establishes to foster innovation present the data protection authorities with yet another challenge. AI developers and users now have until February 2025 to inventory the AI systems they use or sell, along with the risk category each falls into. Organisations that create or use prohibited AI must prepare for substantial fines from August 2025.

Weak children's privacy

The UK Information Commissioner's Office has launched a major review of social media platforms (SMPs) and video-sharing platforms (VSPs) as part of its Children's Code Strategy. It reviewed 34 SMPs and VSPs, including BeReal, Twitch, Threads, WeChat, YouTube Kids and X (Twitter), focusing on the processes young people go through to sign up for accounts, with emphasis on information transparency, age assurance, default privacy settings, geolocation and exposure to algorithmic systems. The full list of audited platforms and the non-compliance issues identified can be seen here.

More legal processes

Surveillance pricing: The US Federal Trade Commission (FTC) has launched an investigation into reports that a growing number of grocery stores and retailers may be using algorithms to set individualised prices. Advances in machine learning make it cheaper for these systems to collect and process large volumes of personal data, which can open the door to price changes based on a shopper's precise location, shopping habits or web browsing history.
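
To make the mechanism concrete, here is a deliberately toy sketch of individualised pricing of the kind the FTC is probing. Every signal, weight and rule below is invented for illustration; nothing is implied about any real retailer's model.

```python
# Toy illustration of "surveillance pricing": the quoted price is
# adjusted per shopper from observed signals. All signals and weights
# are hypothetical.
BASE_PRICE = 4.99

def personalised_quote(shopper: dict) -> float:
    price = BASE_PRICE
    if shopper.get("near_rival_store"):       # precise location
        price *= 0.95                         # undercut a nearby competitor
    if shopper.get("repeat_buyer"):           # shopping habits
        price *= 1.08                         # loyal, less price-sensitive
    if shopper.get("browsed_premium_items"):  # web browsing history
        price *= 1.05                         # inferred willingness to pay
    return round(price, 2)

print(personalised_quote({"repeat_buyer": True}))      # 5.39
print(personalised_quote({"near_rival_store": True}))  # 4.74
```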

Hashing and anonymisation: The FTC has also reiterated its long-held view that hashing or pseudonymising identifiers does not render data anonymous: hashes can still be used to identify or target users, and their misuse can lead to harm. While hashing might obscure how a user identifier appears, it still creates a unique signature (eg, a unique advertising ID) that can track a person or device over time and across apps without informed individual consent.
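
A minimal sketch of the FTC's point, using an invented email address: hashing an identifier yields a deterministic token, so any two parties that hash the same input independently obtain the same value and can join their records on it. That is pseudonymisation, not anonymisation.

```python
import hashlib

def hashed_id(email: str) -> str:
    """Derive a 'pseudonymous' token by hashing a normalised identifier."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Two unrelated apps hash the same (invented) email independently...
token_app_a = hashed_id("jane.doe@example.com")
token_app_b = hashed_id("Jane.Doe@example.com ")

# ...and obtain the identical token, so their user records can be joined.
assert token_app_a == token_app_b
print(token_app_a[:16])  # a stable unique signature, trackable across apps
```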

NIS2: A Hogan Lovells analysis looks at the pace of national implementation of the NIS2 Directive as the 17 October deadline approaches. So far, not all EU Member States appear to be on track to implement a common level of cybersecurity; Germany only adopted its draft legislation (the so-called "IT Security Act 3.0") on 24 July. The legislation chiefly requires critical sectors to implement security risk management systems following the highest standards (eg, ISO 27001), along with incident reporting, corporate monitoring, and training and auditing obligations. For more on enforcement, the personal liability of directors and geographical scope, read the original publication.

Addictive patterns

The Spanish privacy regulator warns against the use of addictive patterns in its latest study. Online services often implement deceptive and addictive design patterns to prolong the time users spend on their services, to increase engagement and the amount of personal data collected, and to perform profiling. The adverse impact of addictive strategies is considerably greater when they are used to process the personal data of vulnerable people, such as children.

However, the Digital Services Act, now in force, establishes that online services must not design, organise or manage their interfaces in a way that deceives or manipulates users, or that distorts or hinders their ability to make free and informed decisions. So far, the European Commission has opened two sanctioning procedures for possible non-compliance with these requirements, against TikTok and Meta.

More official guidance

Errors in data processing: The Latvian data protection authority explains the most common mistakes made by data controllers and how to avoid them. These include: choosing no legal basis, or one inadequate for the purpose of the processing; failing to properly inform data subjects; leaving privacy by default out of information system management; ignoring technical and organisational security measures; failing to handle and record incidents; mishandling data subject requests; lacking core documentation and impact assessments; and conducting poor due diligence on data processors.

Generative AI: The European AI Office has opened a call for expressions of interest to participate in drawing up the first general-purpose AI Code of Practice. The Code of Practice will detail the AI Act's rules for providers of general-purpose AI models, including those posing systemic risks. These rules will apply 12 months after the entry into force of the AI Act, by August 2025, and the Code is to be prepared in an iterative drafting process by April 2025.

According to the latest guidance from the US NIST, one of the primary risks in generative AI is that such systems may leak or generate sensitive information about individuals that was included in the training data. The integration of non-transparent or third-party components and data may also diminish accountability and introduce potential errors across the AI value chain. Finally, GenAI training raises risks to widely accepted privacy principles, including transparency, individual participation (consent) and purpose specification.

Facial recognition at school

In the UK, an Essex school was reprimanded after using facial recognition technology (FRT) for canteen payments. The school, which has around 1,200 pupils aged 11 to 18, failed to carry out a prior assessment of the risks to the children. It had not properly obtained clear permission to process the students' biometric information, and the students were not given the opportunity to decide whether or not they wanted it used in this way.

It also failed to seek the opinion of its data protection officer or to consult parents and students before implementing the technology. Instead, a letter was sent to parents with a slip to return if they did not want their child to participate in the FRT scheme. Affirmative 'opt-in' consent was not sought, meaning the school was wrongly relying on assumed consent.

Emergency calls disabled

In light of the recent global IT outage, BBC coverage revisits a major incident in Britain from a year ago. BT (formerly British Telecom) has just been fined 17.5 million pounds for a failure of its emergency call handling service that left thousands of 999 calls unconnected. The network failure lasted for more than 10 hours. The outage was caused by an error in a file on a BT server, which meant systems restarted as soon as call handlers received a call.

This left staff logged out and calls disconnected or dropped as they were transferred to the emergency services. The company was not prepared to respond to the problem: instructions on how to resolve such an issue were "poorly documented" and staff were unfamiliar with the process.

More enforcement decisions 

French Guiana fine: The French CNIL has imposed a penalty on the municipality of Kourou, in the overseas department of French Guiana (home to the main spaceport of France and the European Space Agency). The municipality must pay 6,900 euros for still not having complied with its obligation to appoint a data protection officer, despite a CNIL injunction of December 2023. This penalty payment does not close the procedure: the injunction remains in force for as long as the municipality has not appointed a data protection officer, so a further penalty payment may be ordered.

Human error at an education ministry: The education minister in Northern Ireland has apologised after the personal details of more than 400 people who had offered to contribute to a review of special educational needs were breached, the Guardian reports. According to the education department, 407 people registered interest in attending end-to-end review of special educational needs (SEN) events around Northern Ireland, and a spreadsheet attachment containing their names, email addresses and titles was accidentally emailed to 174 people. Several people's remarks were also included in the spreadsheet. The 174 recipients who unintentionally received the personal information were asked to delete it and to confirm that they had done so.

Olympics, performance, privacy and AI

The International Olympic Committee has identified more than 180 potential use cases for AI at the Olympics, some of which are already in use at the Paris venues, according to a fortune.com article. The primary purposes include "enhancing the fairness and accuracy of judging and refereeing through the provision of precise metrics". In another case, Google was announced as "the official search AI partner of Team USA".

Finally, event organisers and the French government are also leaning on AI to monitor potential threats (the French government temporarily changed the law to allow this use of experimental surveillance technology for the Olympics).

Data security

Data breaches and exploitation of APIs: In the US, the Federal Communications Commission settled with TracFone Wireless, a telecommunications carrier, to resolve data security investigations. The underlying data breaches involved the exploitation of application programming interfaces (APIs), which allow different computer programs or components to communicate with one another. Many APIs can be leveraged to access customer information from websites, making them a common attack vector for threat actors. The settlement includes a mandated information security program consistent with standards identified by NIST and OWASP; subscriber identity module (SIM) change and port-out protections; annual security assessments by independent third parties; and privacy and security awareness training for employees and certain third parties.
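
The flaw class behind many such breaches is easy to sketch. The endpoint, route and records below are hypothetical and imply nothing about the carrier's actual systems; the bug shown is broken object-level authorisation, the top entry in the OWASP API Security Top 10: the handler returns customer data for whatever account number the caller supplies, with no check of who is asking.

```python
# Hypothetical vulnerable endpoint (illustrative only), built with Flask.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

CUSTOMERS = {  # invented records standing in for a carrier database
    "1001": {"name": "A. Example", "plan": "prepaid", "sim_iccid": "8901..."},
}

@app.route("/api/customer")
def customer_lookup():
    account = request.args.get("account", "")
    record = CUSTOMERS.get(account)
    if record is None:
        abort(404)
    # Flaw: no authentication and no ownership check, so anyone able to
    # guess or enumerate account numbers can pull customer records.
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```

A remediation in the spirit of the mandated NIST- and OWASP-aligned program would authenticate the caller, verify that the requested account belongs to them before returning anything, and rate-limit requests to blunt enumeration.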

Big Data

Third-party cookies: Google has officially changed its plans and no longer intends to deprecate third-party cookies in the Chrome browser, as the transition requires "significant work by many participants and will have an impact on everyone involved in online advertising". Implementation of the Privacy Sandbox project started in 2019. The tech giant is now proposing an updated approach that elevates user choice; it is reportedly discussing this new path with regulators and will engage with the industry soon.
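
For readers unfamiliar with the mechanism at stake: a third-party cookie is one set by a domain other than the site the user is visiting. The sketch below, with an invented ad domain, shows the response header such a server sends so that its cookie works in cross-site contexts; SameSite=None and Secure are the attributes Chrome requires for cookies used that way.

```python
# Sketch: the Set-Cookie header a third-party ad server (invented
# domain) returns when a page on another site embeds its content.
# SameSite=None permits the cookie on cross-site requests, letting the
# same identifier follow a user across every site that embeds
# ads.example; this is the behaviour Chrome had planned to phase out.
THIRD_PARTY_SET_COOKIE = (
    "Set-Cookie: uid=3f8a1c; Domain=ads.example; Path=/; "
    "Max-Age=31536000; Secure; SameSite=None"
)
print(THIRD_PARTY_SET_COOKIE)
```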

Meta record settlement: Meta has reached a 1.4 billion-dollar settlement to resolve claims brought by the Texas Attorney General, aimed at stopping the company's practice of capturing and using the personal biometric data of millions of Texans without authorisation. The settlement is the largest ever obtained from an action brought by a single US state. In 2011, Meta rolled out a new feature that it claimed would improve the user experience by making it easier for users to "tag" photographs with the names of the people in them.

For more than a decade, Meta ran facial recognition software on virtually every face contained in photographs uploaded to Facebook.

Data centres' electricity hunger: According to official estimates cited by The Guardian, Ireland's data centres consumed more power last year than all of the country's urban households put together. Google, which has its European headquarters in Ireland, stated that its data centres might delay its environmental goals following a 48% surge in the company's total emissions last year. This is the outcome of growing demand for cloud services and data processing, including advances in artificial intelligence.


Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
