EU Health sector
The Commission presented an EU Action Plan to improve cybersecurity in the health sector. The plan covers hospitals, clinics, care homes, rehabilitation centres, other healthcare providers, the pharmaceutical, medical and biotechnology industries, medical device manufacturers, and health research institutions. A significant cybersecurity challenge for the sector is the intersection of information technology (IT) and operational technology (OT), where different security priorities regarding data confidentiality, availability and reliability meet, and where a breach in one area can affect the other. In many cases, IT and OT are at least partly outsourced.
Deficiencies are observed in key areas such as staffing, organisations’ knowledge of their information and communications technology supply chains, and the installation of up-to-date security features in products and services (such as IaaS, PaaS, and SaaS). The sector struggles with basic cyber hygiene and fundamental security measures: nearly all health organisations surveyed face challenges when performing cybersecurity risk assessments, and almost half have never performed a risk analysis at all.
Right of access
The EDPB published a one-stop-shop case digest on the right of access. Natural persons’ right to access personal data relating to them is enshrined in Art. 8 of the EU Charter of Fundamental Rights and is therefore among the most essential data protection rights. Art. 15 of the GDPR applies to access requests submitted after the Regulation became applicable. The right can be divided into three components:
- Confirmation as to whether personal data related to the data subject is processed or not.
- Access to information related to the data subject if it is processed at the time of the data subject’s access request.
- Information about the processing and the data subject’s other data protection rights.
The CJEU has also repeatedly stated that the practical aim of the right of access is, first and foremost, to enable data subjects to verify that the personal data concerning them are accurate and processed lawfully. In particular, the right of access is necessary to enable data subjects to exercise their rights to rectification, erasure, restriction of and objection to processing, as well as their right of action where they suffer damage.
More EDPB updates
Pseudonymisation: The EDPB is also accepting comments on its draft Guidelines on Pseudonymisation until the end of February. The GDPR does not impose a general obligation to use pseudonymisation, and the explicit introduction of pseudonymisation is not intended to preclude any other measures. However, data controllers may need to apply pseudonymisation to meet the requirements of EU data protection law, in particular to adhere to the data minimisation principle, to implement data protection by design and by default, or to ensure a level of security appropriate to the risk. In some specific situations, Union or Member State law may mandate pseudonymisation.
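As a minimal sketch of what pseudonymisation can look like in practice (an illustration, not a technique prescribed by the EDPB guidelines; the key name and record are hypothetical), a direct identifier can be replaced with a keyed hash, with the secret key stored separately under access controls:

```python
import hmac
import hashlib

# Hypothetical sketch: pseudonymise a direct identifier with a keyed hash (HMAC).
# The secret key is the "additional information" that must be kept separately;
# without it, the pseudonym cannot be linked back to the original identifier.
SECRET_KEY = b"store-me-separately-under-access-control"  # assumption: key management handled elsewhere

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for an identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "visits": 4}
record["email"] = pseudonymise(record["email"])  # dataset no longer holds the raw identifier
print(record)
```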
Complex algorithms: Finally, the EDPB has also published a report on AI and effective data protection supervision. It covers techniques and methods that can be used for the effective implementation of data subject rights, specifically the right to rectification and the right to erasure, when AI systems have been developed with personal data. However, there are several challenges (a toy sketch after this list illustrates the first two):
- Limited understanding of how each data point impacts the model;
- Stochasticity of training (random sampling of batches of data from the dataset, random ordering of the batches, and parallelisation without time synchronisation);
- Incremental training process (updates relying on a specific training data point will affect all subsequent updates);
- Stochasticity of learning (it is difficult to correlate how a specific data point contributed to the “learning” in the model).
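The toy example below (our illustration, not from the EDPB report; the data and model are placeholders) shows why these properties make erasure hard: two SGD runs on identical data, differing only in the random order of updates, end with different weights, so a single record's contribution cannot simply be isolated and subtracted after an erasure request.

```python
import numpy as np

# Two stochastic gradient descent runs on the same synthetic regression data,
# differing only in sample order, produce different final weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

def sgd(order_seed: int, lr: float = 0.01, epochs: int = 1) -> np.ndarray:
    w = np.zeros(3)
    order_rng = np.random.default_rng(order_seed)
    for _ in range(epochs):
        for i in order_rng.permutation(len(X)):   # update order is random
            grad = 2 * (X[i] @ w - y[i]) * X[i]
            w -= lr * grad                        # each update depends on all previous ones
    return w

print(sgd(order_seed=1))
print(sgd(order_seed=2))  # same data, different order, different weights
```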
AI prohibitions in the EU
From 2 February, the first key provisions of the AI Act apply to any organisation that offers or operates AI systems: the ban on certain AI practices in both the public and private sectors (mass surveillance, social scoring, behavioural and emotional analysis), and obligations to ensure that employees have sufficient AI skills. Additionally, manipulative AI practices that exploit human vulnerabilities are now prohibited. Particular focus is placed on protecting vulnerable groups such as children and adolescents.
From now on, such violations can not only lead to sanctions under the AI Act but also trigger action from data protection authorities.
More legal updates worldwide
China cross-border transfers: At the beginning of January, the Cyberspace Administration of China released for public consultation draft certification measures to legitimize cross-border transfers of personal data outside of China (CBDTs), DLA Piper reports. Chinese law requires data controllers to take one of three routes: a) a mandatory security assessment; b) a Standard Contractual Clauses filing; or c) certification.
The certification route is available to data controllers inside China, and to those outside the country if they fall under the extraterritorial jurisdiction of the Personal Information Protection Law (e.g., processing the data of residents in China to provide products or services to them, or to analyse or evaluate their behaviour). Regardless of the chosen route, data controllers must implement other compliance measures for CBDTs, including consent requirements, impact assessments, and maintaining records of processing activities.
US child privacy: On 16 January, the FTC finalized changes to the children’s privacy rules (COPPA). By requiring parents to opt into targeted advertising practices, the final rule prohibits platforms and service providers from sharing and monetising children’s data without active permission. It requires certain websites and online services to proactively obtain verifiable parental consent before collecting, using or disclosing personal information from children under 13, gives parents the right to require deletion of this data, and establishes data minimization and data retention requirements. Entities will have one year from the publication date to come into full compliance.
Open Data
The French CNIL alerts data controllers who use databases freely available on the Internet or provided by a third party that they must verify that the databases’ creation, sharing or re-use is lawful. This concerns areas such as scientific research, the development of artificial intelligence systems, commercial prospecting, and data brokerage. To initiate and define a compliance process, data controllers will need to identify a legal basis, inform individuals, minimise data, obtain explicit consent for the processing of sensitive data, maintain up-to-date data processing agreements and other core documentation, and conduct impact assessments.
SDK and app privacy
Software development kits (SDKs) play a central role in how mobile apps work. The French CNIL has made recommendations on how to integrate SDKs and conduct controls to ensure their compliance with the GDPR. The most popular SDKs offer tools for software error management, audience measurement, ad monetization, notification management, and more.
SDK code embedded within an app has the same level of software access as the code written by the app developer: any permission granted to the application is, by default, technically available to every built-in SDK. This access can then escape the developer’s control and infringe on the privacy of the app’s users. It is therefore important that the publisher give the developer clear instructions on the process for selecting and configuring in-app SDKs.
More official guidance
Medical wearables: The Federal Office for Information Security (BSI) in Germany has published the results of its project on the “Security of wearables with partial medical functionalities”. The project deals with the security of wearables marketed in Germany that use sensors to record health and fitness status. These sensors can measure or calculate heart rate, blood oxygen saturation, sleep patterns, and calorie consumption, among other things. Many of these devices use mobile apps to evaluate sensitive data and create statistics. Vulnerabilities in devices that record health and fitness data open up a new form of personal cybercrime: wearables could be used to attack people who wear the relevant sensors, and targeted attacks could also be made on recovery processes, for example when sick people adjust their medication based on sensor data.
Financial apps: In parallel, the BSI published technical guidelines on “Requirements for applications in the financial sector”, aimed at fintech companies such as banks, financial service providers and start-ups in the field of financial technology. The aim is to achieve a uniformly high level of security for existing banking apps and payment services, but also for financial services on smartphones or smartwatches. These may include apps that users can use to pay in the supermarket or manage accounts, as well as crowdfunding platforms or microcredit initiatives. The guide in German can be found here.
Selling drivers’ location and behaviour data
In the US, the FTC is taking action against General Motors over allegations that it collected, used, and sold drivers’ precise geolocation data and driving behavior information from millions of vehicles, data that can be used to set insurance rates, without adequately notifying consumers and obtaining their affirmative consent. When consumers bought a vehicle, they were encouraged to sign up for a feature that, they were often told, would help them assess their driving habits.
The information notice was confusing and misleading. GM failed to clearly disclose the types of information it collected, including geolocation and driving behavior data such as hard braking, late-night driving, and speeding, or that the data would be sold to consumer reporting agencies. These agencies used the sensitive information GM provided to compile credit reports on consumers, which insurance companies then used to deny coverage and set rates. Additionally, through false claims on its websites and in email and social media ads, the company asserted that it deployed reasonable security and that it complied with the previous EU-US and Swiss-US Privacy Shield Frameworks.
More enforcement decisions
Loan promotion: The UK’s ICO meanwhile fined ESL Consultancy Services Ltd £200,000 for knowingly sending unlawful loan promotion nuisance text messages to people who had not consented to receive them. The regulator found that in 2022 and 2023, ESL used a third party to send marketing text messages without ensuring valid consent was in place. ESL also took steps to conceal the identity of the sender of the messages by using unregistered SIM cards. As a result, the ICO received 37,977 complaints.
Failed internal policies: An investigation by the Romanian supervisory authority revealed that the telecoms operator Vodafone Romania repeatedly failed to ensure the confidentiality of several customers’ data as a result of non-compliance with its internal policies. For these failures, the operator had to pay a fine of approximately 15,000 euros. The data security breach was caused by:
- unauthorised transmission of a picture of a data subject’s invoice to a third party;
- not hiding recipients’ email addresses by selecting the “BCC” option when informing data subjects of changes (a minimal example of correct BCC use follows this list);
- an employee of the operator’s authorised representative sending, via WhatsApp, a photo containing a screenshot of data displayed in the app interface.
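For illustration, here is a minimal sketch of the BCC measure Vodafone failed to apply, using Python's standard library (the addresses and SMTP server are placeholders, not details from the case): recipients are passed only in the SMTP envelope, so they never appear in a visible header.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical bulk notification where customers cannot see each other's addresses.
msg = EmailMessage()
msg["From"] = "noreply@example.com"
msg["To"] = "noreply@example.com"   # visible header points back at the sender
msg["Subject"] = "Changes to your subscription"
msg.set_content("Dear customer, our terms are changing ...")

recipients = ["alice@example.com", "bob@example.com"]  # never placed in a header

with smtplib.SMTP("smtp.example.com") as server:
    # Envelope recipients receive the mail; only the headers above are visible.
    server.send_message(msg, from_addr="noreply@example.com", to_addrs=recipients)
```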
Failed erasure request: The Romanian regulator also fined Orange Romania approximately 40,000 euros over a failed data erasure request. After an unsuccessful attempt to subscribe to the operator’s mobile services, a data subject requested the deletion of all their personal data. During the correspondence, the operator requested additional personal data and failed to provide complete and adequate responses to the requests received. Moreover, the operator had excessively collected and stored scanned copies of documents, although they were no longer necessary for the identification purposes related to the conclusion of a subscription contract.
Data security
Hosting services: America’s FTC reminds us that a business website is one of the most important sales and marketing tools. It is not only the virtual storefront but also a repository for data, yours and your customers’. Thus, when you go looking for a web host, the company that will store your site on its servers, security is non-negotiable. The recent FTC settlement with GoDaddy, one of the largest web hosting companies in the world, shows what can happen when security slips: in particular, when the hosting provider neglects to inventory its assets, manage software updates, use multifactor authentication, and appropriately monitor for security threats.
New security measures listed: The Danish data protection regulator published two new measures in its technical catalogue, both dealing with secure data transmission. When two or more parties communicate over external networks, such as the Internet and telecommunications networks, they often do not have the same control and protection as when using their own networks. In such cases, the parties must assess whether the data transmission should be protected with encryption. Encryption of data transmission can also be used to protect against insider threats or physical intrusion into one’s own networks. During transmission, there is also a risk that data becomes known to unauthorized persons. Validation of sender, recipient and content is thus a preventive measure that reduces the likelihood of data being read by unauthorized parties; at the same time, it can ensure non-repudiation and validation of the sender.
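A minimal sketch of these two measures, assuming Python with the third-party `cryptography` package (the payload and key handling are placeholders, not the Danish catalogue's wording): TLS for encryption in transit, and a digital signature for sender and content validation with non-repudiation.

```python
import ssl
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Encryption of the transmission: a default TLS client context verifies the
# server certificate and encrypts everything in transit.
tls_context = ssl.create_default_context()  # certificate and hostname checks on by default

# Validation of sender and content: the sender signs the payload with a private
# key; anyone holding the public key can verify origin and integrity, and the
# sender cannot later deny having sent it (non-repudiation).
sender_key = Ed25519PrivateKey.generate()
payload = b"customer record update"
signature = sender_key.sign(payload)

public_key = sender_key.public_key()  # distributed to recipients out of band
try:
    public_key.verify(signature, payload)  # raises if payload or signature was altered
    print("sender and content validated")
except InvalidSignature:
    print("message rejected: failed validation")
```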
Valio data breach investigation in Finland
The data protection ombudsman is investigating a data security breach targeting the information network of Valio, the country’s largest milk processor. The attacker obtained the personnel data of Valio and its subsidiaries operating in Finland, as well as of milk purchasing cooperatives. Former employees of Valio have also been affected. In addition, the breach targeted data in the databases of the Valio Mutual Insurance Company and the Valio Pension Fund. The breach involved a significantly larger amount of personal data than the data controller initially estimated.
Big Tech
Meta AI: Meta began gradually rolling out a new feature that lets its AI tool remember certain details that you share with it in 1:1 chats on WhatsApp and Messenger. The company is also rolling out a greater level of personalisation for Meta AI on Facebook, Messenger and Instagram, tracking and memorising details about you, including information about your personal life, ethnicity, health and family.
The changes so far only concern users in the US and Canada. The new policy promises that Meta AI will “only remember certain things you tell it in personal conversations (not group chats), and you can delete its memories at any time”.
DeepSeek data whereabouts: Italy’s data protection regulator Garante is requesting answers from (and has temporarily blocked) DeepSeek, the Chinese AI model touted as a low-cost, open-source alternative to its US rivals, over its use of personal data: what information has been collected, from which sources, for what purposes, on what legal basis, and whether it is stored in China. Other reports claim that DeepSeek spreads misinformation and bans political prompts, and ask how the Chinese state might exploit users’ data.
OpenAI meanwhile warns that Chinese startups are ‘constantly’ using its technology to develop competing products. The company is reviewing allegations that DeepSeek used the ChatGPT maker’s AI models to create a rival chatbot through a technique known as “distillation”, in which a smaller model is trained to reproduce the outputs of a larger, more advanced one to achieve similar results, as summed up in this Guardian article.
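As a rough illustration of what distillation means in general (a toy sketch assuming PyTorch; the models, data and hyperparameters are placeholders, not DeepSeek’s actual pipeline), a small “student” model is trained to match the softened output distribution of a larger “teacher”:

```python
import torch
import torch.nn.functional as F

# Toy knowledge distillation: the student learns from the teacher's outputs,
# not from labelled ground truth.
teacher = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
student = torch.nn.Sequential(torch.nn.Linear(16, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution to expose more signal

for _ in range(100):
    x = torch.randn(32, 16)                      # stand-in for real training inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```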