Electronic patient records (ePA) in Germany
From 2025, people covered by health insurance will be able to use the electronic patient record (ePA) voluntarily and free of charge. The record gathers information about a person’s medical history digitally in a single place, and patients decide how long someone is granted access to it. The information includes test results and diagnoses, as well as medical treatment reports and information about recommended treatments.
Reportedly, the ePA will be subject to test criteria developed by the German Federal Office for Information Security (BSI). Encrypted data processing will take place in a technically secure and trustworthy environment, and no other authority should get access to it. Additionally, if a patient changes health insurer, the ePA data will be transferred automatically and securely, including all existing objections and substitutions. Patients can also add their own information, such as a pain diary or old results that they only have in paper format.
More legal updates
Data scraping on Facebook: In Germany, the Federal Court of Justice ruled on a case from 2021 in which data from around 533 million Facebook users in 106 countries was publicly distributed on the Internet. The platform had not taken sufficient security measures and, depending on the user’s searchability settings, allowed profiles to be found via telephone number.
Unknown third parties entered randomized sequences of numbers on a large scale via the contact import function and harvested the publicly available data. The court held that the plaintiff’s claim for compensation for non-material damage could not be denied. According to the privacy advocacy group NOYB, this decision aligns with the clear provisions of the GDPR (Art. 82 – Liability and right to compensation) and several CJEU rulings; German courts had previously and regularly refused damages in data protection cases.
NIS2 guidance: ENISA has made available draft implementing guidance on the cybersecurity risk-management measures required to comply with the NIS2 Directive. It can be useful not only for regulated service providers but also for other public or private actors seeking to maintain compliance and streamline audits. A mapping table correlates each requirement with European and international standards or frameworks (ISO/IEC 27001:2022, ISO/IEC 27002:2024, NIST Cybersecurity Framework 2.0, ETSI EN 319 401 V2.2.1 (2018-04), CEN/TS 18026:2024) and with national frameworks.
In parallel, the Cyber Resilience Act was published in the Official Journal of the EU, setting uniform cybersecurity standards for the development, production and distribution of hardware and software products and remote data processing solutions placed on the EU market. It also overlaps with other pieces of EU legislation, including the NIS2 Directive, the AI Act and DORA, according to a DLA Piper analysis. The Act provides for a transition period of three years, ending in December 2027.
Short-term vehicle rental
The data protection authorities of the Baltic States conducted a joint preventive inspection to assess compliance in the short-term vehicle rental industry. The main problem was a lack of transparency: companies were unable to provide data subjects with clear and understandable information. Some companies chose an inappropriate legal basis or were unable to sufficiently justify its adequacy.
In some cases, the same legal basis was used for all data processing activities; in others, customer data was not deleted according to the established criteria. Finally, some companies processed facial images for customer identification on the basis of the data subjects’ consent, without offering an alternative option.
More official guidance
Data protection by design: Once again, the Latvian data protection agency DVI has issued a reminder that organisations processing personal data must comply with the principles of data protection by design and by default. This means designing technologies so that a user’s data is processed only to the minimum extent and only for as long as necessary, without requiring the user to take special steps to protect their privacy.
In a broader sense, such measures include any method or means an organisation may apply when processing data: pseudonymisation, user-friendly interfaces that let users control how their data is processed, malware detection systems, employee training in the basics of cyber hygiene, privacy and information security management systems, and contractual obligations for processors.
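As a toy illustration of one measure on that list, pseudonymisation can replace a direct identifier with a keyed token before a record is stored, with the key held separately from the data. This is a hypothetical sketch, not code from DVI or any regulator; the key handling and field names are assumptions for the example.

```python
import hmac
import hashlib

# Placeholder key: in practice this would live in a secrets vault,
# separate from the pseudonymised records themselves.
SECRET_KEY = b"store-this-key-separately"

def pseudonymise(identifier: str) -> str:
    """Derive a stable token from a direct identifier using a keyed HMAC,
    so the identifier itself never needs to appear in the stored record."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "anna@example.com", "visits": 3}
# The stored record keeps only the token; the same email always maps to
# the same token, so records can still be linked for legitimate purposes.
stored = {"user_token": pseudonymise(record["email"]), "visits": record["visits"]}
```

Note that pseudonymised data remains personal data under the GDPR, since whoever holds the key can re-link the tokens to individuals.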
Data access response: When a data subject access request is made, an organisation must take reasonable steps to comply. This includes identifying all relevant filing systems and databases, and using search parameters reasonably likely to find information relating to the person. Organisations must be able to demonstrate why they consider the search parameters reasonable, and to explain why any filing systems or electronic databases were not searched. Otherwise, data subjects will be unable to understand the full extent of the data being used, states the Guernsey data protection authority, based on a recent enforcement case.
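The accountability point above amounts to keeping a record of which systems were searched, with which parameters, and why any system was skipped. A minimal sketch of such a search log, under assumed names (`run_dsar_search`, the example systems) that are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class SearchLogEntry:
    """One line of the audit trail: a system either searched with recorded
    parameters, or skipped with a documented reason."""
    system: str
    searched: bool
    parameters: list = field(default_factory=list)
    reason_not_searched: str = ""

def run_dsar_search(subject: dict, systems: dict) -> list:
    # Parameters considered reasonably likely to locate the subject's data.
    params = [subject["name"], subject["email"]]
    log = []
    for name, meta in systems.items():
        if not meta["holds_personal_data"]:
            # Document why this system was not searched.
            log.append(SearchLogEntry(name, False,
                                      reason_not_searched="system holds no personal data"))
        else:
            log.append(SearchLogEntry(name, True, parameters=params))
    return log

log = run_dsar_search(
    {"name": "A. Smith", "email": "a.smith@example.com"},
    {"crm": {"holds_personal_data": True},
     "build_server": {"holds_personal_data": False}},
)
```

The log itself is what lets the organisation later demonstrate that its search, and any omissions, were reasonable.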
MS Copilot
The Norwegian regulator examined which assessments the Norwegian University of Science and Technology should make before putting Microsoft’s AI assistant into use. M365 Copilot sits on top of Microsoft’s M365 cloud solution, so it is a prerequisite that the organisation carries out all necessary security and privacy assessments of the M365 platform itself. Responsibility for the data used in Copilot rests with the businesses that use the tool.
In the next step, the purposes, tasks and legal bases associated with the personal data processing must be identified. There is also a requirement to carry out an impact assessment when using generative AI that processes personal data and logs all interactions. It is therefore important to assess whether other AI solutions (eg, locally installed ones) with a lower privacy risk could meet the specific needs. Finally, structured monitoring must be put in place to follow up on the quality of what the solution produces over time.
Identity card as a loyalty card
The Belgian DPA has imposed a series of corrective measures on Freedelity, a company specialising in collecting and pooling consumer identity and contact data in partnership with various retailers. Freedelity keeps the electronic identity card number, the municipality of issue and the card’s validity date, although this data is of no relevance to Freedelity or to the customer’s relationship with the brands. The data is mainly collected through terminals that Freedelity makes available to retailers, who store, share and use customers’ data for marketing and customer relationship management purposes.
One of the brands requires customers to accept Freedelity’s terms and conditions in order to benefit from commercial advantages. Another considers that a customer inserting their identity card into a Freedelity terminal amounts to default consent to the processing of their data for three distinct purposes. Some brands do not mention, for example, the processing of “data sharing” when asking the consumer for consent. Additionally, the mechanisms put in place by Freedelity and its partners to withdraw consent are not sufficiently accessible or intuitive.
More enforcement decisions
AI-powered cameras: Cameras equipped with AI offer new methods of analysis to assist professional drivers, notes the French regulator. In most cases, the employer’s legitimate interest is likely to focus on ensuring the safety of goods and people. The measures implemented should not lead to continuous monitoring of employees during their working hours: only the data necessary to generate a real-time alert may be processed, and neither the images nor the technical data (timestamp, geolocation, alert type) generated as part of the alert should be retained.
X’s Grok: The Norwegian authority is looking at X’s training of its AI models, including the generative chatbot Grok, on users’ posts. Last summer it became clear that X had trained its AI models on users’ posts without informing them; the function was pre-ticked in the user settings. X paused the processing of EU/EEA users’ posts for AI-training purposes after 1 August, but has now resumed it. According to X, it uses the separate company xAI as a service provider to process X posts, as well as Grok interactions, inputs and results, to train and fine-tune its AI.
Platform workers: The Italian Garante has ordered Foodinho, a company of the Glovo group, to pay 5 million euros for having unlawfully processed the personal data of over 35,000 delivery riders through its digital platform. The authority has prohibited any further processing of riders’ biometric data (facial recognition) used for identity verification.
Also, through direct access to the systems, the company carries out various automated processing of riders’ data, for example through the so-called excellence system (a score that allows priority booking of a work shift) and the order assignment system within the shift, or to deactivate or block an account.
Meta will give users more options
Users of Facebook and Instagram will in future be able to use the services for free while receiving ads based on less personal data than before (such as age, location and gender). Prices for monthly subscriptions will also be reduced. In this low-data environment, Meta plans to introduce ad breaks that allow advertisers to connect with a wider audience, meaning some ads will be unskippable for a few seconds, a practice already offered by many of Meta’s competitors. The new option will apply in the EU, EEA and Switzerland.
From chatbots to adbots
Privacy International investigates how AI giants want to monetise their tools to cover their high costs, and advertising appears to be a component of many of these schemes. Microsoft, for example, is experimenting with advertising formats through its ads for chat API. Amazon’s latest shopping chatbot, Rufus, aims to proactively recommend products based on what it knows of a user’s habits and interests.
As a result, sponsored chatbot outputs can be far more invasive, because they can draw on far more intimate information collected over time about the user and how they behave and react.