In this issue, we explore the DORA application deadline and its interfaces with the GDPR; how to conduct an AI impact assessment or integrate it into your existing privacy risk management processes; proposed US restrictions on transfers of sensitive data to countries of concern; customers' expectations about their data; a Real-Time Bidding explainer; a Sky Italia telemarketing fine; and a new Meta privacy violation.
Stay up to date! Sign up to receive our fortnightly digest via email.
DORA application deadline
As the Digital Operational Resilience Act will apply from 17 January 2025, the European supervisory authorities have called on financial entities and third-party providers to advance their preparations on the information and communication technology requirements. Data protection experts also point to important interfaces between DORA and the GDPR. Both regulations aim to ensure data integrity, confidentiality and availability, and share overlapping requirements such as notification of security incidents, risk management, technical and organisational measures, controls and audits. An integrated strategy that covers both data protection and IT security is therefore needed to comply with the two regulations.
Third-country authorities and GDPR certification
The EDPB published guidelines on GDPR Article 48 about data transfers to third-country authorities. Sharing data with public authorities in other countries can help collect evidence in criminal cases, check financial transactions, or approve new medications. The guidelines clarify how organisations, private and public, can best assess under which conditions they can lawfully respond to such requests. The Board also adopted an opinion approving the Brand Compliance certification criteria concerning processing activities by controllers or processors across Europe. GDPR certification helps organisations demonstrate their compliance with the law and helps people trust the product, service, process or system for which organisations process their data.
More legal updates
US restricted transfers: The Department of Justice has proposed restrictions on cross-border transfers of sensitive personal data to “countries of concern”. The rule would, among other things, restrict data brokerage transactions with China, Russia, Iran, North Korea, Cuba, and Venezuela that pose significant national security risks, and limit some vendor, employment, and investment arrangements with those countries unless they fulfil specified security standards.
These adversaries may be interested in biometric and genomic data, healthcare data, geolocation information, vehicle telemetry, mobile device information, financial transaction data, and data on individuals’ political affiliations and leanings, hobbies, and interests. Countries of concern can exploit access to US government-related data or Americans’ bulk sensitive personal data to collect information on activists, academics, journalists, dissidents, and political figures.
Oregon and several other US states have recently advanced their privacy laws. The Oregon Consumer Privacy Act, for instance, already applies to all for-profit businesses and will apply to covered charitable organisations from 1 July 2025. It gives residents the right to opt out of the sale of their personal information, profiling, and targeted advertising, as well as the right to obtain a copy of, correct inaccuracies in, and delete the personal and sensitive data a business has collected about them.
On 1 January 2025, consumer privacy laws will take effect in five more states: Iowa, Delaware, New Hampshire, Nebraska, and New Jersey.
Customer expectations about their data
Assessing customer expectations regarding the processing of their data is an essential element in ensuring the lawfulness and transparency of data processing, states the Latvian regulator. Reasonable expectations are what a customer, given their specific relationship with the organisation, the types of data involved and the information available to them, can naturally expect from the processing of their data. Practical ways to assess expectations include surveys, interviews and focus group discussions, as well as consulting industry standards and previous experience.
Internal procedures and training
Developing appropriate internal procedures and providing regular training also helps ensure employees know how to support the company’s compliance efforts. This is especially useful when a business expands rapidly, hiring new employees as its client base grows. If non-compliance is detected that could result in a violation of customer data processing and protection rules, the company, with the help of its data protection specialist, has to prepare an action plan, which may include:
- conducting internal audits,
- reporting immediately to the responsible person,
- reviewing and improving legal bases and purposes of processing,
- reviewing related documentation,
- corrective measures such as informing data subjects, etc.
More from supervisory authorities
Machine learning and training data: America’s NIST continues its series of posts about privacy-preserving federated learning (PPFL). Unlike traditional centralised learning, PPFL solutions prevent the organisation training the model from looking at the training data. Model training is, however, only a small part of the machine learning workflow: in practice, data scientists spend much of their time on data preparation and cleaning, handling missing values, and feature construction and selection. Challenges can arise from poor-quality data, or from data maliciously crafted to degrade the quality of the trained model.
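To make the federated pattern concrete, here is a minimal sketch of federated averaging in Python. It is illustrative only: the client data, dimensions and learning rate are invented, and a true PPFL deployment would add secure aggregation or differential privacy so that even individual model updates reveal little about the underlying data.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass: only the updated weights leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        preds = X @ w
        w -= lr * (X.T @ (preds - y)) / len(y)  # mean-squared-error gradient step
    return w

# Hypothetical data held by three separate clients; the server never sees it.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):
    # Each client trains locally on its own data.
    updates = [local_update(global_w, X, y) for X, y in clients]
    # The server aggregates only the updates (here a plain average; PPFL
    # variants use secure aggregation so no single update can be inspected).
    global_w = np.mean(updates, axis=0)

print(global_w)
```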
On AI model training, the Spanish regulator AEPD recently discussed a use case: a single-neuron network that determines whether a person is overweight, contrasted with a multi-layer network that allows for more complex classifications but can also lead to ‘hallucinations’. From a data protection perspective, the question is which structure is most appropriate to the context and purpose of the processing operation. If, for example, the chosen structure requires a quantity and diversity of data samples that cannot be obtained, or that would be disproportionate or illegitimate to collect, the purpose cannot be achieved from the design stage.
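To illustrate how small the first model in AEPD’s comparison is, here is a minimal sketch of a single-neuron (logistic) classifier in Python; the features, labels and training data are synthetic and invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic, invented data: [height_m, weight_kg]; label 1 = overweight (BMI > 25).
height = rng.uniform(1.5, 2.0, size=200)
weight = rng.uniform(45.0, 120.0, size=200)
X = np.column_stack([height, weight])
y = (weight / height**2 > 25).astype(float)

# Standardise the features so plain gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# A single neuron: a weighted sum of the inputs passed through a sigmoid.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of "overweight"
    w -= lr * (X.T @ (p - y)) / len(y)   # gradient of the log-loss w.r.t. weights
    b -= lr * np.mean(p - y)

print("training accuracy:", np.mean((p > 0.5) == y))
```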
Software developers: The Italian regulator Garante approved a Code of Conduct concerning the processing of personal data by companies that develop and produce management software. Such software, intended for companies, associations, professionals and public administrations, is used to fulfil tax, social security, welfare and management obligations, draft financial statements, and manage personnel and corporate duties, with a significant impact on the protection of personal data.
Sky Italia telemarketing fine
The Italian regulator also fined Sky Italia over €840,000 for numerous violations found in its telemarketing activities and commercial communications. The company carried out marketing by telephone and SMS without adequate checks on its information and consent obligations. Sky also failed to consult the public opt-out register for the contacted users before each promotional campaign.
Some users had been contacted on the basis of consent acquired even before the GDPR came into full effect. The documentation of consents acquired from data supply companies was also unsuitable to unequivocally demonstrate the wishes of the data subjects, as Sky stored the consent details in editable Excel files. Furthermore, Sky relied on marketing consent that was collected automatically during registration on its website and was mandatory for using the service offered.
More enforcement decisions
The Irish Data Protection Commission fined Meta 251 million euros. The investigations were launched following a personal data breach reported by Meta in September 2018. It impacted approximately 29 million Facebook accounts globally, of which approximately 3 million were based in the EU/EEA. The categories of personal data affected included users’ full names, email addresses, phone numbers, locations, places of work, dates of birth, religion, gender, posts on timelines, groups of which a user was a member, and children’s personal data. The breach arose from the exploitation by unauthorised third parties of user tokens on Facebook.
CCTV: The Swedish data protection authority fined Granit Bostad Beritsholm AB for unauthorised camera surveillance in an apartment building. Cameras had been installed at the three main entrances, at lifts and apartment doors, and in the basement corridor next to the storage room, laundry room and sauna, with several more in the garage, bicycle storage and garbage room, and at the back of the property.
The company must now cease camera surveillance of all areas of the property except the garage, and the camera signage must state the company’s identity and contact details.
Prison sentence: A motor insurance worker who led a team dealing with accident claims has been handed a suspended prison sentence after an investigation by the UK Information Commissioner. The company had reported to the regulator that it suspected an employee was unlawfully accessing its systems, having become suspicious of the higher-than-normal number of claims being processed. An internal investigation found he featured in 160 of the claims, despite his role not involving access to claims. A search of the suspect’s home also found that he had been sending personal data he accessed to another person via mobile phone.
AI impact assessment
The Future of Privacy Forum has prepared a detailed guide on how organisations can conduct AI impact assessments. Organisations typically take four common steps: a) initiating an AI impact assessment; b) gathering model and system information; c) assessing risks and benefits; and d) identifying and testing risk management strategies. There is also a trend within organisations to perform multiple assessments at different points in the AI lifecycle, and to integrate AI impact assessments into existing risk management processes, including those around privacy.
Real-Time Bidding
America’s FTC announced a new enforcement action alleging that the data broker Mobilewalla collected and retained sensitive location information from consumers, often without their consent, and shared those details with third parties to target advertisements. Most of the advertisements we see online involve a process called real-time bidding (RTB): publishers (websites, apps, or other digital media with ad space to sell) auction off their empty ad slots on exchange platforms, and advertisers bid for the placement.
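As a toy illustration of the auction mechanics, here is a minimal sketch in Python; the bidder names and prices are invented, and real exchanges run such auctions in milliseconds over protocols such as OpenRTB, under first- or second-price rules:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    price_cpm: float  # bid price per thousand impressions

def run_auction(bids):
    """Toy second-price auction for one ad impression:
    the highest bidder wins but pays the runner-up's price."""
    ranked = sorted(bids, key=lambda b: b.price_cpm, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner.advertiser, runner_up.price_cpm

# A publisher's ad slot goes to the exchange; demand-side platforms respond with bids.
bids = [Bid("dsp_a", 4.20), Bid("dsp_b", 3.80), Bid("dsp_c", 5.10)]
winner, clearing_price = run_auction(bids)
print(f"{winner} wins the impression at {clearing_price} CPM")
```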
Big Tech
LinkedIn suspended AI training in Canada: The Privacy Commissioner welcomed LinkedIn’s commitment to pause the training of AI models using personal information from Canadian member accounts. While LinkedIn indicated that it believed it had implemented its AI models in a privacy-protective manner, the company agreed to engage in discussions with the regulator to ensure that its practices comply with Canada’s federal private-sector privacy law. LinkedIn recently also suspended AI training using UK and EU data.
The European Data Protection Supervisor is examining the Commission’s compliance regarding its use of Microsoft 365. The Commission may have infringed several provisions of the data protection law for EU institutions, bodies, offices and agencies, including those on transfers of personal data outside the EU/EEA. In its March 2024 decision, the EDPS ordered the Commission to suspend all data flows resulting from its use of Microsoft 365 to Microsoft and its affiliates and sub-processors located in countries outside the EU/EEA that are not covered by an adequacy decision. A court proceeding in the matter is ongoing.
AI development: The UK Information Commissioner is urging Generative AI developers to tell people how they’re using their data. This could involve providing accessible and specific information that enables people and publishers to understand what personal data has been collected. Without better transparency, it will be hard for people to exercise their information rights and for developers to use legitimate interests as their lawful basis. The Commissioner also encourages AI firms to get advice from the regulator through the Regulatory Sandbox and Innovation Advice services, as well as from other regulators through the DRCF AI & Digital Hub.