The DPO as a value for a company

The French data protection regulator CNIL has studied the economic benefits of having a Data Protection Officer within companies. Statistical analysis shows that the role is often profitable, especially for companies that take a positive approach to GDPR compliance. The two most represented sectors in the study were research, IT and consulting on the one hand, and banking, insurance and mutual insurance on the other. The DPO function brings several types of benefits – leverage to win calls for tenders, avoidance of sanctions, avoidance of data leaks and rationalisation of data management. Here are some examples:
- The DPO is the point of contact for the supervisory authority and the persons whose data is processed. As such, they can take charge of organising the processing of people’s requests to exercise their rights so that a complete response is provided within the set deadlines.
- The DPO contributes to a better knowledge of the company’s information assets. In doing so, their action helps to facilitate the use of data by centralising information and avoiding duplicates or data silos. This makes it easier for teams to access relevant data, which improves the efficiency of internal processes and decision-making.
- A DPO ensures adherence to the main GDPR principles of purpose limitation, data minimisation and storage limitation, which leads to operational savings in storage space (as well as fewer entry points for cybercriminals).
- Finally, DPOs advise companies on the security measures to be put in place and participate in privacy impact assessments. They can carry out checks and audits and alert managers when security flaws are found.
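The data minimisation and retention principles mentioned above can be made concrete in code. Below is a minimal sketch of a retention purge; the categories and periods are purely illustrative assumptions, since actual retention periods depend on the legal basis of each processing activity:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy (category -> maximum retention period).
# Real periods must be set per processing activity and legal basis.
RETENTION = {
    "prospect": timedelta(days=3 * 365),   # e.g. 3 years after last contact
    "invoice": timedelta(days=10 * 365),   # e.g. 10 years (accounting rules)
}

def purge(records, now=None):
    """Return only the records still within their retention period."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        # Keep records with no defined limit (flag them for review instead).
        if limit is None or now - rec["created"] <= limit:
            kept.append(rec)
    return kept
```

Running such a purge on a schedule is one way the storage savings the CNIL describes materialise in practice.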
There is also a return on investment in the sense that DPOs who have more time to dedicate to their function are better placed to ensure the company's compliance, which reduces the likelihood of being sanctioned. However, not all companies with DPOs see these benefits. They are better realised by large companies and by those most invested in GDPR compliance, which view compliance as a lever rather than a constraint. Adopting certain good practices can help generate economic gains from the DPO function:
- Involve DPOs in certain executive committee meetings so they can align compliance with the company's overall strategy.
- Integrate GDPR compliance with the corporate social responsibility (CSR) and information systems security (ISS) strategies to promote consistent planning and operations.
- Try to quantify the economic benefits linked to the DPO's role in the company, informally or through internal consultations.
- Help other business lines understand the importance of compliance concerns in the organisation's strategy, acknowledge the DPO as a value creator, and coordinate their efforts with those of other departments.
EU-UK data transfers
According to a draft document released by the European Commission on 22 July, the UK maintains an adequate level of protection for EU-UK data transfers under the new Data Use and Access Act 2025 (DUAA), aligning with the EU GDPR and the Law Enforcement Directive. While the scope of the DUAA, which amends the UK GDPR and the DPA 2018, goes well beyond the protection of personal data, it provides for limited changes to several aspects of the data protection regime:
a) the rules on data processing for purposes of scientific research, b) the legal bases for data processing, c) the rules relating to the purpose limitation principle, and d) the conditions for automated decision-making. In addition, the DUAA amends the governance structure of the ICO: once implemented, these measures will replace the ICO with a new entity, the Information Commission, although the role and functions of the regulator will remain unchanged. The Act also introduces new enforcement powers for the regulator.
More legal updates
UK children’s data: On 25 July, the Protection of Children Code of Practice for regulated search services came into force, as required under the Online Safety Act 2023. The code imposes specific duties on search service providers to implement measures addressing content that is harmful to children, including requirements for governance and accountability arrangements, search moderation systems, content reporting mechanisms, complaints procedures, user support functionalities, and publicly available safety statements, digitalpolicyalert.org reports.
EU AI Act provisions: The provisions of the EU AI Act on general-purpose AI (GPAI) models entered into force on 2 August. They mean clearer information about how AI models are trained, better enforcement of copyright protections and more responsible AI development. The Commission has also confirmed that the GPAI Code of Practice, developed by independent experts, is an adequate voluntary tool for providers of GPAI models; providers who sign and adhere to the Code will benefit from a reduced regulatory burden and increased legal certainty. Providers must comply with transparency and copyright obligations when placing GPAI models on the EU market, and models already on the market must be brought into compliance by 2 August 2027.
AI Act implementation in Germany: EU member states were required to designate competent market surveillance authorities to oversee the AI Act by 2 August. Germany has missed this deadline, according to the Hamburg Data Protection Commissioner HmbBfDI. The regulator is therefore appealing to the federal government to promptly designate the market surveillance authorities stipulated by the AI Act, which, at least in some areas, should also include the data protection supervisory authorities. Due to the delay, companies and authorities lack a reliable point of contact for questions about the AI Act, which is also a disadvantage for Germany as a centre of AI innovation.
Web filtering
A web filtering gateway, often referred to as a web proxy, is a device or service used to control and monitor internet access by filtering web content according to predefined policies. Its main role is to block access to certain websites or categories of content for security and compliance reasons.
Web filtering gateways can help organisations meet their data security obligations (Art. 32 of the GDPR). However, they rely on data processing that must itself comply with the GDPR. To that end, the French data protection regulator CNIL has opened a public consultation on a draft guideline (in French) to promote cybersecurity solutions of this kind that comply with the GDPR, both in their use and in their design. The draft document targets data controllers who, as employers, deploy a web filtering gateway (URL filtering and the detection and blocking of malicious payloads) to secure internet browsing on their information system. This applies to the browsing of employees, agents, service providers or external visitors. It does not cover the use of web filtering gateways by data controllers providing internet access via public Wi-Fi, as is the case with retailers, media libraries or other public or private organisations.
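To illustrate the URL-filtering mechanism such gateways implement, here is a minimal sketch; the blocklist is a hypothetical stand-in for the continuously updated category databases real products rely on, and a GDPR-compliant deployment would also minimise what gets logged about individual users:

```python
from urllib.parse import urlsplit

# Hypothetical category blocklist; real gateways use vendor-maintained
# category databases that are updated continuously.
BLOCKED_HOSTS = {
    "malware.example": "malware",
    "phishing.example": "phishing",
}

def filter_url(url):
    """Return (allowed, reason) for a requested URL."""
    host = urlsplit(url).hostname or ""
    parts = host.split(".")
    # Match the host itself and every parent domain,
    # so sub.phishing.example is caught by the phishing.example entry.
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in BLOCKED_HOSTS:
            return False, BLOCKED_HOSTS[candidate]
    return True, None
```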
More from supervisory authorities
Human intervention in automated decisions: The Dutch data protection authority AP has developed guidelines for organisations on meaningful human intervention in algorithmic decision-making (in Dutch only). Art. 22 of the GDPR prohibits a decision based solely on automated processing that produces legal effects for data subjects or similarly significantly affects them. If, for example, the employee reviewing a credit application is hindered by time pressure or by an unclear automated system, the human intervention may not meaningfully affect the outcome of the decision. The recommendations are written as practically as possible to best address the questions organisations have.
Profiling online: The UK ICO prepared a draft of guidelines on Profiling Tools for Online Safety. This guidance applies to any organisations that carry out profiling, as defined in the UK GDPR, as part of their trust and safety processes. It is aimed at user-to-user services that are using, or considering using, profiling to meet their obligations under the Online Safety Act 2023. But it also applies to any organisations using, or considering using, these tools for broader trust and safety reasons.
However, due to the Data Use and Access Act (DUAA) coming into law on 19 June 2025, this guidance is under review and may be subject to change.
Data to train AI models
The European Commission has presented a template for general-purpose AI model providers to summarise the data used to train their models (under Art. 53 of the EU AI Act). General-purpose AI models are trained on large quantities of data, but only limited information is available about the origin of this data. The public summary will provide a comprehensive overview of the data used to train a model, list the main data collections and explain the other sources used. The template will also assist parties with legitimate interests, such as copyright holders, in exercising their rights under Union law, and supports wider scrutiny of particularly powerful models with systemic risk, such as testing for vulnerabilities and reporting serious security incidents.
The template is part of a broader initiative linked to the EU-wide rules for general-purpose AI models that took effect on 2 August 2025. It complements the guidelines on the scope of the rules for general-purpose AI models, published on 18 July, and the General-Purpose AI Code of Practice, released on 10 July. France's CNIL also offers a guide (in French) on how model makers can best ensure their systems comply, and suggests ways for companies to avoid using personal data when training their models.
Public disclosure of personal data
The UK ICO released guidelines for public bodies managing Freedom of Information requests and for organisations answering Subject Access Requests, both of which can involve large amounts of personal data. The guidance includes simple checklists and how-to videos, covering topics such as:
- Deciding on an appropriate format for disclosure to the public
- Finding various types of hidden personal information, including hidden rows, columns and worksheets, metadata and active filters
- Converting documents to simpler formats to reveal hidden data
- Avoiding using ineffective techniques to keep information secure
- Using software tools designed to help identify hidden personal information (such as Microsoft Document Inspector)
- Reviewing the circumstances of a breach to prevent a recurrence
- Removing and redacting personal information effectively
Data protection complaints increase
In the first half of 2025, significantly more people complained to the Lower Saxony State Commissioner for Data Protection about possible data protection violations than in the same period of the previous year. The authority recorded 1,689 data protection complaints from January to June 2025, compared to 1,186 a year earlier – a sharp increase of approximately 42 per cent. The authority also noted significant increases in complaints from the health, social services and municipal sectors, as well as from the real estate industry, credit reporting agencies and the financial sector. One reason for the high number of data breaches and complaints is the increasing digitalisation of business and administration: as more personal data flows, the risk of data protection violations also increases.
Similarly, the Lithuanian regulator VDAI reported that in the first half of 2025 most data breaches occurred due to human error, as well as to causes that normally applied technical and organisational measures cannot guard against (IT system errors, improperly performed programming work, etc.). It also found that a third of data security breaches were caused by cyber incidents (data encryption and ransomware attacks, unauthorised access to IT systems, social engineering attacks, credential and brute-force attacks, SQL injection and system disruption).
In other news
Temporary password fine: In Croatia, the personal data protection agency imposed an administrative fine of 320,000 euros on HEP-Toplinarstvo (an electric utility company). The agency received a complaint that when a user requested a change of a forgotten password on the HEP District Heating “My Account” portal, they were sent a temporary password by e-mail which was in fact the last password the user had set. Moreover, the passwords of all users of the “My Account” portal (almost 16,000 of them) were stored in the controller’s database in readable form. The controller had thus knowingly chosen a solution lacking basic data security measures, such as generating a random temporary password or protecting stored passwords with encryption; it neither took into account the risks to the security of personal data nor conducted an assessment of the risks of processing users’ data.
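By way of contrast with the practices faulted in this decision, here is a minimal sketch of the two safeguards it highlights: generating a random, single-use temporary password, and keeping passwords out of readable form by storing only salted hashes (the standard approach; the parameters below are illustrative):

```python
import hashlib
import hmac
import secrets

def new_temporary_password():
    """Generate a random, single-use temporary password."""
    return secrets.token_urlsafe(12)

def hash_password(password, salt=None):
    """Store only a salted hash, never the password itself."""
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

With a scheme like this, the portal could never have e-mailed a user their last password, because the controller would not hold it in recoverable form.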
McDonald’s fine: The Polish UODO has fined McDonald’s Polska approximately 3.9 million euros after a personal data breach. A file shared in a public directory contained data on McDonald’s employees and its franchisees: first and last names, passport numbers, McDonald’s restaurant number, work start and end dates and times, number of hours worked, position, days off, type of day, and type of work.
McDonald’s entrusted the processing of personal data of its restaurant chain’s employees to an external company to manage work schedules. The controller did not have the authority to manage the resources and configuration of the IT system containing the employee schedule module. Only the processor had such authority. At the same time, the provisions of the personal data processing agreement, particularly those related to audits and inspections, were not implemented. The controller failed to exercise proper oversight over the entrusted personal data.
In case you missed it
Agentic AI: The move to AI assistants and agents risks a sea change in privacy and security, states Privacy International. These services’ usefulness increases with the quantity and quality of the data they have access to, and the temptation will be to lower the friction of data controls to allow the processing of personal data. In one example, ChatGPT’s agent uses ‘connectors’ to interface with third-party applications, such as cloud data stores, calendars, email accounts, etc.
This allows ChatGPT’s agent to search data on those services, conduct deeper analysis, and sync data. This seems analogous to Anthropic’s ‘Model Context Protocol’, which provides context data from applications to LLMs. Consequently, Privacy International is worried that:
- these AI tools could generate new datasets about you that create new risks,
- access and share your data at unprecedented levels, and
- store this data beyond your reach, across their services and in the cloud.
Bias in AI systems: The Federal Office for Information Security in Germany issued a white paper on bias in artificial intelligence (in German). The term “bias” describes the systematic unequal treatment of individuals or organisations that can result from AI systems, and it can have various causes. The document outlines bias identification and mitigation as a continuous process, describing 11 different forms of bias, such as historical bias and automation bias, along with 13 mitigation strategies ranging from pre-processing to post-processing methods. It also highlights bias as a cybersecurity issue that can compromise availability, confidentiality and integrity.