Difference between Fundamental Rights Impact Assessment & Data Protection Impact Assessment

Through the AI Act, the EU seeks to ensure that AI systems used within the Union are safe and transparent. The Act provides a regulatory framework focused on safeguarding fundamental rights in relation to high-risk AI systems. Companies making use of AI, regardless of their size or industry, must now comply with the AI Act’s provisions, a significant step towards responsible and ethical AI development and deployment across the region. Article 113 of the EU AI Act states that the Regulation “[…] shall apply from 2 August 2026”. However, some provisions become applicable earlier or later than this date; most of the Act’s provisions require full compliance 24 months after its entry into force.

Crucially, the AI Act requires certain organisations deploying high-risk AI systems to conduct a comprehensive Fundamental Rights Impact Assessment (FRIA). This assessment proactively identifies and mitigates potential harms to individuals. Notably, the FRIA shares similarities with the Data Protection Impact Assessment (DPIA) mandated under the GDPR, underscoring the intersection of data protection and fundamental rights in the context of AI systems.

What is a Fundamental Rights Impact Assessment (FRIA)?

While the EU AI Act does not expressly define the FRIA, it explains the objective of the assessment and states what it must contain. Recital 96 of the AI Act states that “The aim of the fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals or groups of individuals…”. Moreover, the FRIA helps to “identify measures [to take] in the case of a materialisation of those risks”. Organisations must conduct the FRIA “prior to deploying the high-risk AI system” and update it “when ... any of the relevant factors have changed”.

In other words, a FRIA is an evaluation of the risks a high-risk AI system presents to individuals’ rights, combined with the determination of remediation strategies to manage and mitigate those risks should they materialise.

What should a Fundamental Rights Impact Assessment contain?

According to Article 27(1) of the EU AI Act, the Fundamental Rights Impact Assessment should contain the following information:

(a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

(b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;

(c) the categories of natural persons and groups likely to be affected by its use in the specific context;

(d) the specific risks of harm likely to have an impact on the categories of natural persons ..., taking into account the information given by the provider pursuant to Article 13 (transparency obligations of AI providers);

(e) a description of the implementation of human oversight measures, according to the instructions for use;

(f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.

Interestingly, Article 27(4) of the EU AI Act states that if organisations meet “any of the obligations laid down in this Article […] through the data protection impact assessment conducted pursuant to Article 35 of [the GDPR]…, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment”. In other words, where a DPIA already covers part of the ground, the FRIA is meant to complement it rather than duplicate it.
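For teams that track these obligations in internal tooling, the Article 27(1) content can be captured as a simple record. The following is a minimal, purely illustrative sketch in Python; the class name, field names and example values are our own shorthand and do not come from the Act itself.

```python
from dataclasses import dataclass

# Purely illustrative: one way a deployer might record the minimum
# FRIA content of Article 27(1) EU AI Act in internal tooling.
# All names below are our own shorthand, not terms defined in the Act.

@dataclass
class FRIARecord:
    deployer_processes: str         # Art. 27(1)(a): processes in which the high-risk AI system is used
    period_and_frequency: str       # Art. 27(1)(b): intended period and frequency of use
    affected_categories: list[str]  # Art. 27(1)(c): persons/groups likely to be affected
    specific_risks: list[str]       # Art. 27(1)(d): risks of harm, informed by Art. 13 provider information
    human_oversight: str            # Art. 27(1)(e): oversight measures per the instructions for use
    mitigation_measures: list[str]  # Art. 27(1)(f): measures if risks materialise, incl. internal
                                    #                governance and complaint arrangements

# Hypothetical example:
fria = FRIARecord(
    deployer_processes="CV screening during recruitment",
    period_and_frequency="Continuously, for every incoming application",
    affected_categories=["job applicants"],
    specific_risks=["indirect discrimination in candidate ranking"],
    human_oversight="An HR reviewer approves every automated shortlist",
    mitigation_measures=["suspend the system and re-screen affected candidates manually"],
)
```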

Intersection between Fundamental Rights Impact Assessment and Data Protection Impact Assessment

Article 35 of the GDPR states that a DPIA evaluates the impact of processing operations on the protection of personal data, in particular where those operations make use of new technologies and are likely to result in a high risk to the rights and freedoms of natural persons. On this basis, the FRIA and the DPIA both assess the impact of a high-risk activity on the rights of individuals and the protection of their personal data, for high-risk AI systems and high-risk processing operations respectively.

The table below offers a quick overview of the minimum information requirements for the FRIA and DPIA:

| Topic | FRIA | DPIA | Comments |
| --- | --- | --- | --- |
| Description of processing | ✔️ | ✔️ | FRIA: requires a description of the deployer’s processes. DPIA: requires a description of the controller’s processing operations. |
| Purpose of processing | | ✔️ | |
| The legitimate interests pursued | | ✔️ | |
| Risks to the rights and freedoms of individuals | ✔️ | ✔️ | FRIA: requires the specific risks to individuals, taking into account information provided by the provider of the AI system. DPIA: requires the risks to individuals, taking into account the nature, scope, context and purposes of the processing operation. |
| The necessity / proportionality of the operations in relation to the purposes | | ✔️ | |
| Measures to address the risks | ✔️ | ✔️ | FRIA: requires the measures to be followed in case the risks materialise, including internal AI governance and a complaints mechanism. DPIA: requires the safeguards and security measures to ensure the protection of personal data and to demonstrate compliance with the GDPR. |
| The time period and frequency of intended use | ✔️ | | |
| Categories of natural persons likely to be affected | ✔️ | | |
| Implementation of human oversight measures | ✔️ | | |
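Read side by side, the two sets of minimum requirements overlap only partly. The short Python sketch below, which is illustrative only and uses our own shorthand labels rather than terms defined in either regulation, models each list as a set so the overlap and the assessment-specific items are easy to inspect.

```python
# Illustrative only: the minimum content of each assessment modelled as
# sets, so the overlap and the gaps are easy to inspect. The labels are
# our own shorthand, not terms defined in either regulation.

FRIA_MINIMUM = {
    "description_of_processing",
    "risks_to_rights_and_freedoms",
    "measures_to_address_risks",
    "time_period_and_frequency",
    "categories_of_affected_persons",
    "human_oversight_measures",
}

DPIA_MINIMUM = {
    "description_of_processing",
    "purpose_of_processing",
    "legitimate_interests",
    "risks_to_rights_and_freedoms",
    "necessity_and_proportionality",
    "measures_to_address_risks",
}

print("Shared:   ", sorted(FRIA_MINIMUM & DPIA_MINIMUM))
print("FRIA only:", sorted(FRIA_MINIMUM - DPIA_MINIMUM))
print("DPIA only:", sorted(DPIA_MINIMUM - FRIA_MINIMUM))
```

Running the sketch confirms what the table shows: only the description of processing, the identification of risks and the measures to address them are common to both assessments.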

FRIA and DPIA in practice

The minimum requirements for the FRIA and the DPIA differ, although in practice both assessments often include additional information, making them quite similar. For example, Article 35 of the GDPR does not mandate the inclusion of data subject categories in the DPIA, yet organisations logically include such details to identify risks to individuals’ rights and freedoms. Similarly, the EU AI Act does not explicitly require the purpose and proportionality of processes in the FRIA, yet organisations naturally include them when describing the processes and the necessity of the AI system.

What are the differences?

The major difference between the Fundamental Rights Impact Assessment and the Data Protection Impact Assessment lies in their focus. The FRIA focuses on how the AI system directly impacts the rights of individuals, while the DPIA focuses on how the processing operation impacts the protection of personal data and the rights of individuals.

The table below provides an overview of the major differences between the FRIA and the DPIA:

| FRIA | DPIA |
| --- | --- |
| Required for high-risk AI systems | Required for processing operations making use of new technologies, in particular when: automated processing is used and profiling is carried out on a large scale; special categories of personal data are processed; or systematic monitoring of a publicly accessible area occurs |
| Relates to deployers of high-risk AI systems | Relates to controllers |
| Deals with the impact of high-risk AI systems on the rights of individuals | Deals with the impact of processing operations on the rights of individuals |
| Is focused on mitigating risks to ensure that the rights of individuals are protected | Is focused on mitigating risks to ensure that personal data is protected |
| Considers information provided by the provider of the high-risk AI system | Considers information relating to the nature, scope, context and purposes of the processing operation |

Summary

The major takeaway is that the Fundamental Rights Impact Assessment and the Data Protection Impact Assessment play complementary roles; at least, this is the intent of the EU AI Act according to Article 27(4). Organisations deploying high-risk AI systems that process personal data will therefore have to conduct both assessments. If your organisation is a provider of high-risk AI systems, there is no requirement to conduct the FRIA. However, providers must make available to deployers the information they need to conduct the FRIA, because a substantial part of the assessment relies on the information presented by AI providers.

Given that the EU AI Act is new, organisations may struggle to identify their role in the AI value chain and to comply with the requirements attached to that role. At TechGDPR, we assess your processing operations, the information provided by AI providers and the envisaged implementation of the AI system to help determine which requirements apply under the EU AI Act. We can help you correctly classify the AI system(s) your organisation plans to manufacture or deploy, ensuring early detection of any outright prohibitions. This prevents your organisation from wasting valuable resources on systems not allowed within the EU.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
