It is not surprising that Artificial Intelligence (AI) and privacy (by design) live in constant tension. It does not help that laws and regulations are slow to keep up and lack a coherent framework. Meanwhile, AI technologies are being introduced across all sectors of our daily lives. Deloitte's AI report, The AI Dossier, highlights the increasing use of AI applications, in particular tools used in Human Resources (HR) such as candidate search, employee engagement and even benefits programs.
Why do GDPR assessments on AI matter?
If your company, regardless of size, already uses or wishes to introduce AI tools or apps that interact with humans in the workplace without carrying out an in-depth risk assessment, it may face penalties arising from both foreseen and unforeseen risks. Foreseen risks are fairly obvious risks that the company did not take the necessary and obligatory steps to prevent from becoming heightened security threats. Unforeseen risks result from a company not carrying out a Data Protection Impact Assessment (DPIA), or not assessing the technology in detail through human oversight and intervention at the individual level, thus allowing some form of negligence to creep in. This can lead to several GDPR violations, such as impacting the rights and freedoms of data subjects (Articles 12-22), which privacy by design would otherwise have averted. It is nearly impossible to assess and predict all risks; the objective is rather to display user-centricity instead of runaway enthusiasm for the capabilities of the technology, thereby building trustworthy AI with users.
Risk assessments by product designers that objectively surface risks for data subjects are particularly challenging, a reality legislators did not ignore. To that effect, the need to assess technology from the perspective of the data subject, embodied in Article 35(9)'s requirement to seek the views of the individuals whose data will be subjected to the technology, illustrates the intention to provide a feedback loop in product design, much as designs are tested on consumers in market research, for example.
GDPR Fines related to Artificial Intelligence
In May 2021, the Spanish Data Protection Authority (AEPD) imposed two fines totaling €1.5 million on EDP ENERGÍA, SAU under Articles 6, 13 and 25. One key element of the fines was how the AEPD based its decisions: the infringements of Articles 6 and 22 were instrumental to the infringement of Article 13. Recall the HR example mentioned above, and imagine your HR department not vetting AI-capable apps or tools introduced into candidate applications. Did your department inadvertently discriminate against potential candidates, thereby undermining a central purpose of HR, that of promoting and sustaining diversity in the workplace? In 2018, Reuters reported that Amazon's new recruiting engine had learned to disqualify anyone who attended a women's college or who listed women's organizations on their resume, effectively excluding women from the candidate pool. Amazon has since scrapped the tool and implemented a watered-down recruitment system; nevertheless, AI in Human Resources is expected to grow. Ultimately, and more concerning, such a company violates anti-discrimination laws, which in turn exposes it to penalties. Under the GDPR, these penalties range from a simple order to alter the processing, to being barred from processing data, and/or being fined.
Therefore, failing to put in the groundwork to ethically evaluate tools that may or may not have AI capabilities is likely to incur high costs, erode trust among your employees, and put your company's reputation at stake for future partnerships.
Why ethical assessments are essential for GDPR compliance
To be, or not to be ethical?
One may not always know how to scope ethical questions in today's world of big data, data collection, and AI and ML capabilities; for example, what is intrinsically right or wrong about collecting large amounts of data, or health data concerning children? Today, many private and public organizations, including governments, understand the stakes of considering ethics and its importance in data collection and use. The GDPR further embeds ethics into law within the EEA. It safeguards the rights and freedoms of data subjects by holding organisations to standards of data protection, privacy and ethics. This is notable, for instance, in the requirements of Article 5(1)(a), lawfulness, fairness and transparency, and Article 5(1)(b), purpose limitation, which provide for a heightened requirement to communicate and to align the processing with what the data subject expects and with what is necessary for the processing. The principle of privacy by design mentioned previously is introduced by the GDPR in Article 25 and Recital 78, while Article 22 and Recital 71 address automated decision-making.
The European Commission introduced a proposal for an EU regulatory framework on artificial intelligence (AI) in April 2021. The framework will complement the GDPR's regulation of AI in Articles 13, 15-22 and 25, and is intended to focus on specific uses of AI systems and their associated risks. Waiting for it to be published and come into force is, however, not the recommended approach. Investing years into product development only to find out that the product must be overhauled to satisfy data protection requirements prior to release can prove disastrous. Tell-tale signs of this happening occur when co-innovation partners start pulling out of discussions. Here at TechGDPR, in preliminary discussions we have, albeit rarely, come into contact with products that are ethically questionable or intrinsically at odds with data protection. With a sharp eye for current and future trends in regulation, we help innovators understand where their products require consolidation.
As a proactive start, consider the assessment checklists created by supervisory authorities to guide private and public organizations in ethically assessing tools and their AI features.
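As a loose illustration of how such a checklist can begin, the situations in which GDPR Article 35(3) makes a DPIA mandatory can be sketched as a minimal pre-screening check. The `ToolProfile` structure and `dpia_required` helper below are hypothetical names invented for this sketch; a real assessment requires the full supervisory-authority checklists and legal review, not a three-question filter.

```python
# Minimal DPIA pre-screening sketch (hypothetical helper, not an official
# supervisory-authority checklist). The three flags mirror the cases in
# GDPR Article 35(3) where a DPIA is mandatory.

from dataclasses import dataclass


@dataclass
class ToolProfile:
    # Art. 35(3)(a): systematic, automated evaluation producing decisions
    # with legal or similarly significant effects on individuals
    automated_decisions_with_legal_effects: bool
    # Art. 35(3)(b): large-scale processing of special categories of data
    large_scale_special_category_data: bool
    # Art. 35(3)(c): large-scale systematic monitoring of public areas
    systematic_public_monitoring: bool


def dpia_required(tool: ToolProfile) -> bool:
    """Return True if any Article 35(3) trigger applies to the tool."""
    return (
        tool.automated_decisions_with_legal_effects
        or tool.large_scale_special_category_data
        or tool.systematic_public_monitoring
    )


# Example: an AI candidate-screening tool that scores applicants automatically.
screening_tool = ToolProfile(
    automated_decisions_with_legal_effects=True,
    large_scale_special_category_data=False,
    systematic_public_monitoring=False,
)
print(dpia_required(screening_tool))  # True: a DPIA must be carried out
```

Note that passing such a screen does not make a tool compliant; it only flags the cases where skipping the DPIA itself would be a violation.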
Can AI comply with privacy by design requirements?
AI and machine learning technologies require large amounts of data to function at all, let alone to produce a workable algorithm. A strong proposition of the technology is its use of data lakes in innovative ways. From the outset, this is at odds with data protection law, which requires any processing to have a stated purpose before it is performed.
One can argue that no explicit law or regulation has been enacted that fully clarifies how companies should assess a tool's ethical footprint. Be that as it may, companies retain the duty to ensure privacy by design under the GDPR. Checklists and assessment methodologies abound, created to guide organizations in assessing tools and their AI capabilities.
We recommend that product teams start early and take a proactive role by engaging their DPO and their data protection, legal, IT and information security teams.