Strategic Compliance in the EU: Balancing Competition, GDPR and AI Regulation

AI is no longer confined to tech gossip or futuristic movies, and the race for AI within the tech industry continues to intensify. China and North America are poised to capture the largest economic gains from AI, with projected boosts of 26% and 14.5% to their respective GDPs by 2030, a combined total of $10.7 trillion. Europe, one of the strongest competitors in the field, must keep pace with major players such as China and the USA by allocating its resources to the development of new AI technologies. The European Union (EU) therefore faces a difficult balancing act: maintaining its competitiveness while protecting the fundamental rights of its citizens.

The Economic Impact of AI

BITKOM, Germany’s digital association, conducted a survey with a striking finding: roughly half of the EU companies surveyed have already abandoned new, innovative projects because of ambiguities in the interpretation of the GDPR. Fear of potential penalties and legal ramifications could further discourage companies from investing in new AI technologies.

The new AI Act, still on the EU’s legislative agenda, will largely determine the competitiveness of the European AI industry and holds the power to shape it for the next decade. An unprecedented challenge for the EU’s fast-paced tech industry, however, is the patchwork of member state laws and regulations that hampers innovation. The privacy concerns of EU citizens are another important issue that directly constrains AI innovation. The new AI Act therefore envisions an AI regulatory sandbox to establish a sustainable competitive environment for AI technologies while safeguarding citizens’ fundamental rights.

High-risk AI systems are defined in Article 6(1): an AI system is classified as high-risk where it “is intended to be used as a safety component of a product, or is itself a product, covered by Union harmonization legislation” and “the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to Union harmonization legislation.”

AI regulatory sandboxes make it easier for innovators to experiment with high-risk AI systems and to test their products with fewer legal hurdles. They offer legal flexibility, but not absolute immunity.

Looking across all types of AI failures, the most frequent problem is privacy risk. High-risk AI systems have the potential to inflict greater harm on the fundamental rights of citizens.

[Figure omitted: Incidence of AI failure modes. Source: Floridi, L. et al. (2022) ‘capAI – A procedure for conducting conformity assessment of AI systems in line with the EU Artificial Intelligence Act’. (1)]

The Role of the EU in AI Regulation

To effectively address the legal implications arising from AI failures, special attention needs to be given to the rules that shape the direction of the regulatory sandbox. These rules include: processing data in the public interest, performance monitoring, risk mitigation, a secure data environment, restrictions on data transmission, reduction of the impact on data subjects, technical documentation, record-keeping, and transparency for experimenters. Designed to protect the privacy of data subjects, these rules are in line with the General Data Protection Regulation (EU) 2016/679 (GDPR).

Article 54(1)(c) of the AI Act requires effective monitoring mechanisms to identify risks to data subjects’ fundamental rights during sandbox experimentation. If an issue arises that infringes on the privacy of data subjects, the risks must be mitigated and, if necessary, the processing halted altogether. Organizations must keep records of the decisions taken and the efforts made to halt data processing in order to demonstrate compliance. Each high-risk AI experiment differs in nature, so a case-by-case examination is necessary. The balancing test between the participants’ privacy interests and the experimenter’s interests cannot practically be determined beforehand or once for every experiment. The recommended best practice, which is also a privacy-by-design requirement under Article 25 GDPR, is therefore to involve privacy experts in designing the experiments.
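To make the record-keeping requirement concrete, the sketch below shows one possible way an organization could log mitigation decisions for a sandbox experiment. This is a minimal illustration in Python, not anything prescribed by the AI Act or supervisory guidance; the ExperimentAuditLog class, its fields, and the example entries are all hypothetical.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class MitigationRecord:
    """One logged decision, kept so compliance can be demonstrated on request."""
    timestamp: str
    risk_identified: str      # e.g. a re-identification risk in the training set
    action_taken: str         # e.g. "processing halted", "dataset pseudonymized"
    processing_halted: bool

@dataclass
class ExperimentAuditLog:
    """Hypothetical audit trail for one sandbox experiment (illustrative only)."""
    experiment_id: str
    records: list = field(default_factory=list)

    def log_decision(self, risk: str, action: str, halted: bool) -> None:
        # Every mitigation decision gets a timestamped, immutable entry.
        self.records.append(MitigationRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            risk_identified=risk,
            action_taken=action,
            processing_halted=halted,
        ))

    def export(self) -> str:
        """Serialize the log so it can be produced for a supervisory authority."""
        return json.dumps({
            "experiment_id": self.experiment_id,
            "records": [asdict(r) for r in self.records],
        }, indent=2)

# Example: record that processing was halted after a privacy risk was found.
log = ExperimentAuditLog("sandbox-exp-001")
log.log_decision(
    risk="model outputs leaked identifiable participant data",
    action="processing halted and data protection officer notified",
    halted=True,
)
print(log.export())
```

The point is simply that each decision, including a decision to halt processing, leaves a timestamped, exportable trace that can be handed to a supervisory authority.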

Regulatory Sandbox for AI

AI regulatory sandboxes are defined in Article 53(1) of the new AI Act as: “a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan.”

Participants in the AI regulatory sandbox remain liable for the experiments they conduct, and, as stated in Article 53(2) of the AI Act, “Member States shall ensure that national data protection authorities and other national authorities are associated with the operation of the AI regulatory sandbox.” The corrective powers of the competent supervisory authorities in relation to data subject rights likewise remain unaffected.

The AI Act also introduces practices designed specifically for high-risk AI systems, such as implementing quality management systems, maintaining technical documentation, and establishing post-market monitoring plans. The overarching goal, however, is to ensure that these practices address privacy concerns in a way that protects fundamental rights. As stated in the ICO’s “Regulatory Sandbox Final Report,” practices such as using synthetic data for innovation can also help reduce the risk to privacy. Synthetic data is nevertheless generated from real data and must be analyzed carefully.
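As a concrete illustration of that caveat, here is a minimal Python sketch (not taken from the ICO report) that fits a toy parametric model to a “real” numeric column and samples synthetic values from it. The variable names and the simple Gaussian model are assumptions chosen for brevity; real synthetic-data pipelines are far more sophisticated, but the residual risk is the same in kind: the synthetic output still encodes statistics learned from real people.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a sensitive real-world column, e.g. ages of data subjects.
real_ages = np.array([23, 35, 41, 29, 52, 47, 38, 31, 60, 27], dtype=float)

# Fit a simple parametric model to the real data: here, just mean and std.
mu, sigma = real_ages.mean(), real_ages.std(ddof=1)

# Sample synthetic records from the fitted model instead of sharing real ones.
synthetic_ages = rng.normal(loc=mu, scale=sigma, size=100)

# The synthetic column contains no individual's record, but it still carries
# statistics derived from real people -- the residual privacy risk that the
# ICO report says must be analyzed carefully.
print(f"real mean/std:      {mu:.1f} / {sigma:.1f}")
print(f"synthetic mean/std: {synthetic_ages.mean():.1f} / {synthetic_ages.std(ddof=1):.1f}")
```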

The use of personal data for high-risk AI systems is challenging, but necessary in some cases, such as public health and safety. AI regulatory sandboxes facilitate this, particularly where it serves the public interest. Nevertheless, supervisory authorities have the authority to halt experiments if they deem it necessary. New guidelines from the data protection supervisory authorities and the future cooperation of the European Artificial Intelligence Board are expected to reveal how the AI industry will take shape within the EU’s Single Data Market policy.

(1) Floridi, L. et al. (2022) ‘capAI – A procedure for conducting conformity assessment of AI systems in line with the EU Artificial Intelligence Act’, SSRN Electronic Journal, p. 57.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
