
AI Age Verification: Big Tech’s Risky Fix for GDPR Violations

One-third of GDPR fines relate to the misuse of children’s data, yet big tech companies have still not implemented appropriate measures to safeguard children. In response, major platforms such as Google and TikTok plan to use AI age verification starting in 2025, estimating users’ ages from the content they interact with. However, this raises further concerns. Firstly, is this initiative arriving too late? Secondly, have these companies thoroughly considered the additional risks AI could pose in safeguarding children’s data?

Enforcement by authorities over violations of children’s rights

In recent years, several significant fines have been issued to tech giants over their mishandling of children’s data. Among these are:

2021–2022
  • In 2021, the Dutch supervisory authority fined TikTok €750,000 for violations of children’s privacy, specifically a lack of transparency: privacy information was provided only in English; and
  • In 2022, Meta was fined €405 million by the Irish Data Protection Commission (DPC) over Instagram’s handling of children’s data. Profiles of children aged 13 to 17 were set to public by default, and the same age range could set up “business accounts” that made their email address and phone number publicly available.
2023–2024
  • In 2023, TikTok was fined by both the UK and Irish authorities, for £12.7 million and €345 million respectively. The UK Information Commissioner’s Office (ICO) found that TikTok had a vast number of accounts tied to children under 13, something senior employees were already aware of. The ICO also considered that the measures in place to verify age and obtain parental consent were not appropriate, and that information on the processing was not provided in a transparent manner. The Irish DPC’s concerns mirrored its earlier findings against Meta: accounts belonging to minors were publicly accessible;
  • Following an investigation opened in 2023, OpenAI was fined €15 million by the Italian authority in late 2024, related to, amongst other issues, the lack of age verification; and
  • In late 2024, Meta came under fire again, subject to a €251 million fine from the Irish DPC. The fine followed a data breach that impacted approximately 29 million users, children among them.
2025
  • Most recently, in March 2025, reports emerged of a new investigation into TikTok’s practices, meaning that scrutiny over the platform’s handling of children’s data remains ongoing.

Despite these substantial penalties, among the highest since the GDPR took effect, the effectiveness of these interventions remains questionable, as few visible changes have been made to the platforms.

New AI Age Verification Measures: What’s Changing?

There have, however, been recent pledges to improve in this sector starting in 2025. Both Google, specifically for its YouTube service, and TikTok have suggested that they will use machine learning to help estimate users’ ages based on their interactions with the platforms. Meta, meanwhile, considers it sufficient that the Apple and Google app stores have implemented guardrails preventing underage users from downloading apps rated above their age range. These proposed measures, while a potential improvement over no age assurance at all, still raise questions, the most pressing being whether this is really the most compliant way forward to avoid further fines related to the use of children’s data.

Flaws in Current Age Verification Methods

The current state of these platforms suggests that their approach to age verification remains flawed. Many still rely on basic methods, such as asking users to input their birth date rather than merely ticking a box confirming they are over 13. While this may encourage slightly greater honesty from children, it remains easily bypassed without additional safeguards, as the sketch below illustrates.
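
As a minimal illustration, the following sketch (Python; all names are ours, not any platform’s actual code) computes a user’s age from a self-reported birth date and gates registration at 13. Nothing in the check prevents a child from simply entering an earlier year.

    from datetime import date

    MIN_AGE = 13  # the Article 8 GDPR floor adopted by many member states

    def age_from_birth_date(birth_date: date, today: date | None = None) -> int:
        """Completed years between a self-reported birth date and today."""
        today = today or date.today()
        years = today.year - birth_date.year
        # Subtract a year if the birthday has not yet occurred this year.
        if (today.month, today.day) < (birth_date.month, birth_date.day):
            years -= 1
        return years

    def may_register(birth_date: date) -> bool:
        # Only as trustworthy as the self-reported input: a child can
        # enter any earlier year and pass the check.
        return age_from_birth_date(birth_date) >= MIN_AGE

    print(may_register(date(2016, 6, 1)))  # False for a truthful young child
    print(may_register(date(2000, 6, 1)))  # True for the same child lying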

TikTok has taken a step forward since the fall of 2020 by applying more robust verification for its Live feature: users who wish to go live must be over 18 and confirm their age through facial age estimation, ID photo submission, or bank account verification. While this is a move in the right direction and aligns with age assurance mechanisms endorsed by Ofcom, it remains limited in scope. Nor does it appear to be used to verify users’ ages where parental consent is needed.
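
A hedged sketch of how such a layered flow could be wired together (Python; the method set matches the three options described above, but the control flow and names are assumptions, not TikTok’s implementation):

    from typing import Callable

    # Each checker returns a verified or estimated age, or None on failure/refusal.
    Checker = Callable[[], int | None]

    LIVE_MIN_AGE = 18  # threshold for going live, per the policy described above

    def verify_for_live(checkers: list[Checker]) -> bool:
        """Try each verification method in turn; succeed on the first
        method that confirms the user is at least LIVE_MIN_AGE."""
        for check in checkers:
            verified_age = check()
            if verified_age is not None and verified_age >= LIVE_MIN_AGE:
                return True
        return False

    # Stubs standing in for the real facial-estimation, ID-photo and
    # bank-account flows (all hypothetical):
    facial_estimation = lambda: 21  # a selfie model estimates roughly 21
    id_photo = lambda: None         # user declined to upload an ID
    bank_check = lambda: None       # user has no linked bank account

    print(verify_for_live([facial_estimation, id_photo, bank_check]))  # True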

Parental Controls vs. Platform Responsibility

App stores like Google Play and Apple’s App Store allow parents to set restrictions on their children’s devices, preventing the download of age-restricted apps. However, this shifts responsibility onto parents rather than the platforms themselves. Notably, many social media platforms, including Facebook, Instagram, TikTok, and YouTube, are rated 12+, even though Article 8 of the GDPR sets 13 as the lowest age at which member states may allow children to consent without parental approval. This discrepancy means children can still access these platforms without parental approval.

The Push for Stricter Age Verification Laws

Some countries, like France, are considering following Australia’s example with an outright ban on social media for young children (Australia’s ban targets users under 16; France has proposed banning access for those under 15). However, enforcing such a ban remains a challenge: without effective age verification mechanisms, prohibiting access is difficult. Moreover, some critics argue that such restrictions may be unconstitutional or infringe upon children’s rights.

Research conducted by Ofcom in the UK indicates that social media usage among children is rising compared to previous years. While comparable EU-wide statistics are less readily available, it is reasonable to assume that similar trends apply globally. This growing demographic highlights the urgency of implementing effective protections; however, the solutions proposed so far seem to come with further risks of their own. The promises can therefore be argued to be geared less towards the protection of children’s data and more towards avoiding further enforcement action.

Is AI Really the Solution?

As mentioned earlier, TikTok and YouTube plan to use machine learning algorithms to infer users’ ages, specifically targeting those who may be under 13. While this approach seems promising, it also introduces compliance risks.
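
Neither platform has disclosed how its models work. Purely to illustrate the shape of the technique, the sketch below (Python with scikit-learn; the engagement features and training labels are invented) fits a classifier that scores how likely an account is to belong to a child under 13:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented per-account engagement features:
    # [share of child-oriented content watched, median session length (min),
    #  number of school-themed accounts followed]
    X = np.array([
        [0.82, 35, 14],  # accounts labelled under 13 in some ground-truth set
        [0.76, 41, 9],
        [0.05, 22, 0],   # accounts labelled 13 or over
        [0.11, 18, 1],
    ])
    y = np.array([1, 1, 0, 0])  # 1 = under 13

    model = LogisticRegression().fit(X, y)

    new_account = np.array([[0.64, 30, 7]])
    p_under_13 = model.predict_proba(new_account)[0, 1]
    print(f"estimated probability of being under 13: {p_under_13:.2f}")

Acting on such a score alone would be a solely automated decision about a data subject, which is exactly what the EDPB guidance discussed below takes issue with.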

The European Data Protection Board (EDPB) adopted a statement on age assurance in February 2025. The statement outlines the need for age assurance mechanisms to be effective, secure, and compliant with GDPR principles. Among the key considerations is the right not to be subject to solely automated decision-making. The use of machine learning for age verification must therefore be assessed on a case-by-case basis and must include appropriate redress mechanisms, including the ability to request human intervention.
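
What “human intervention” could look like in practice is sketched below (Python; the thresholds and workflow are our assumptions, not the EDPB’s prescriptions): instead of restricting an account automatically on a model score, uncertain cases are routed to a human reviewer and every restriction remains appealable.

    from dataclasses import dataclass

    RESTRICT_THRESHOLD = 0.90  # assumed: act only on high-confidence scores
    REVIEW_THRESHOLD = 0.50    # assumed: queue uncertain cases for a human

    @dataclass
    class Decision:
        action: str       # "none", "human_review", or "restrict_pending_appeal"
        appealable: bool  # user may request human intervention (Art. 22(3) GDPR)

    def decide(p_under_13: float) -> Decision:
        if p_under_13 >= RESTRICT_THRESHOLD:
            # Even a high-confidence restriction must stay appealable.
            return Decision("restrict_pending_appeal", appealable=True)
        if p_under_13 >= REVIEW_THRESHOLD:
            return Decision("human_review", appealable=True)
        return Decision("none", appealable=False)

    print(decide(0.95))  # restricted, with a route to a human reviewer
    print(decide(0.60))  # sent to human review before any action is taken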

Additionally, the statement emphasizes that platforms processing children’s data must fully adhere to GDPR principles, including conducting a Data Protection Impact Assessment (DPIA) to evaluate risks and mitigation measures. Given that machine learning can amount to high-risk processing and that children’s data is inherently more sensitive, platforms must take extra precautions. AI-driven age verification is not outright prohibited, but it is crucial that companies deploying such technologies do so with full compliance in mind.
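
For teams documenting such processing, the risks and mitigations identified in a DPIA can be kept as structured records. The sketch below (Python; the fields and example content are illustrative, not a prescribed DPIA format) captures a single risk entry:

    from dataclasses import dataclass, field

    @dataclass
    class DPIARisk:
        processing: str
        risk: str
        likelihood: str  # e.g. "low" / "medium" / "high"
        severity: str
        mitigations: list[str] = field(default_factory=list)

    age_inference = DPIARisk(
        processing="ML-based age estimation from interaction data",
        risk="Misclassification restricts a 13+ user or misses an under-13 user",
        likelihood="medium",
        severity="high",  # children's data: assess severity conservatively
        mitigations=[
            "human review of borderline scores",
            "appeal route with human intervention",
            "no retention of raw interaction logs after scoring",
        ],
    )
    print(age_inference.mitigations)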

Yoti and Third-Party AI Age Verification Solutions

That is not to say that it is impossible to carry out age verification safely while using AI. One provider that has garnered attention from major platforms such as Meta and OpenAI is UK-based Yoti Ltd. Yoti is an age verification provider that also uses AI for selfie-based age estimation, and it guarantees that none of the data used for the verification is shared with the platform acting as controller. Relying on a third-party solution, especially one based in Europe that may be more attuned to GDPR restrictions and subject to more stringent requirements, could help mitigate some of the risks mentioned so far.
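
To illustrate the integration pattern only (this is not Yoti’s actual SDK or API; every name and endpoint below is hypothetical), a platform might hand the verification session to the third party and receive back nothing but a pass/fail outcome, so the selfie never reaches the platform acting as controller:

    import requests  # assumes the hypothetical provider exposes a REST endpoint

    PROVIDER_URL = "https://age-check.example.com/v1/estimate"  # placeholder

    def over_threshold(session_token: str, threshold: int = 18) -> bool:
        """Ask the third-party provider whether the user behind session_token
        appears to be at least `threshold` years old. The platform receives
        only the boolean outcome, supporting data minimisation."""
        resp = requests.post(
            PROVIDER_URL,
            json={"session": session_token, "threshold": threshold},
            timeout=10,
        )
        resp.raise_for_status()
        return bool(resp.json()["over_threshold"])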

Meta has provided no news on its use of the provider since 2023, and the results of OpenAI’s use are yet to be seen. Meanwhile, the statements from YouTube and TikTok remain vague about what exactly they mean when they say they will use AI or machine learning. Considering the past violations of the companies proposing these AI-driven solutions, it is fair to question whether they will implement them in a genuinely GDPR-compliant manner; given this history of non-compliance, skepticism remains warranted. These platforms appear to approach compliance from the enforcement point of view, rather than focusing on the protection of data subjects.

Conclusion

Failure to implement effective age assurance mechanisms in line with Article 8 of the GDPR has been a common issue, resulting in many of the largest GDPR fines issued to social media platforms over the past three years. Despite this, platforms continue to lag in their efforts to protect children’s data, even as the number of young users keeps growing.

While some governments advocate for stricter bans, platform providers are promising improved verification methods, including the use of AI to estimate users’ ages. The concept is not entirely new: TikTok already employs AI-driven age verification for its Live feature, and Meta is currently listed as a client of the UK-based age verification provider Yoti. Notably, Yoti has also been named as the provider required to verify the age of OpenAI’s users, a requirement imposed in response to the Italian DPA’s enforcement action. As concerns surrounding AI, machine learning, and data privacy remain pressing, the methodology proposed by large social media platforms remains a cause for concern for the privacy of child users.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
