Weekly digest May 9 – 15, 2022: UK data protection reform, and dark patterns invalidating consent

TechGDPR’s review of international data-related stories from press and analytical reports.

Legal processes: UK data protection reform

Last week's Queen's Speech announced that the UK's data protection regime will be reformed through the introduction of a Data Reform Bill, dataprotectionlawhub.com reports. The Bill's purpose is to “create a trusted UK data protection framework that reduces burdens on businesses and boosts the economy.” Reportedly, the main elements of the Bill include:

  • a more flexible approach focused on privacy outcomes, replacing the “box tick exercises” required under current data protection law; 
  • public bodies will be able to share data to improve the delivery of services, while ensuring that the personal data of UK citizens is protected to a ‘gold standard’. 

Additionally, the future introduction of the Brexit Freedoms Bill will end the supremacy of European law. This would enable the Government to amend retained EU data protection law, which is currently enshrined in UK law. Taken together, these changes could undermine the EU’s adequacy decision for data flows with the UK. Read the full governmental proposal here.

Official guidance: UK AI toolkit, China cross-border processing, CNIL and EDPB’s annual wrap-ups

The UK’s ICO has presented its AI toolkit, designed to provide further practical support to organisations in reducing the risks to individuals’ rights and freedoms caused by their own AI systems. It contains: a) advice on how to interpret relevant law as it applies to AI; b) recommendations on good practice for organisations; c) technical measures to mitigate the risks to individuals that AI may cause or exacerbate; and d) an AI glossary. This guidance is not a statutory code. There is no penalty if you fail to adopt good practice recommendations, as long as you find another way to comply with the law, the ICO says.

The guidance covers both AI-specific and data-protection-specific risks, and the implications of those risks for governance and accountability. Regardless of whether you are using AI, you should have accountability measures in place. However, adopting AI applications may require you to reassess your existing governance and risk management practices, since AI can exacerbate existing risks, introduce new ones, or generally make risks more difficult to assess or manage.

Meanwhile, China issued new specifications for the cross-border processing of personal information by multinational corporations, as stipulated in the Personal Information Protection Law (PIPL). In particular, such companies must meet one of the following criteria in order to transfer personal information above a certain scale overseas:

  • Undergo a security review organized by the Cyberspace Administration of China, except where exempted by relevant laws and regulations. 
  • Undergo personal information protection certification by a professional institution in accordance with the regulations of the CAC. 
  • Sign a contract with a foreign party stipulating the rights and obligations of each party in accordance with standards set by the CAC, etc.

Personal information can include any data points or information that can be used to identify an individual, such as names, phone numbers, and IP addresses. Separately, the PIPL also defines “sensitive” personal information, which is subject to stricter protection requirements:

  • Biometric data (fingerprints, iris scans, facial recognition data, and DNA);
  • Data pertaining to religious beliefs or specific identities;
  • Medical history;
  • Financial accounts;
  • Location and whereabouts;
  • Any personal information of minors under the age of 14. 

However, this does not include data that has been anonymised, or abstract data that does not contain any specific personal information on individuals, such as aggregated information. Read the full analysis in the original publication.
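
For illustration only, the sketch below shows how a compliance team might flag data fields against these PIPL categories. The field names, category mapping and helper function are hypothetical assumptions for demonstration, not an official taxonomy or legal advice.

```python
# Illustrative only: a minimal tagger for PIPL data categories.
# The field names and category mapping are assumptions for
# demonstration, not an official PIPL taxonomy.

from dataclasses import dataclass

# Categories of "sensitive" personal information listed in the PIPL.
SENSITIVE_CATEGORIES = {
    "biometric",          # fingerprints, iris scans, face, DNA
    "religious_belief",
    "specific_identity",
    "medical_history",
    "financial_account",
    "location",
}

# Hypothetical mapping from our own field names to PIPL categories.
FIELD_CATEGORY = {
    "name": "identifier",
    "phone_number": "identifier",
    "ip_address": "identifier",
    "fingerprint_hash": "biometric",
    "gps_trace": "location",
    "bank_account": "financial_account",
}

@dataclass
class Subject:
    age: int

def is_sensitive(field: str, subject: Subject) -> bool:
    """Return True if a field falls under the PIPL's stricter regime."""
    # All personal information of minors under 14 is sensitive.
    if subject.age < 14:
        return True
    return FIELD_CATEGORY.get(field) in SENSITIVE_CATEGORIES

if __name__ == "__main__":
    adult = Subject(age=35)
    for f in ("name", "gps_trace", "bank_account"):
        print(f, "->", "sensitive" if is_sensitive(f, adult) else "ordinary")
```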

The French regulator CNIL published its 2021 activity report (in French). One of its objectives was to provide legal certainty to all professionals with regard to the GDPR. To support them, it published new sector guides and resources on its website in 2021, in particular for the voluntary sector, insurance, health and adtech. In 2021 the CNIL received 14,143 complaints and closed 12,522. It carried out 384 inspections, and the shortcomings noted during some of these investigations led to 135 formal notices and 18 penalties, entailing fines exceeding 214 million euros. 89 of the 135 formal notices concerned cookies, one of the priority themes set by the CNIL for the year.

The CNIL also carried out 30 new inspections of medical analysis laboratories, hospitals, service providers and health data brokers, in particular regarding processing operations related to the COVID-19 epidemic. Some of these procedures are still ongoing. Finally, it paid particular attention to the cybersecurity of the French web by auditing 22 organisations, 15 of them public. During its investigations, the CNIL noted obsolete cryptographic suites that left websites vulnerable to attacks, shortcomings concerning passwords and, more generally, insufficient resources devoted to current security issues.
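
By way of example, a site operator can check for the kind of obsolete configuration the CNIL flagged by probing which TLS protocol versions a server still accepts. The following is a minimal Python sketch using only the standard library; the host name is a placeholder, results depend on the local OpenSSL build, and it should only be run against servers you operate.

```python
# Illustrative only: probe which TLS protocol versions a server accepts.
# Deprecated versions (TLS 1.0/1.1) are the kind of obsolete
# configuration the CNIL flagged. Run only against hosts you operate.
# Note: a modern local OpenSSL build may itself refuse to offer legacy
# versions, which will also show up as "rejected".

import socket
import ssl

HOST = "example.com"  # placeholder; substitute your own domain

VERSIONS = {
    "TLS 1.0": ssl.TLSVersion.TLSv1,
    "TLS 1.1": ssl.TLSVersion.TLSv1_1,
    "TLS 1.2": ssl.TLSVersion.TLSv1_2,
    "TLS 1.3": ssl.TLSVersion.TLSv1_3,
}

def accepts(version: ssl.TLSVersion) -> bool:
    """Attempt a handshake pinned to a single protocol version."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                return True
    except (ssl.SSLError, OSError):
        return False

if __name__ == "__main__":
    for name, ver in VERSIONS.items():
        print(f"{name}: {'accepted' if accepts(ver) else 'rejected'}")
```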

At the same time, the EDPB presented its 2021 annual report, with a detailed overview of its work over the last year. In 2021, the EDPB adopted:

  • the final version of its recommendations on supplementary measures following the Schrems II ruling by the Court of Justice of the EU, taking on board the input received from stakeholders during public consultation; 
  • opinions on the UK draft adequacy decisions, under both the GDPR and the Law Enforcement Directive, as well as an opinion on the draft adequacy decision for the Republic of Korea; 
  • guidance documents on other international transfer tools, such as codes of conduct, and, together with the EDPS, joint opinions on the new sets of Standard Contractual Clauses issued by the European Commission for the transfer of personal data to controllers and processors established outside the EEA; 
  • guidelines and recommendations on topics such as personal data breach notifications, connected vehicles and virtual voice assistants, and much more.

In the US, the Network Advertising Initiative (NAI is the leading self-regulatory association composed exclusively of third-party digital advertising companies – ed.) issued Best Practices for User Choice and Transparency. The term “dark pattern” was coined in 2010 to refer to “tricks used in websites and apps that make you do things you didn’t mean to do, like buying or signing up for something.” Such practices, also sometimes referred to as “deceptive patterns” or “manipulative designs,” can be dynamic and multifaceted, spanning a series of tactics and specific design choices in apps and on websites. The guide is intended to help member companies better understand dark patterns and implement the highlighted best practices to avoid them, namely:

  • to examine the current legal environment at the state and federal levels (FTC Act, CCPA and CPRA, Colorado Privacy Act, and the GDPR); and 
  • to identify best practices and guide companies in maximizing effective and efficient notice and choice mechanisms with respect to collecting consumer data (Notice and Choice, Exercising Consumer Requests, User Interface considerations).

With regard to the GDPR, the NAI quotes the French CNIL, which asserts that “the fact of using and abusing a strategy to divert attention or dark patterns can lead to invalidating consent.” Furthermore, in March 2022, the EDPB released its own guidelines on the use of dark patterns in social media platform interfaces, open for public consultation.
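
To make the point concrete, here is a minimal Python sketch of how a consent record could be screened for red flags commonly associated with dark patterns, such as pre-ticked boxes or an asymmetric accept/reject path. The record shape and rules are illustrative assumptions, not the NAI's, CNIL's or EDPB's actual criteria.

```python
# Illustrative only: sanity-check a consent record for red flags that
# regulators associate with dark patterns (e.g. pre-ticked boxes,
# consent bundled with unrelated terms, a harder "reject" path).
# The record shape and rules are assumptions for demonstration.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    purpose: str
    opted_in_by_default: bool   # pre-ticked box or default opt-in
    accept_clicks: int          # clicks needed to accept
    reject_clicks: int          # clicks needed to refuse
    bundled_with_terms: bool    # consent tied to accepting the ToS

def red_flags(c: ConsentRecord) -> list[str]:
    """Return a list of dark-pattern red flags found in the record."""
    flags = []
    if c.opted_in_by_default:
        flags.append("default opt-in: consent must be an affirmative act")
    if c.reject_clicks > c.accept_clicks:
        flags.append("refusing is harder than accepting (asymmetric choice)")
    if c.bundled_with_terms:
        flags.append("consent bundled with terms: not freely given")
    return flags

if __name__ == "__main__":
    record = ConsentRecord(
        purpose="personalised ads",
        opted_in_by_default=True,
        accept_clicks=1,
        reject_clicks=3,
        bundled_with_terms=False,
    )
    for flag in red_flags(record):
        print("red flag:", flag)
```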

Investigations and enforcement actions: IAB Europe case, IKEA Canada internal threat, whistleblowing, a community of owners

IAB Europe (the European-level association for the digital marketing and advertising ecosystem – ed.) withdrew its request for suspension of the execution of the decision issued by the Belgian Data Protection Authority (APD) on the Transparency & Consent Framework (TCF). The request for suspension had been submitted as part of the appeal lodged with the Belgian Market Court on 4 March. The withdrawal coincides with confirmation that the APD will not decide on validation of the action plan submitted by IAB Europe to rectify alleged EU GDPR violations connected with the TCF before 1 September, the date by which the Market Court is expected to have ruled on the appeal.

IKEA Canada reportedly confirmed a data breach involving the personal information of approximately 95,000 customers. The furniture retailer notified Canada’s privacy regulator, saying that some customers’ personal information had appeared in the results of a “generic search” made by an employee of IKEA Canada between March 1 and March 3 using IKEA’s customer database, but that no financial or banking information was involved in the breach. In a letter sent to affected customers, IKEA Canada said that the data that may have been compromised included customer names, email addresses, phone numbers and postal codes. Customers’ IKEA Family loyalty program numbers may also have been visible. The company said it has already made changes to reinforce its internal policies and that no action is needed from customers.

The Italian privacy regulator ‘Garante’ fined ISWEB and Perugia Hospital 40,000 euros each for GDPR violations in relation to a whistleblowing system, following an ex officio investigation, Data Guidance reports. ISWEB is an IT company that provides and manages the whistleblowing application used by numerous clients, including Perugia Hospital. The ‘Garante’ found that ISWEB had failed to regulate its relationship with its hosting service provider, noting that ISWEB had engaged the hosting provider both for processing carried out in its capacity as a data controller and for processing carried out in its capacity as a data processor on behalf of its clients, including the hospital. The ‘Garante’ noted that the factors taken into account in determining the administrative fine were: a) the nature, subject, and purpose of the processing; b) the high degree of confidentiality required by sector regulations in relation to the identity of data subjects in whistleblowing cases; c) the fact that no whistleblowing reports were available in the system at the time of the investigation; and d) the fact that ISWEB had not regulated its relationship with the hosting service provider in any way.

At the same time, the Spanish data protection authority imposed a fine of 500 euros on a community of property owners. In particular, the decision states that the presidency of the community of owners had posted a list of debtors, which included the claimant, on three community bulletin boards. Moreover, the decision noted that the bulletin boards are located inside the building entrances and that, while all the boards are locked, they are exposed to viewing by third parties from outside the community.

Data security: cybersecurity for regulated industries

EU countries and lawmakers agreed last week to tougher cybersecurity rules for regulated industries, such as energy, transport and financial firms, digital providers and medical device makers, amid concerns about cyberattacks by state actors and other malicious players. The rules fall under the scope of the NIS 2 Directive, proposed by the Commission in December 2020. Medium and large companies will be required to assess their cybersecurity risk, notify authorities of incidents and take technical and organisational measures to counter the risks, with fines of up to 2% of global turnover for non-compliance. EU countries and the EU cybersecurity agency ENISA will also be able to assess the risks of critical supply chains under the rules.

The political agreement reached by the European Parliament and the Council is now subject to formal approval by the two co-legislators. The Directive will enter into force 20 days after publication in the Official Journal, and Member States will then have 21 months to transpose its new elements into national law.

Big Tech: Twitter’s ‘Data Dash’ game, Clearview AI settlement and future fine, EU biometrics, Zoom’s user emotion detection 

Twitter has rolled out a new web video game to make it easier for users to understand its privacy policy, TechCrunch reports. The goal of the game, called Data Dash, is to educate people about the information Twitter collects, how that information is used and what controls users have over it: “Once you start the game, you’ll be asked to pick the language in which you would like to play. After that, you’ll have the option to select a character. The game is played by helping a dog, named Data, safely navigate ‘PrivaCity’ by dodging ads, steering clear of spammy DMs and avoiding Twitter trolls.”

According to Reuters, France’s data privacy regulator is about to begin the process of fining US-based Clearview AI, a facial recognition company it had ordered to stop amassing data on people based in the country. The start of a formal penalty process would indicate that the CNIL suspects Clearview of failing to comply with its order within the two-month deadline it had set.

Meanwhile, under a settlement filed in an Illinois state court in Chicago, Clearview AI will stop granting paid or free access to its database to most local private businesses and individuals, as well as police. However, Clearview AI, based in New York, can still work with federal government agencies, including immigration authorities, as well as with state government agencies outside Illinois. The case, brought by the American Civil Liberties Union in 2020, alleged that Clearview AI had repeatedly violated the Illinois Biometric Information Privacy Act by scraping photos from the internet, including from social media platforms, Reuters reports.

The European Digital Rights group and 52 other organisations have called for a ban on remote biometric identification systems in public spaces, Biometric Update and IAPP News report. They described technologies such as facial recognition as one of the greatest threats to fundamental rights and democracy, destroying the possibility of anonymity in public, and called for amendments to Article 5(1)(d) of the AI Act to extend the scope of the prohibition to cover private as well as public actors.

And nearly 30 civil society groups wrote a letter to Zoom’s CEO calling on the company to cease the use of software that detects users’ emotions, The Hill and IAPP News report. The letter came in response to reports that Zoom has begun rolling out post-meeting sentiment analysis for hosts: “Facial expressions are incredibly variable from culture to culture and nation to nation, making creating an algorithm that can judge them equally difficult.” The groups also launched an online petition demanding that Zoom drop the technology.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
