Data protection & privacy digest 12 – 24 October 2022: first GDPR certification seal, test databases, password management

TechGDPR’s review of international data-related stories from press and analytical reports.

Official guidance: first European data protection seal, GDPR harmonisation rules, data breach notification, children’s data protection, artistic and literary works

The EDPB approved the very first GDPR certification seal (see the detailed opinion here). Europrivacy became the first certification mechanism recognised as demonstrating compliance with the GDPR. It was developed through the European Research Programme Horizon 2020 and is continuously updated by the European Centre for Certification and Privacy in Luxembourg and its International Board of Experts. Companies and services can use the certification scheme to increase the value of their businesses and trust in their services. They can use Europrivacy to:

  • assess the compliance of their data processing activities,
  • select data processors,
  • assess the adequacy of cross-border data transfers,
  • assure citizens and clients of the adequate processing of their data.

The scheme applies to a wide variety of data processing activities while taking into account sector-specific obligations and risks, such as AI, IoT, blockchain, automated vehicles, smart cities, etc. It is supported by a ledger-based registry of certificates for authenticating delivered certificates and preventing forgery. The certification scheme uses an innovative format for its criteria that is both human- and machine-readable. It is also aligned with ISO standards and can easily be combined with certification of information security management systems (ISO/IEC 27001).
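The registry design itself is only described at a high level. As a minimal, purely hypothetical sketch (assuming a simple hash-anchored registry, not the actual Europrivacy implementation; all field names and values are illustrative), the Python snippet below shows how recomputing a certificate's fingerprint against a registry entry can reveal forgery:

```python
# Hypothetical sketch: how a ledger-based certificate registry could detect forgery.
# Field names and values are illustrative assumptions, not the actual Europrivacy format.
import hashlib
import json

def certificate_fingerprint(cert: dict) -> str:
    """Return a stable SHA-256 fingerprint of a certificate record."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A certificate as delivered to the certified organisation (illustrative fields).
delivered = {
    "certificate_id": "EP-2022-0001",
    "organisation": "Example GmbH",
    "scope": "customer data processing",
    "issued": "2022-10-12",
    "expires": "2025-10-12",
}

# The fingerprint the certification body would anchor in its registry at issuance.
registry_entry = certificate_fingerprint(delivered)

# Anyone later presented with a certificate can recompute the fingerprint and
# compare it with the registry entry: any tampering changes the hash.
forged = dict(delivered, expires="2030-10-12")
print(certificate_fingerprint(delivered) == registry_entry)  # True
print(certificate_fingerprint(forged) == registry_entry)     # False
```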

The EDPB is also asking the European Commission for clarification and harmonisation of rules on procedures that still differ in each European Member State. This includes clarity about the rights of people making a complaint, criteria for handling complaints, the scope and nature of the documents that must be shared in complex investigations, deadlines for handling cases, how to close cases, investigative powers, and the publication of decisions. Additionally, complaints can sometimes be resolved in a non-contentious way, for example after the intervention of the supervisory authority (SA) has facilitated the exercise of a data subject’s rights. However, the current lack of harmonisation regarding amicable settlements creates challenges.

To support children, their parents, and educators in the digital world, the French regulator CNIL provides practical sheets, games, and videos in clear and straightforward language (in French only). These include a digital vocabulary for children explaining what terms like IP address, cookies, or paywall mean, and also teach children the right reflexes when, for example, subscribing to a social network (“TacoTac”), downloading online games on parents’ devices, sharing “funny” images or videos of people online, and much more.

Latvia’s data protection authority DVI explains the principles of data processing within artistic and literary expression, as a creator’s final work may contain other people’s data. When evaluating the result of their work and before making it available to the general public, an artist or writer must be satisfied that:

  • the work was created within the framework of the artist’s right to freedom of speech and expression;
  • the right to privacy and data protection of the natural persons whose data is included in the artistic or literary work is not threatened;
  • the work does not threaten interests of the data subject that outweigh the public’s interest in seeing the creation;
  • works (e.g. photos) in which natural persons are depicted offensively, or which may cause personal injury, moral or other harm and thereby infringe that person’s right to privacy, are not published;
  • if the natural persons involved are informed about the planned purpose, it is expressed clearly, without hidden intentions.

The EDPB is seeking public comments on updated guidelines on personal data breach notification under the GDPR. Back in 2017, the Article 29 Working Party adopted the original guidelines, which were later endorsed by the EDPB; the new document is a slightly updated version of those guidelines. In particular, the EDPB noticed a need to clarify the notification requirements concerning personal data breaches at non-EU establishments, and the paragraph concerning this matter has been revised and updated. Any reference to the WP29 Guidelines on personal data breach notification should, from now on, be interpreted as a reference to these EDPB Guidelines.

Legal processes: test databases, MiCA draft regulation, bank AML monitoring, debt information collection

The CJEU delivered a judgment on the retention and purpose limitation principles: the creation and prolonged retention of a database used to carry out tests and correct errors, and the compatibility of such processing with the purposes of the initial collection. The request was made in proceedings between ‘Digi’, one of Hungary’s main internet and television providers, and the country’s data protection regulator NAIH, concerning a breach of a Digi test database (by an ethical hacker). Digi had not deleted the test database, with the result that a large amount of personal data had been stored without any purpose for almost 18 months. However, the data copied into the test database had been lawfully collected to conclude and perform the subscription contracts. At the request of the Budapest High Court, the CJEU clarified that:

  • Processing data in a database set up for testing and error correction does not depart from the legitimate expectations of those customers as regards the further use of their data, since such errors are liable to be harmful to the provision of the contractually agreed service.
  • It is not apparent that all or part of that data was sensitive or that the subsequent processing had harmful consequences for subscribers or was not accompanied by appropriate safeguards.
  • At the same time, a database created for testing and correcting errors should not be kept for a period exceeding what is necessary to carry out those tests and to correct those errors. 
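The judgment does not prescribe any particular technical design, but as a minimal sketch (table and column names and the 30-day test window are assumptions chosen for illustration), the Python example below shows one way to bind a test copy to an explicit purpose and deletion deadline so it is not kept longer than the tests require:

```python
# Illustrative sketch only: a test database copy with a recorded purpose and
# delete-by date. Table and column names, and the 30-day window, are assumptions.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed test-and-error-correction window

def create_test_copy(source: sqlite3.Connection, path: str) -> sqlite3.Connection:
    """Copy subscriber rows into a test database and record a delete-by date."""
    test = sqlite3.connect(path)
    test.execute("CREATE TABLE subscribers (id INTEGER, email TEXT)")
    test.executemany(
        "INSERT INTO subscribers VALUES (?, ?)",
        source.execute("SELECT id, email FROM subscribers"),
    )
    delete_by = (datetime.now(timezone.utc) + RETENTION).isoformat()
    test.execute("CREATE TABLE retention (purpose TEXT, delete_by TEXT)")
    test.execute("INSERT INTO retention VALUES (?, ?)", ("error correction", delete_by))
    test.commit()
    return test

def is_expired(test: sqlite3.Connection) -> bool:
    """True once the recorded delete-by date has passed; the copy should then be erased."""
    (delete_by,) = test.execute("SELECT delete_by FROM retention").fetchone()
    return datetime.now(timezone.utc) >= datetime.fromisoformat(delete_by)
```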

The final text proposal for a Regulation on Markets in Crypto-assets (MiCA) has been endorsed by the European Council, and now awaits formal approval in the European Parliament. MiCA attempts to provide a harmonised framework for the protection of holders of digital assets, including their data. Currently some crypto-assets fall outside of the scope of EU financial services legislation. There are no rules, other than AML rules, for services related to these unregulated crypto-assets, including for the operation of trading platforms for crypto-assets, the service of exchanging crypto-assets for funds or other crypto-assets, or the custody of crypto-assets. The lack of such rules leaves holders exposed to risks, in particular in areas not covered by consumer protection rules. 

The proposed regulation states that the issuing, offering, or seeking admission to trading of crypto-assets and the provision of crypto-asset services could involve the processing of personal data. Any processing of personal data under this regulation should be carried out in accordance with applicable Union law on the protection of personal data. Furthermore, crypto-assets shall not be considered to be offered for free where purchasers are required to provide, or to undertake to provide, personal data to the offeror. Also, regarding the transfer of personal data to a third country, the European Banking Authority shall apply Regulation 2018/1725 (‘on the protection of natural persons concerning the processing of personal data by the Union institutions’).

The Dutch data protection authority (AP) is concerned that a new anti-money laundering law opens the door to unprecedented mass surveillance by banks. Part of the proposal is to monitor all bank transactions of all Dutch account holders in one centralized database, using algorithms. In addition, banks must start exchanging customer data with each other. In many cases this monitoring could be outsourced to a third party running the algorithms. Combined, the risks associated with this system are disproportionate to the purpose of the bill, the AP believes. For instance, the system could lead to people wrongly losing access to their bank accounts altogether. Banks are already required to carry out individual checks on people or companies that may be laundering money or financing terrorism, and they must report unusual transactions to the authorities.

The Norwegian data protection authority Datatilsynet responded to the government’s proposal to extend the debt information scheme to also include mortgage-secured debt. The regulator recognizes that banks and other creditors need to process information about existing mortgages and car loans in connection with the assessment of a loan application. However, the proposal conflicts with the data minimisation principle, states Datatilsynet. Banks and other credit institutions already have access to information about mortgages and car loans. It appears that the real purpose of the proposed extension of the debt information scheme is to make the creditors’ collection of information about mortgage-secured debt more efficient. This needs to be done in a more privacy-friendly way, and the regulator also points out that citizens’ debt information is attractive for both public and commercial actors, increasing the risk of purpose slippage.

Investigations and enforcement actions: lost DSAR, generic responses to DSARs, whistleblowing reports management, Clearview AI fine, Zoetop data leak

The Italian privacy regulator Garante fined BPER Banca 10,000 euros for violating Articles 12 and 17 of the GDPR. The complainant asked the bank, via email, to delete his professional account from a job application database. The company acknowledged the email and asked him to repeat the request accompanied by identity documents, which the bank duly received at the same email address. However, this last communication was not followed by any effective action by the unit in charge (the HR planning and development service), due to an internal misunderstanding: changes in the company’s email system generated problems in communication flows between the various corporate functions. The account deletion request was finally fulfilled only after the complainant’s lawyer sent a registered letter claiming alleged pecuniary and non-pecuniary damage caused by the failure to delete the account. The company noted, however, that some of the applicant’s data would still need to be processed for administrative, accounting, operational, and organisational reasons, and that other statutory retention periods would apply in the event of litigation or administrative/judicial proceedings.

Garante also imposed a 10,000 euro fine on Clio S.r.l. for violating Articles 5, 6, and 30 of the GDPR, in connection with similar decisions issued against the Municipality of Ginosa and Acqua Novara.VCO, Data Guidance reports. Clio supplies and manages, on behalf of various public and private entities, an application used for the acquisition and management of whistleblowing reports. Garante found that Clio had failed to regulate its relationships with various customers, who acted as data controllers, as a result of which Clio had carried out data processing activities without an appropriate legal basis. In addition, Clio had failed to keep a register of the processing activities carried out on behalf of the data controllers. Garante nevertheless noted Clio’s collaborative behaviour in the course of the investigation.

The Croatian data protection authority AZOP recently issued a negative statement on a telecoms provider’s generic response to a data subject access request (in this case, about the location of stored data). The complainant received a generic notice listing the categories of data collected along with the legal bases, and was told that any information on the processing of data collected with his consent could only be obtained from the point of sale. Since the applicant was not satisfied with the generic answer, he repeated his inquiry on the same day in greater detail, specifically asking where his data was stored, but received no answer from the company.

The French regulator CNIL imposed a penalty of 20 million euros (the maximum financial penalty under Art. 83 of the GDPR) on CLEARVIEW AI and ordered the company to stop collecting and using, without any legal basis, the data of people in France and to delete the data already collected. CLEARVIEW had previously been given two months to comply with the formal notice and to justify its compliance to the CNIL, but it did not provide any response. CLEARVIEW scrapes photographs from a wide range of websites, including social media, that can be consulted without logging into an account, and extracts accessible images and videos from distribution platforms. Through this collection, CLEARVIEW creates, expands, and markets access to a search engine in which an individual can be looked up using images. The company offers this service to law enforcement agencies. CLEARVIEW boss Hoan Ton-That told the media that his company had no clients or premises in France and was not subject to EU privacy law, adding that his firm collected “public data from the open internet” and complied with all standards of privacy.

The New York Attorney General secured 1.9 million dollars from e-commerce retailer Zoetop (owner of SHEIN and ROMWE) for failing to properly handle a data breach that compromised the personal information of tens of millions of consumers. Zoetop was targeted in a cyberattack in which 39 million SHEIN account credentials were stolen worldwide, including the credentials of more than 375,000 New York residents. Attackers took credit card information and personal information, including names, email addresses, and hashed account passwords. Zoetop did not detect the intrusion and was only later notified by its payment processor that its systems appeared to have been compromised. Zoetop also falsely represented that it had seen no evidence that credit card information had been taken from its systems.

Data security: data breaches, software support practices, password management

The Latvian data protection authority DVI published a quick reminder on what constitutes a data breach and how to report it. Breaches can be classified according to three well-known information security principles:

  • Confidentiality incident (hackers have found a security “hole” in the organisation’s information system and retrieved customers’ personal data).
  • Integrity incident (due to an incorrectly constructed SQL query, the integrity of records in a customer database stored in the cloud has been lost; as a result, new records are assigned to the wrong reference fields and one customer’s information is attributed to another).
  • Availability incident (due to an incorrect backup policy, the existing database is overwritten with a six-month-old backup copy, with no possibility of restoring a more current version of the database).

An organisation must therefore have developed and implemented an internal procedure for determining whether a breach has occurred, as well as a procedure for assessing the resulting risks. If it is determined that the breach is likely to pose a risk to the rights and freedoms of natural persons, the organisation must notify the supervisory authority within 72 hours; if notification takes place later, the reasons for the delay must be explained. Finally, the causes of the breach must be thoroughly investigated and measures taken to prevent repeated breaches in the future.
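For illustration only (a toy helper, not DVI guidance; the risk assessment itself always requires case-by-case judgment), here is a short Python sketch of how an internal procedure might record the incident type and track the 72-hour deadline:

```python
# Minimal sketch of an internal breach-handling helper, using the three incident
# types above and the GDPR's 72-hour notification window. Illustrative only.
from datetime import datetime, timedelta, timezone
from enum import Enum

class BreachType(Enum):
    CONFIDENTIALITY = "confidentiality"  # data disclosed or accessed without authorisation
    INTEGRITY = "integrity"              # data altered or attributed incorrectly
    AVAILABILITY = "availability"        # data lost or made inaccessible

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """The supervisory authority must be notified within 72 hours of awareness."""
    return aware_at + NOTIFICATION_WINDOW

def must_notify_authority(likely_risk_to_individuals: bool) -> bool:
    """Notify unless the breach is unlikely to pose a risk to rights and freedoms."""
    return likely_risk_to_individuals

aware = datetime(2022, 10, 14, 9, 0, tzinfo=timezone.utc)
if must_notify_authority(likely_risk_to_individuals=True):
    print(BreachType.INTEGRITY.value, "breach; notify by", notification_deadline(aware))
```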

Privacy International looked into the software support practices for five of the most popular categories of smart devices (smartphones, personal computers, gaming consoles, tablets, and smart TVs) and concluded that they fail to meet the expectations of the vast majority of consumers. Most EU consumers surveyed expect their connected devices to receive security updates for much longer than manufacturers currently offer, and in many cases software updates, including security updates, are provided for a period shorter than the product’s expected life cycle. When it comes to the accessibility of information, only a few companies appeared to have detailed policies online. It is therefore critical that software remains up to date for a long time to keep a device secure and reduce risks to consumers’ privacy and security, stated PI.

In the context of increasing compromises of password databases, the French CNIL has updated its recommendation to take into account the evolution of knowledge and to allow organisations to guarantee a minimum level of security for this authentication method. According to a 2021 Verizon study, 81% of global data breach notifications are related to a password issue. In France, about 60% of the notifications received by the CNIL since the beginning of 2021 are related to hacking, and a large number could have been avoided by following good password practices or using stronger measures such as two-factor authentication or electronic certificates.

If password management operations are entrusted, in whole or in part, to a processor, roles and responsibilities must be precisely defined and formalised, and the required level of security and the security objectives assigned to the processor must be clearly set out, taking into account the nature of the processing and the risks it is likely to generate. Finally, while software publishers themselves are not necessarily subject to the data protection framework, the organisations using their products must comply; the documentation of password management software should therefore specify in detail how passwords are generated, stored, and transmitted.
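As a minimal Python sketch of the kind of practices such recommendations generally point to (the parameters below are illustrative assumptions, not CNIL-prescribed values), passwords can be generated with a cryptographically secure generator and stored only as salted, computation-hard hashes:

```python
# Illustrative password-handling sketch: CSPRNG generation and salted scrypt
# hashing. The scrypt parameters below are assumptions, not mandated values.
import hashlib
import os
import secrets

def generate_password(length: int = 16) -> str:
    """Generate a random password using a cryptographically secure RNG."""
    return secrets.token_urlsafe(length)

def hash_password(password: str) -> tuple:
    """Return (salt, hash) using scrypt; the plaintext password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash and compare it in constant time."""
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return secrets.compare_digest(digest, expected)

pw = generate_password()
salt, stored = hash_password(pw)
print(verify_password(pw, salt, stored))        # True
print(verify_password("wrong", salt, stored))   # False
```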

Big Tech: human behaviour that leads to data breaches, Australia data leaks, Meta’s Pixel tracking tool, AI hiring tools, speech to identify mental health problems

London-based cybersecurity company OutThink has raised 10 million dollars in early-stage investments as it looks to help organisations identify human behaviour that can lead to data breaches. The company, which claims human behaviour is the source of 91% of data breaches, uses machine learning, natural language processing, and applied psychology to identify, understand and manage the attitudes, intentions, and sentiments of individuals.

Australia envisages increased penalties for data breaches following major cyberattacks. Australia’s telco, financial, and government sectors have been on high alert since Optus, the country’s second-largest telco, disclosed a hack in which personal data from up to 10 million accounts was stolen. The attack was followed by a data breach at health insurer Medibank Private, which covers one-sixth of Australians; the compromised data included medical diagnoses and procedures. Australia’s Woolworths Group also said its online retailer MyDeal had identified that a “compromised user credential” was used to access its systems, exposing the data of nearly 2.2 million users, Reuters reports.

At least 47 proposed class actions have been filed since February claiming that Meta Platforms Inc.’s Pixel tracking tool sent the plaintiffs’ video consumption data from online platforms to Facebook without their consent, in violation of the federal Video Privacy Protection Act, a Bloomberg Law analysis of court dockets found. Almost half of the new cases were filed in September alone. The complaints allege that the platforms knowingly disclosed protected information by allowing Meta’s embedded Pixel code to share a digital subscriber’s viewing activity and unique Facebook ID with the social media company.

AI hiring tools do not reduce bias or improve diversity, Cambridge University researchers say in a study of the evolving technique, which the BBC, reporting on the study, called “pseudoscience”. In particular, one of the research team argues, these tools cannot be trained to identify only job-related characteristics and strip gender and race out of the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race. Some companies have also found these tools problematic, the study notes: a German public broadcaster, for instance, found that wearing glasses or a headscarf in a video changed a candidate’s scores.

Finally, software that analyses snippets of your speech to identify mental health problems is rapidly making its way into call centers, medical clinics, and telehealth platforms, putting privacy activists on alert, according to Axios. Unlike Siri and Alexa, vocal biomarker systems analyse how you talk (prosody, pauses, intonation, pitch, etc.) rather than what you say. While the voice sample is run through a machine-learning model that uses a large database of anonymised voices for comparison, such systems may increase systemic biases against people from specific regions or backgrounds, or with a particular accent.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
