TechGDPR’s review of international data-related stories from press and analytical reports.
Official guidance: new SCCs, facial recognition technology, DPOs, children’s data
The European Commission has published questions and answers for the two sets of Standard Contractual Clauses approved last year for data transfers within and outside the bloc. These Q&As are based on feedback received from various stakeholders on their experience with using the new SCCs in the first months after their adoption. Here are some of the questions addressed:
- Are there specific requirements for the signature of the SCCs by the parties?
- Can the text of the SCCs be changed?
- Is it possible to add additional clauses to the SCCs or incorporate the SCCs into a broader commercial contract?
- How does the docking clause work in practice? Are there any formal requirements for allowing new parties to accede?
- In which form should instructions by the controller be given to the processor?
- What happens if the controller objects to changes of sub-processors, where a general authorisation for the engagement of sub-processors was given?
- Are there any requirements for filling in the annexes? How detailed should the information be?
- Are any specific steps needed to comply with the Schrems II judgment when using the new SCCs? Is it still necessary to take into account the guidance of the EDPB?
- Does the data importer have to inform individuals about requests for disclosure received from a public authority? What if the data importer is prohibited from providing this information under its national law?
- Can the SCCs be used to transfer personal data to an international organisation?
For answers to these and many other questions, along with useful examples, consult the full document published by the EC.
The European Data Protection Board welcomes comments on the Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement. More and more law enforcement authorities apply, or intend to apply, facial recognition technology (FRT). It may be used to authenticate or to identify a person and can be applied to videos (eg, CCTV) or photographs. It may serve various purposes, including searching for persons on police watch lists or monitoring a person’s movements in public spaces. FRT relies on the processing of biometric data and therefore involves the processing of special categories of personal data. FRT often uses components of artificial intelligence or machine learning; while this enables large-scale data processing, it also introduces risks of discrimination and false results. FRT may be used in controlled one-to-one situations, but also on huge crowds and at major transport hubs. You can download the guidance and leave your comments here.
The French Ministry of Labour has published the results of its annual study of the data protection officer profession, carried out with the support of the data protection regulator CNIL. The survey shows the diversification of profiles and the growing importance of the DPO role, the appointment of which is compulsory in certain cases. The main findings are as follows:
- a positive professional experience: 58% are satisfied with their role and 87% are convinced of its usefulness, while 67% are strongly motivated to continue their missions;
- a diversification of profiles: 47% come from areas of expertise other than law and IT (up 12 points since 2019), for example administrative and financial profiles or those related to quality or compliance audits;
- decreasing training: a third have not taken any IT or GDPR training since 2016 (up 7 points), even though more and more of them are neither lawyers nor IT specialists.
This last observation will be examined in particular by the CNIL, which recalls the obligation of data controllers and processors who have appointed a DPO to provide them with the resources necessary to maintain specialised knowledge (Art. 38(2) of the GDPR). Read the full study, in French, here.
The Irish data protection authority DPC has produced three short guides for children on data protection and their rights under the GDPR. These guides are aimed mainly at children aged 13 and over, as this is the age at which children can begin signing up for many forms of social media on their own. Each of these short guides introduces children to a different data protection right and how to use it. These guides can be read together or separately:
- Your Data Protection Rights – full guide – is available by clicking here.
- Why are data protection rights important? – click here.
- Knowing what’s happening to your data – click here.
- Getting a copy of your data – click here.
- Getting your data deleted – click here.
- Saying ‘no’ to other people using your data – click here.
Legal processes: concept of personal data
An InsidePrivacy.com blog post looked at a recent decision by the EU General Court on whether information that does not identify an individual by name constitutes “personal data” under the GDPR. The case concerns an online press release published by the European Anti-Fraud Office (OLAF), announcing that it had determined that a Greek scientist had committed fraud using EU funds intended to finance a research project.
The press release included information about the scientist: her gender, her young age, her occupation, and her nationality. It also included a reference to the scientist’s father and the place where he works, as well as the approximate amount of the grant awarded to the scientist, the granting body, the nature of the entity hosting the project, and its geographical location. The release did not include the scientist’s name, the subject matter of the research, or the project’s name.
The scientist alleged that a reader could use the above-mentioned information to identify her using “means reasonably likely to be used”, and even explained how this could be done. However, the court decided that the scientist had not sufficiently proven this allegation. Further, the court held that the information journalists used to identify the scientist, which fell outside the press release, could not be attributed to OLAF. For the court to hold OLAF responsible, the scientist would have had to demonstrate that her identification resulted from the press release itself and not from external or additional information.
Investigations and enforcement actions: Clearview AI, Uber, unlawful use of an email address, failure to handle an access request, dummy CCTV cameras
The Information Commissioner’s Office (ICO) has fined Clearview AI Inc £7,552,800 for using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition. The ICO has also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems. The ICO found that Clearview:
- has collected more than 20 billion images of people’s faces, along with associated data, from publicly available information on the internet and social media platforms all over the world to create an online database. People were not informed that their images were being collected or used in this way;
- provides a service that allows customers, including the police, to upload an image of a person to the company’s app, which is then checked for a match against all the images in the database;
- then returns a list of images with characteristics similar to the photo provided by the customer, together with links to the websites those images came from;
- given the high number of UK internet and social media users, is likely to hold a substantial amount of data on UK residents, gathered without their knowledge.
Although Clearview no longer offers its services to UK organisations, it has customers in other countries and is therefore still using the personal data of UK residents. The ICO enforcement action comes after a joint investigation with the Office of the Australian Information Commissioner, which focused on Clearview’s use of people’s images, data scraping from the internet, and the use of biometric data for facial recognition. The French regulator CNIL is reportedly also considering a similar fine in the near future.
Meanwhile, the Italian privacy regulator ‘Garante’ sanctioned Uber for a total of 4,240,000 euros. Uber BV, with a registered office in Amsterdam, and Uber Technologies Inc, with a registered office in San Francisco, were both held responsible, as joint controllers, for violations affecting over 1.5 million Italian users, including drivers and passengers:
- Information given to users was presented in an unsuitable, unclear, and incomplete way, making it difficult to understand.
- Data processing without consent.
- Profiling users on the basis of a so-called “fraud risk”, assigning them both a qualitative judgment (eg, ‘low’) and a numerical score (from 1 to 100).
- Failure to notify the authority of the processing of data for geolocation purposes.
The violations were discovered by the ‘Garante’ during inspections carried out at Uber Italy following a data breach made public in 2017.
The security incident, which occurred before the GDPR became fully applicable, involved the data of about 57 million users around the world and was sanctioned by the Dutch and UK privacy authorities on the basis of their respective national regulations. The personal information processed by Uber included personal and contact data (name, surname, telephone number, and email), app access credentials, location data (as recorded at the time of registration), relationships with other users (trip sharing, friend referrals), and profiling information.
The Icelandic supervisory authority fined the HEI medical travel agency for unlawful use of an email address and for failing to handle an access request. The regulator found that an employee at HEI had obtained the complainant’s, and several other doctors’, email addresses by logging into the internal website of the Icelandic Medical Association using the credentials of a doctor related to the employee. HEI used the mailing list to send a targeted email to doctors, including the complainant. In determining the fine (approx. 10,700 euros), the regulator considered that even though HEI had considered itself authorised to use the list, nothing in the case proved that the company had ascertained the lawfulness of the processing.
Meanwhile, the Norwegian regulator Datatilsynet imposed a fine on an unnamed company for the automatic forwarding of an employee’s emails, Data Guidance reports. Following a dispute, the employee’s access to email and computer systems was closed, and all emails sent to the employee’s mailbox were automatically forwarded to an email address managed by the general manager for approximately six weeks. The purpose was to maintain customer relationships, but during that period the general manager handled both work-related and private emails sent to the employee’s mailbox. The regulator found that the employer had no legal basis under the GDPR for the automatic forwarding of the employee’s emails, and noted that the practice also conflicted with the applicable rules on an employer’s access to email boxes and other electronic material.
Finally, the Czech office for personal data protection UOOU published its decision on a complaint, deciding after an investigation that the installation of dummy cameras in a workplace did not violate the GDPR. The UOOU explained it had received a complaint about the installation of a camera system to monitor and control employees. It found that the camera system was not functioning but was in fact a dummy installation, and thus did not fall within the remit of the GDPR. However, the regulator suggested that the matter should be referred to the competent employment inspectorate for investigation, as it may constitute a violation of employment law regulations.
Data security: data leaks doubled due to cyber-attacks
The Dutch data protection authority AP again measured an explosive increase in the number of reports of data leaks caused by cyber-attacks, which almost doubled in 2021 compared to the previous year. In total, the AP received almost 25,000 data breach reports in 2021, 9% of which were caused by cyber-attacks, up from 5% the year before. The AP also noticed that in ransomware cases, affected organisations restore their systems first and inform the people affected only much later. As a result, the damage can grow even greater, because victims can protect themselves against the consequences only once informed.
The AP also saw that organisations that have paid a ransom to get their data back after a ransomware attack often do not inform victims about the data breach. They argue that paying the hackers prevented the personal data from being distributed further, because the hackers made commitments to that effect. However, paying a ransom does not guarantee that the hackers will actually delete the data and never sell it on. Finally, data stolen during cyber-attacks is often data that organisations collected unnecessarily or kept for too long.
As a result, “even if only names and e-mail addresses have been stolen, these data can be used in combination with previously leaked information to gain access to user accounts at, for example, banks or webshops. Criminals can also abuse this type of data to carry out new spam and phishing attacks in a very targeted manner”.
Big Tech: Clearview AI increased sales, Twitter settlement over targeted ads and user data
Clearview AI is expanding sales of its facial recognition software to companies, having previously mainly served the police, according to Reuters. Meanwhile, a number of EU regulators have accused Clearview of breaking privacy laws by collecting online images without consent, and the company this month settled with US rights activists over similar allegations. Clearview AI uses publicly available photos from social media platforms to train its tool, which the company says is highly accurate. The new private-sector offering matches people to ID photos and other data that clients collect with subjects’ permission, and is meant to verify identities for access to physical or digital spaces. Reportedly, a company selling visitor management systems to schools has signed up for Clearview services as well.
Meanwhile, the US Department of Justice reached an agreement with Twitter that includes a fine of 140 million euros and an order for the social network to better protect the privacy of personal data. Authorities accused the platform of deceiving its users from 2013 to 2019 by hiding that it was using their personal data to help companies send them targeted advertising. During that period, more than 140 million Twitter users gave phone numbers or email addresses to the US-based service to help secure their accounts with two-factor authentication, regulators said. “Twitter obtained data from users on the pretext of harnessing it for security purposes but then ended up also using the data to target users with ads,” FTC chair Lina Khan stated. Twitter also falsely claimed it complied with the EU-US and Swiss-US Privacy Shield Frameworks at the time, which barred companies from using data in ways consumers had not consented to.