TechGDPR’s review of international data-related stories from press and analytical reports.
Official Guidance: workplace conversations, use of the cloud
The Latvian data protection authority has clarified when an employee may secretly record a conversation in the workplace to protect their interests, IAPP News reports. The regulator concluded that employees can secretly audio record their employer if it is the only way to collect evidence of illegality (eg, mobbing, bossing, or other illegal activities at the workplace). However, some data protection rules still apply, because a person’s recorded voice constitutes personal data. The regulator suggests:
- submit recordings as evidence to the State Labour Inspectorate, the police, or a court;
- avoid publishing the recording on social networks or otherwise making it publicly available, including distributing it within a team;
- when audio is transferred to law enforcement, the recording must not be excessive; unrelated segments must be deleted;
- the interest in the information disclosed by a secret recording must also outweigh the individual’s right to data protection.
The Danish data protection authority Datatilsynet has published guidance on the use of the cloud (available in English). The guide contains 14 practical examples with explanations. It is targeted primarily at organizations (data controllers) that would like to start using one or more cloud services, and attempts to address the relevant elements of data protection law. However, many of the issues it addresses apply equally to most other IT service delivery models. Cloud services are usually provided as standardized offerings in which each customer organization has limited scope to tailor the service in question. Parts of the guide are therefore also addressed to cloud service providers (CSPs), who can learn how to provide their services in accordance with data protection law. The main steps for data protection when using cloud services are: a) know your services (data protection and security risk assessments), b) know your supplier (screening, data processing agreements), and c) audit the CSP and its sub-processors.
The guide also covers transfers to third countries. In this context, companies should be aware that if their European CSP, acting as a processor, complies with a request from law enforcement authorities in a third country, this is considered a personal data breach on the part of the controller, since an unauthorized disclosure of personal data to the law enforcement authority concerned will have occurred. This question of an appropriate level of security of processing arises, however, only where the use of the CSP does not otherwise involve any intended transfers of personal data to third countries, including in relation to the provider’s servicing of its infrastructure, its provision of support for your cloud service, its access to its infrastructure for capacity planning, etc.
Legal Processes and Redress: EU sanctions & whistleblowing, employee’s image rights, rules on AI
The European Commission has launched a whistleblower tool to facilitate the reporting of possible sanctions violations. The secure online platform allows whistleblowers from around the world to anonymously report EU sanctions violations. Reported information can relate to:
- facts concerning sanctions violations, their circumstances, and the individuals, companies, and third countries involved,
- facts that are not publicly known but are known to you and can cover past, ongoing, or planned sanctions violations, as well as attempts to circumvent EU sanctions.
The EU has more than 40 sanctions regimes in place, and their effectiveness relies on proper implementation and enforcement of:
- arms embargoes,
- restrictions on admission (travel bans),
- asset freezes,
- other economic measures such as restrictions on imports and exports.
The Commission is committed to protecting the identity of whistleblowers who take personal risks to report sanctions violations. If it considers that the whistleblower information it received is credible, it will share the anonymized report and any additional information gathered during the internal inquiry into the case with the national competent authorities in the relevant Member State(s). Access to the whistleblower tool is available here.
An employee can obtain damages merely because the employer delayed removing, upon request, a group photo including them from the company’s website, an L&EGlobal blog post reports. In a recent decision, the French Court of Cassation ruled that “the mere fact that an employee’s image rights have been infringed when he or she objects to the publication of his or her image gives rise to a right to compensation, without the employee having to prove any prejudice.” Other findings of the case were:
- every citizen, and every employee, has a right to the protection of his or her image (Art. 9 of the French Civil Code);
- the employee’s agreement must be obtained before any photo-taking, reproduction, or use, whatever the final medium of the image (intranet, company newspaper, internet site, promotional video, etc.);
- the agreement must be in writing and as precise as possible, indicating the purpose, the medium used, and its duration;
- the employee’s silence does not constitute tacit consent.
The Irish Council for Civil Liberties (ICCL) has informed the European Commission and the co-legislators of two errors in the proposal for harmonized rules on Artificial Intelligence in the EU, Data Guidance reports. In particular:
- A technically inaccurate reference to “validation and testing data sets” accidentally puts most machine learning techniques out of scope (eg, important AI techniques such as unsupervised and reinforcement learning do not rely on validation and testing data sets).
- The text incorrectly relies on accuracy metrics, which cannot on their own yield adequate reporting of AI systems’ performance (eg, AI systems based on unsupervised and reinforcement learning use other performance metrics; reliability is one metric used in reinforcement learning).
The two errors are unintended and can easily be corrected. However, failing to correct them will put health, safety, and fundamental rights at risk (eg, for cancer diagnosis, it is important that an AI system produces fewer false negatives than false positives, as false negatives can be fatal while false positives merely cause inconvenience). The technical errors are available here, and the AI Act proposal is here.
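The accuracy critique above can be shown with a small illustration (hypothetical labels and predictions, not data from the ICCL filing): two classifiers can report the same accuracy while differing sharply in the false negatives that matter for diagnosis.

```python
# Hypothetical illustration: two classifiers with identical accuracy
# but different false-negative counts on the same ten labels.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = condition present

pred_a = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]   # one false negative, one false positive
pred_b = [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]   # two false negatives, no false positives

def accuracy(y, p):
    # fraction of predictions that match the true labels
    return sum(t == q for t, q in zip(y, p)) / len(y)

def false_negatives(y, p):
    # cases where the condition is present but the classifier says it is not
    return sum(t == 1 and q == 0 for t, q in zip(y, p))

print(accuracy(y_true, pred_a), false_negatives(y_true, pred_a))  # 0.8 1
print(accuracy(y_true, pred_b), false_negatives(y_true, pred_b))  # 0.8 2
```

Both classifiers score 0.8 on accuracy, yet the second misses twice as many positive cases; accuracy alone cannot distinguish them.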
Investigations and Enforcement actions: ex-employee’s unauthorized access, Clearview AI ban in Italy, video surveillance footage on social media
The EDPB continues to highlight important recent data breach cases within the EU reported by national regulators. This week it looked at the ‘Santander Bank Polska’ case, in which the Polish regulator levied an administrative fine of 120,000 euros. The controller reported a data breach when it was established that a former employee of the bank, despite the termination of their employment contract, still had unauthorized access to the controller’s profile (on the Electronic Services Platform of the Social Insurance Institution) containing bank employees’ data. The Polish regulator concluded that a breach of data confidentiality had occurred, which involved a high risk to the rights and freedoms of the data subjects. Some findings from the case:
- The bank posted a message on its internal communication platform, but it was general and did not refer to the specific incident.
- The message was addressed only to those employed at the time of notification, which could leave many affected data subjects unaware.
- Given the high risk to the rights and freedoms of the data subjects, the controller should have communicated the incident to all of them (all bank employees employed during the period when the former employee had unauthorized access to the data on the platform).
Meanwhile, the Italian supervisory authority ‘Garante’ imposed a fine of 20 million euros on Clearview AI Inc. for multiple violations of the GDPR. The regulator launched proceedings of its own initiative following press reports about the facial recognition products offered by Clearview AI. Moreover, in 2021 ‘Garante’ received complaints and alerts against Clearview from organizations active in protecting the privacy and fundamental rights of individuals. The personal data held by the company, including biometric and geolocation information, was processed unlawfully, without an appropriate legal basis. The company also infringed several fundamental principles of the GDPR, such as transparency, purpose limitation, and storage limitation.
‘Garante’ imposed a ban on further collection and processing, ordered the erasure of the data, including biometric data, processed by Clearview’s facial recognition system with regard to persons in Italian territory, and ordered the designation of a representative in the EU. It is the strongest enforcement yet from a European privacy regulator, following prohibition decisions by the UK’s ICO and France’s CNIL last year. However, whether Italy will be able to collect the penalty from Clearview, a US-based entity, is one rather salient question, a TechCrunch analysis suggests.
The Croatian supervisory authority AZOP fined a retail chain 90,000 euros for failure to take appropriate technical and organizational measures (TOMs) for the processing of personal data, Data Guidance reports. AZOP received a report of alleged personal data violations at the company, stating that employees, without authorization and contrary to internal acts and instructions, had recorded video surveillance footage with their mobile devices and published it on social networks and in the media. AZOP found that:
- the company did not take adequate action to prevent its employees from capturing video surveillance images with their mobile devices;
- the company took certain organizational measures, such as employee education and the adoption of internal acts, but did not, either before or after the incident, take appropriate technical security measures that could reduce the risk of a similar violation;
- the company did not regularly monitor the implementation of TOMs aimed at ensuring the confidentiality, integrity, and availability of personal data;
- the company failed to regularly test, evaluate, and assess the effectiveness of TOMs for ensuring the security of video surveillance.
Big Tech: TikTok child privacy class action, cybersecurity firms booming, Twitter Tor version
A class-action lawsuit against TikTok, originally initiated by a 12-year-old girl, has been granted permission to proceed by the UK High Court. At its heart is the claim that the Chinese social networking giant processes children’s personal data unlawfully. The suit seeks damages on behalf of millions of children, potentially exposing TikTok to billions in fines. TikTok contests the case and insists it maintains high security standards across its platform.
With software security expected to be a booming market, more than doubling in value to 350 billion dollars by 2026, Alphabet Inc’s Google has snapped up Mandiant Inc. for 5.4 billion dollars. The cybersecurity firm has become a reference point for companies investigating cyberattacks, and Microsoft was also in the running to buy it. Analysts say all the big cloud firms will be looking to buy cybersecurity companies, as cyberattacks have spiked with home working, with the Russia–Ukraine war also driving the market for security software.
In what has been described as a tectonic shift at Twitter, the company is launching a Tor onion version of its site, with the clear aim of ensuring privacy and avoiding censorship. Software engineer Alec Muffett said, “It’s a commitment from the platform to dealing with people who use Tor in an equitable fashion.” Tor Browser will now also be listed as a supported browser on Twitter. Unlike simply accessing the regular site via Tor, the new onion service is designed specifically for the network and adds layers of protection.
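For background, an onion service of this kind is typically set up with a few lines of standard tor configuration, and the existence of the onion version can be advertised to Tor Browser users via the Onion-Location HTTP header. A minimal sketch follows; the paths, ports, and address are illustrative placeholders, not Twitter’s actual configuration.

```
# torrc on the web server host: expose the local site as a v3 onion service.
HiddenServiceDir /var/lib/tor/onion_service/   # tor generates keys and the hostname here
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:80              # map onion port 80 to the local web server

# HTTP response header on the regular site, pointing Tor Browser at the onion version:
# Onion-Location: http://<56-character-address>.onion/
```

When Tor Browser sees the Onion-Location header on the clearnet site, it offers to switch the user to the onion address, which keeps traffic inside the Tor network end to end.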