TechGDPR’s review of international data-related stories from press and analytical reports.
Legal processes: new EU-US data transfer deal, Digital Markets Act, China’s algorithmic rules
The EU and US have announced an agreement in principle on a new data transfer deal, seeking to end the legal uncertainty in which thousands of companies found themselves after the CJEU threw out two previous agreements over America’s governmental surveillance practices, Reuters reports. It will take months to turn the provisional agreement into a final legal deal: the US must first prepare its executive order, and the EU must then complete internal consultations in the Commission and within the EDPB. So far the White House has released a fact sheet on the new deal, which addresses the CJEU ‘Schrems II’ decision concerning US law governing signals intelligence activities:
- Signals intelligence collection may be undertaken only where necessary to advance legitimate national security objectives, and must not disproportionately impact the protection of individual privacy and civil liberties;
- EU individuals may seek redress from a new multi-layer redress mechanism that includes an independent Data Protection Review Court that would consist of individuals chosen from outside the US Government who would have full authority to adjudicate claims and direct remedial measures as needed; and
- US intelligence agencies will adopt procedures to ensure effective oversight of new privacy and civil liberties standards.
Earlier last week, EU privacy experts raised concerns over the lack of detail in the deal. Austrian privacy activist Max Schrems, who started a long-running dispute with Meta/Facebook (resulting in the invalidation of the EU-US Privacy Shield data transfer framework), stated: “The final text will need more time, once this arrives we will analyze it in-depth, together with our US legal experts. If it is not in line with EU law, we or another group will likely challenge it.” The legal uncertainty over transatlantic data flows has led, in recent months, to European data protection agencies issuing orders against flows of personal data via products such as Google Analytics, Google Fonts, and Stripe, along with long-standing and multilayered complaints against Meta/Facebook, TechCrunch sums up.
Meanwhile, sweeping new digital rules targeting US tech giants will likely come into force in October, EU antitrust chief Margrethe Vestager said. The rules, proposed a year ago in the Digital Markets Act, set out a list of dos and don’ts for Amazon, Apple, Meta, Google, Microsoft, and others. Fines for violations will reportedly reach up to 10% of a company’s annual global turnover, and up to 20% for repeat offenders, who could also face an acquisition ban. Companies designated as online gatekeepers (intermediation services, social networks, search engines, operating systems, advertising services, cloud computing, video-sharing services, web browsers, and virtual assistants), which control access to their platforms and the data generated there, will have six months to comply with the new rules:
- additional requirements on the use of data for targeted or micro-targeted advertising and on the interoperability of services;
- an option for users to uninstall pre-installed software applications (apps) on a core platform service at any stage;
- protections allowing whistleblowers to alert competent authorities to actual or potential infringements of the regulation without fear of retaliation, etc.
In China, the Provisions on the Administration of Algorithmic Recommendations in Internet Information Services became effective in March, the Chinalawupdate blog reports. The rules cover the application of any algorithmic technology (including, without limitation, generation and synthesis, individualized push, sorting and selection, searching and filtering, and scheduling and decision-making) to provide information to users. Among many provisions, they require:
- algorithmic system and mechanism review, science and technology ethics review,
- user registration, information release review, data security protection,
- anti-telecom network fraud, security evaluation, monitoring, and incident emergency plan,
- informing users that an algorithmic recommendation service is provided, and notifying the public, in an appropriate manner, of its basic principles, purpose and intention, and main operating mechanism,
- providing users with options that are not customized based on the users’ individual characteristics, or the option to conveniently close the algorithmic recommendation service, etc.
Official guidance: workplace monitoring
The Norwegian data protection authority Datatilsynet has issued workplace monitoring guidance (in Norwegian). Such activities must take into account important data protection criteria, such as informing jobseekers and employees about the processing, facilitating data subject rights, deleting the information when it is no longer necessary, and maintaining satisfactory information security and internal control of the data. In one of the examples, automatic forwarding of e-mails is considered continuous monitoring of the employee’s use of electronic equipment. Such monitoring is generally prohibited and may only take place exceptionally, where the purpose is to administer the company’s computer network or to detect or resolve security breaches in the network. The guide also covers background checks during the recruitment process, access to e-mail and other electronically stored material, and camera surveillance in the workplace.
Data breaches and enforcement actions: online retailer, third party provider, school’s trade union, insurance company
CafePress, an American online retailer of stock and user-customized on-demand products, is to pay half a million dollars to settle FTC charges, DLA Piper reports. The online platform failed to secure consumers’ sensitive personal data collected through its website and covered up a major breach. Its failures included:
- Storing personal information in clear, readable text.
- Maintaining lax password policies that allowed, for example, users to select the same word, including common dictionary words, as both the password and user ID.
- Failing to log sufficient information to adequately assess cybersecurity events.
- Failing to comply with existing written security policies.
- Failing to implement patch policies and procedures.
- Storing personal information indefinitely without a business need to do so, etc.
In 2019, a major data breach exposed millions of emails and passwords, addresses, security questions and answers, as well as a smaller number of social security numbers, partial payment card numbers, and expiration dates from customer accounts. This information was later discovered for sale on the dark web. The company patched the vulnerability but allegedly failed to properly investigate the breach and notify the affected customers. Read more analysis of the case in the Workplace Privacy Report.
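The first two failings above, storing credentials in clear text and weak password policies, are avoided by storing only a salted, memory-hard hash of each password. A minimal sketch in Python (function names are illustrative, not taken from the case):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted scrypt hash; the plain-text password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest  # store the salt alongside the hash

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Because each record carries its own random salt, identical passwords yield different stored values, and a leaked database cannot be reversed with a simple lookup table.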
The US authentication firm Okta has admitted that hundreds of customers may have been impacted by a prolific hacking group’s attack via a third-party provider, Infosecurity Magazine reports. The Lapsus ransom group shared screenshots in January which purportedly showed “superuser” access to an internal Okta desktop. The attackers had access to a third-party support engineer’s laptop for a five-day window. Okta initially said the matter with the sub-contractor had been investigated and contained, BBC reports. So far, none of Okta’s clients, such as Cloudflare, FedEx, and Thanet, have reported any issues.
Cyprus’s data protection commissioner fined an English school 4,000 euros for failing to implement sufficient technical and organisational security measures to prevent a data breach, Data Guidance reports. The investigation concerned the unauthorized access to and use of the email addresses of students’ parents and guardians by the school’s staff union, ESSA. In particular, a school professor who was also the president of ESSA sent an email to all parents/guardians and to the staff for purposes other than those for which the email addresses were originally collected, and without the parents/guardians being informed of such use. The regulator ruled that, irrespective of the responsibility of the professor and ESSA, the school, as a data controller, did not apply sufficient security measures under Art. 32 of the GDPR. ESSA, as a separate joint controller, was also fined 5,000 euros.
The Icelandic data protection authority ruled in a case concerning an insurance company’s processing of personal data following a claim for compensation. The complaints concerned the insurance company’s disclosure of the plaintiff’s personal data to an expert who prepared a report on the speed and impact of a traffic incident the plaintiff had been involved in, as well as the company’s use of that report when assessing the claim for compensation. The plaintiff contended that the insurance company was not authorized to administer the further use of the report data, and that it did not take care to inform the individuals or obtain their consent. The data protection authority concluded that the processing activities themselves were in accordance with the law, based in particular on a contract (Art. 28 of the GDPR). However, since the complainant was not informed about the transfer of the data to the specialist and its processing, the regulator found that the company did not comply with its information and transparency obligations (Art. 13 of the GDPR).
Data security: pseudonymisation in the health sector
The European Union Agency for Cybersecurity (ENISA) has published guidance on deploying pseudonymisation techniques in the health sector. From a cybersecurity point of view, the confidentiality, availability, and integrity of medical data and relevant infrastructure are considered essential in order to provide timely, appropriate, and uninterrupted medical care. This is also highlighted by the NIS Directive, which categorizes the health sector as an operator of essential services and calls for minimum security requirements to ensure a level of security appropriate to the risks presented. Furthermore, the GDPR distinguishes, in Art. 9, data concerning health as a special category of data, and sets out additional requirements and stricter obligations for processing and protecting such data. Lastly, the Medical Devices Regulation imposes requirements regarding the safety, quality, and security of medical devices in order to achieve a high common level of safety. Case studies in the report include:
- exchanging patients’ health data,
- clinical trials,
- patient-sourced monitoring of health data.
Big Tech: data brokers, smartphone health monitoring, China’s crackdown on Bing algorithms
The legal implications of personal data usage by the data brokerage industry have been analysed by the Guardian. A new lawsuit reportedly involves two companies in this vast network: X-Mode, a data broker, and NybSys, one of X-Mode’s customers. The lawsuit claims people’s exact location data, rather than a summary or analysis of that information, was sold through a chain of industry players without knowledge or permission from X-Mode. Data brokers collect personal data from a variety of sources, including social media, public records, and other commercial sources or companies. These firms then sell that raw data, or inferences and analysis based on it, such as a user’s purchase and demographic information, to other companies, like researchers or advertisers.
Google wants to use smartphones to monitor health, saying it will test whether capturing heart sounds and eyeball images can help people identify issues from home, Reuters reports. The company is investigating whether a smartphone’s built-in microphone can detect heartbeats and murmurs when placed over the chest, allowing early detection of heart valve disorders, etc. Google also plans to test whether its artificial intelligence software can analyse ultrasound screenings taken by less-skilled technicians, as long as they follow a set pattern.
Microsoft’s Bing, the only major foreign search engine available in China, said a government agency has required it to suspend its auto-suggest function in the country for a week, Reuters reports. It is the second such case for Bing since December, and comes amid Beijing’s ongoing crackdown on technology platforms and algorithms. Since August, China’s top cybersecurity authorities have published draft rules dictating how internet platforms can and cannot make use of algorithms; these came into effect this month.