Weekly digest March 14 – 20, 2022: smart contracts, AI bias, password managers & privacy

TechGDPR’s review of international data-related stories from press and analytical reports.

Official guidance: smart contracts, DPOs, AI risk management, GDPR cooperation

The Spanish data protection authority AEPD analyzed smart contracts: algorithms stored on a blockchain that execute automated decisions. When a smart contract is applied to the data of natural persons, it falls within the scope of Art. 22 of the GDPR, which gives data subjects the right not to be subject to decisions based solely on automated processing, including profiling, where those decisions produce legal effects concerning them or similarly significantly affect them, and the right to contest such decisions. Art. 22 also establishes three exceptions to this prohibition: explicit consent, the conclusion or performance of a contract between the data subject and a data controller, or the existence of an enabling law. In all cases, it is necessary to identify a person responsible for the execution of the smart contract. The most famous use case is the one known as the DAO Fork of Ethereum.
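
As a loose illustration of why such contracts trigger Art. 22, here is a minimal, hypothetical Python sketch of the kind of self-executing decision logic a smart contract encodes; real contracts run on-chain (typically written in Solidity), and the names and thresholds below are invented, not drawn from the AEPD analysis:

    # Hypothetical sketch: the kind of automated, self-executing decision
    # a smart contract encodes. Once deployed, it runs with no human in the loop.
    from dataclasses import dataclass

    @dataclass
    class LoanTerms:
        applicant: str        # a natural person, so Art. 22 GDPR is in play
        credit_score: int
        collateral_eth: float

    def automated_loan_decision(terms: LoanTerms) -> bool:
        """Approves or denies automatically, based solely on coded criteria."""
        return terms.credit_score >= 650 and terms.collateral_eth >= 1.5

    decision = automated_loan_decision(LoanTerms("Alice", 620, 2.0))
    print("approved" if decision else "denied")  # -> denied, with no human review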

A new practical guide for Data Protection Officers was published by the French data protection authority CNIL (available in English). The spirit of the GDPR is to make the DPO the “orchestra conductor” of personal data management in the organization that designates them. The DPO’s hierarchical position should reflect this, and their resources must be adequate for them to fully perform their job and their role as compliance coordinator. They should not work in a vacuum but be fully integrated into the operational activities of their organization, working with the CISO, the IT department, and others. The DPO guide is divided into four chapters: 

  • the role of the DPO; 
  • designating the DPO; 
  • the exercise of the DPO’s tasks; 
  • CNIL’s support for the DPO. 

Each theme is illustrated with concrete cases and frequently asked questions on the subject at hand.

The US NIST seeks comments on its draft AI risk management framework (AI RMF) and offers guidance on AI bias. The framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. It aims to provide a flexible, structured, and measurable process to address AI risks throughout the AI lifecycle. Bias is one such risk that can harm individuals: AI can make decisions that affect whether a person is admitted to a school, approved for a bank loan, or accepted as a rental applicant. AI systems can exhibit biases that stem from their programming and data sources (e.g., machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group). NIST researchers thus recommend widening the search for the sources of these biases beyond the machine learning processes and training data to the broader societal factors that influence how technology is developed. Read the full draft AI RMF and guidance on AI bias here.
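
As a toy illustration of the dataset-imbalance problem NIST describes (the data below is synthetic and hypothetical, not from the report), the sketch trains a classifier on a sample that underrepresents one group and then compares accuracy per group:

    # Toy illustration: a model trained on data that underrepresents group B
    # ends up markedly less accurate for group B. Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Each group has a different feature distribution and decision boundary.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
        return X, y

    # Group A dominates the training set; group B is underrepresented.
    Xa, ya = make_group(1000, shift=0.0)
    Xb, yb = make_group(50, shift=1.5)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

    # Evaluate on fresh, equally sized samples: accuracy for B is far lower,
    # because the model mostly learned group A's decision boundary.
    for name, (X, y) in {"A": make_group(500, 0.0), "B": make_group(500, 1.5)}.items():
        print(f"group {name} accuracy: {model.score(X, y):.2f}")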

The EDPB adopted several new documents last week:

  • guidelines on Art. 60 of the GDPR (a detailed description of the GDPR cooperation between Supervisory Authorities (SAs), helping them interpret and apply their own national procedures in a way that conforms to and fits within the one-stop-shop cooperation mechanism); 
  • guidelines on dark patterns in social media platform interfaces (concrete examples of dark pattern types, best practices for different use cases, and specific recommendations for designers of user interfaces seeking to facilitate the effective implementation of the GDPR); and
  • a toolbox on essential data protection safeguards for enforcement cooperation between EEA and third-country SAs (covering key topics such as enforceable rights of data subjects, compliance with data protection principles, and judicial redress).

Legal processes: cyberattack disclosure in the US

New US cybersecurity incident reporting mandates have been signed into law, making it a legal requirement for operators of critical national infrastructure (CNI) to disclose cyberattacks to the government. Namely, CNI owners within the US must report substantial cyberattacks to the Cybersecurity and Infrastructure Security Agency (CISA) within 72 hours, and any ransomware payments made within 24 hours. The law enables CISA to subpoena organizations that fail to do so, with the threat of referral to the US Department of Justice for non-compliance. CISA has not said how it will use data gleaned from breach reports but has been seeking to build its capabilities and work more closely with the private sector on a voluntary basis. CISA lists 16 broad sectors spanning health, energy, food, and transportation as critical to the US, although the new legislation has yet to spell out precisely which companies will be required to report cyber incidents. 

Data breaches and enforcement actions: insufficient TOMs, ransomware, unwanted marketing calls, Irish/Meta fine

The Danish data protection authority Datatilsynet criticized Kombit (an IT/project organization) for violating Art. 32 of the GDPR, following data breaches reported by 30 municipalities, Data Guidance reports. An error in the platform used by the municipalities allowed a user to access another user’s files, including personal data, if the latter had not logged out of their computer. The IT company had not complied with the rules on data security: no sufficient testing of the platform was carried out in connection with a code change (a modification of the platform’s login solution), and access rights controls were insufficient. Additionally, Kombit and another company could not agree on what tests could be expected in connection with the code changes, or on whether the other company was acting as a sub-processor.
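
The breach pattern described, one user reaching another user’s files, is a classic missing object-level authorization check. Below is a minimal, hypothetical sketch of the kind of server-side ownership control that prevents it; the names and in-memory store are purely illustrative, not Kombit’s actual code:

    # Hypothetical sketch: every file request is checked against the authenticated
    # session's identity, not just against "someone is currently logged in".
    from dataclasses import dataclass

    @dataclass
    class Session:
        user_id: str

    FILE_OWNERS = {"case-001.pdf": "alice", "case-002.pdf": "bob"}  # toy store

    class Forbidden(Exception):
        pass

    def fetch_file(session: Session, filename: str) -> str:
        owner = FILE_OWNERS.get(filename)
        if owner is None or owner != session.user_id:
            raise Forbidden(f"{session.user_id} may not read {filename}")
        return f"contents of {filename}"

    print(fetch_file(Session("alice"), "case-001.pdf"))   # allowed: alice owns it
    # fetch_file(Session("alice"), "case-002.pdf")        # raises Forbidden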

The UK Information Commissioner’s Office (ICO) announced fines totalling approx. 482,000 euros against five companies responsible for over 750,000 unwanted marketing calls targeting older, vulnerable people. The companies (Domestic Support Ltd, Home Sure Solutions, Seaview Brokers, UK Appliance Cover, and UK Platinum Home Care Services) were calling people to sell insurance products or services for large household appliances such as televisions, washing machines, and fridges. In the UK, live marketing calls must not be made to anyone registered with the Telephone Preference Service unless they have told the caller that they wish to receive such calls. The ICO also issued these companies with enforcement notices requiring them to immediately stop making these predatory calls.

The ICO also fined a law firm approx. 116,784 euros for contravening Art. 5 and Art. 32 of the GDPR by failing to process personal data in a manner ensuring its appropriate security, GDPRHub reports. Tuckers Solicitors, a limited liability partnership of solicitors, was the data controller. In 2020, the firm became aware that its systems had been hit by a ransomware attack and reported the data breach to the ICO the same day. Some facts and findings from the case:  

  • The attack resulted in the encryption of numerous civil and criminal legal case bundles stored on an archive server. 
  • Backups were also encrypted by the attacker.
  • Although the firm’s GDPR and Data Protection Policy required two-factor authentication where available, it was not used for remote access. 
  • The firm installed a relevant patch only months after its release, during which time the attacker could have exploited the vulnerability. 
  • The firm moved its servers to a new environment, and the business was back to running as normal, albeit without the restoration of the compromised data.
  • Proper encryption could have mitigated the damage (though it would not have prevented the ransomware attack).

The ICO held that multi-factor authentication was a low-cost measure that could have substantially helped Tuckers prevent unauthorized access to its network. The firm also should not have been processing sensitive personal data on infrastructure containing known critical vulnerabilities without appropriately addressing the risk.
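
For readers unfamiliar with the mechanics, here is a minimal sketch of the time-based one-time password (TOTP) check behind most second factors, using the widely used pyotp library; this illustrates the general technique only and is not drawn from the case:

    # Minimal TOTP sketch: a second factor derived from a shared secret and the
    # current time, checked alongside the password. Requires: pip install pyotp
    import pyotp

    secret = pyotp.random_base32()      # provisioned once, held by user app and server
    totp = pyotp.TOTP(secret)           # 6 digits, 30-second window by default

    code = totp.now()                   # what the user's authenticator app displays
    print("user enters:", code)
    print("server accepts:", totp.verify(code))                 # True within the window
    print("server accepts stale code:", totp.verify("000000"))  # almost surely False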

Ireland’s data protection authority (DPC) imposed a 17 million euro fine on Facebook parent Meta Platforms after an inquiry into 12 data breach notifications from 2018. The DPC found that Meta Platforms failed to have in place appropriate technical and organisational measures that would enable it to readily demonstrate the security measures it implemented in practice to protect EU users’ data. Given that the processing under examination constituted “cross-border” processing, the DPC’s decision was subject to the co-decision-making process outlined in Art. 60 of the GDPR, and all of the other European supervisory authorities were engaged as co-decision-makers. While two of the European supervisory authorities raised objections to the DPC’s draft decision, a consensus was achieved through further engagement between the DPC and the supervisory authorities concerned. Ireland regulates Meta and a number of other US tech giants because their EU headquarters are in the country. The DPC, which has a number of ongoing investigations into Meta, last year fined its WhatsApp subsidiary a record 225 million euros.

Data security: password managers

An analysis by the Guardian looks at password managers for convenience and enhanced online safety. The article argues that long and complex passwords are more secure but difficult to remember, leaving many people using weak, easy-to-guess credentials. Password manager apps resolve this problem by creating long and complex credentials for you and remembering them the next time you log in: “Password managers keep your details secure by encrypting your logins so they can only be accessed when you enter the master password.” (A minimal sketch of such a master-password scheme follows the list below.) Yet reportedly only about one in five people in the UK use one. Some other findings from UK experts are:

  • Never keep a virtual password book, i.e. a document of logins on your computer, which could be viewable if your device is hacked.
  • Password managers should be backed by two-factor authentication, whereby you are asked for something such as a one-time code in addition to a password when you log in using a new device.
  • A security key is an option – a token you can insert into your device to double-secure high-risk accounts such as email. 
  • Authenticator apps are another option. These generate a unique code for you to enter into the site and are very straightforward to use.
  • Apple Keychain and the Google Chrome Password Manager lack the features of “full-service” ones. 
  • Physical password books aren’t a bad idea, as long as you create strong, unique logins, and the book is kept somewhere secure and doesn’t leave the house.
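
As promised above, here is a rough, hypothetical sketch of the master-password idea using Python’s cryptography library: a key is derived from the master password and used to encrypt each stored login. The parameters are illustrative only; real password managers use audited, considerably more elaborate schemes.

    # Sketch: derive a key from the master password, then encrypt a stored login.
    # Requires: pip install cryptography
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_master(password: bytes, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password))

    salt = os.urandom(16)                               # stored alongside the vault
    vault = Fernet(key_from_master(b"correct horse battery staple", salt))

    token = vault.encrypt(b"example.com: hunter2")      # what sits on disk
    print(vault.decrypt(token))                         # readable only via the master password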

DPIA: Zoom case

Zoom is making changes to the privacy agreements for all education and enterprise users in Europe in collaboration with SURF (the ICT service provider for Dutch education and research). It has removed the privacy risks identified in the 2021 DPIA by making changes to the software, concluding processor agreements, and promising future changes. These contractual and technical adjustments are described in the newly published DPIA. They include:

  • Data location solutions (all personal data to be processed in the EU by the end of the year); 
  • Data Subject Access Requests (Zoom to provide two self-service tools for enterprise and education account administrators); 
  • Clarification of the data protection roles of Zoom and its customers (universities and government organizations);
  • Clarified and minimized customer personal data retention practices; 
  • Privacy by design and default; and
  • An updated Data Transfer Impact Assessment, and much more.

Big Tech: all-new GA, apps leaking sensitive data, Tesla’s facial and optical tracking

The all-new Google Analytics 4 will be the first data measurement tool released by the company with privacy designed “at its core”, upgrading the privacy features of the recent Analytics 360 tool, which will be retired along with Universal Analytics. The company says IP addresses will no longer be stored, which could ease compliance in international markets and with the EU GDPR requirements for data transfers.

Are your apps leaking sensitive user data? A study revealed that 2,113 apps had vulnerabilities in their Firebase back ends because of cloud misconfigurations, IAPP News reports. Certain apps had tens of millions of downloads and included popular e-commerce, social audio, logo design, and bookkeeping apps, and even a dating app. Exposed data included user names, passwords, phone numbers, bank details, and some 50,000 chat messages. A separate study also found that 14% of Android and iOS apps using public cloud back ends had similar privacy issues due to misconfigurations.
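
A common misconfiguration behind such leaks is a Firebase Realtime Database left world-readable. Below is a minimal sketch of the kind of unauthenticated REST probe researchers use to spot it; the project URL is hypothetical, and you should only query databases you own or are authorized to assess:

    # Sketch: an open Firebase Realtime Database answers unauthenticated REST reads.
    # "example-project" is hypothetical; probe only databases you may lawfully test.
    import requests

    url = "https://example-project-default-rtdb.firebaseio.com/.json"
    resp = requests.get(url, timeout=10)

    if resp.status_code == 200:
        print("World-readable: data exposed without any credentials")
    elif resp.status_code in (401, 403):
        print("Locked down: security rules require authentication")
    else:
        print("Other response:", resp.status_code)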

Integral to Tesla’s Autopilot and Full Self-Driving features is software that watches your eyes while you watch the road, using facial and optical tracking to monitor your driving. Now a driver in Illinois has filed a proposed class action against Tesla Inc. for recording and storing biometric data without informed consent, which is illegal under Illinois’s Biometric Information Privacy Act (BIPA). The suit also claims Tesla failed to make its data retention policy public and failed to inform customers where facial recognition data was stored. Damages of 5,000 dollars per BIPA violation are being sought.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
