TechGDPR’s review of international data-related stories from press and analytical reports.
Legal processes: EU Digital Strategy, IoT, biometrics policing program, US surveillance ads
The EU Parliament moved forward on the Digital Services Act (part of the EU Digital Strategy), which regulates platforms to create a safer online space for users. MEPs gave the green light to open negotiations with member states. The Parliament introduced several changes to the Commission’s proposal, including exempting micro and small enterprises from certain obligations, as well as provisions on:
- Targeted advertising: more transparent and informed choice for the recipients of digital services, including information on how their data will be monetised.
- Refusing consent shall be no more difficult or time-consuming than giving consent.
- If their consent is refused or withdrawn, recipients shall be given other options to access the online platform, including “options based on tracking-free advertising”.
- Targeting or amplification techniques involving the data of minors or special categories of data for the purpose of displaying ads will be prohibited.
- Recipients of digital services and organisations representing them must be able to seek redress for damages.
- Platforms should be prohibited from using techniques that deceive or nudge users.
- Very Large Online Platforms should provide at least one recommender system that is not based on profiling.
The EU Commission published its latest competition sector inquiry report into the consumer Internet of Things (IoT). Among the main areas of potential concern are:
- The role of voice assistants and smart devices as intermediaries for data generation and collection, which would allow them to control user relationships.
- Voice assistant providers’ extensive access to data, including information on user interactions with third-party smart devices and consumer IoT services.
- The access to and accumulation of large amounts of data allow voice assistant providers to improve their market position.
The IoT inquiry urges companies to review their commercial practices, as its findings will inevitably feed into the ongoing legislative process on the EU Digital Markets Act (part of the EU Digital Strategy). Read the report and the staff working document for more detailed information.
According to Human Rights Watch, Greece’s new biometrics policing program could undermine privacy and create risks of profiling and other abuses. The police would reportedly use hand-held devices to gather biometric information (fingerprints and facial images) from people on a vast scale and cross-check it against police, immigration and private-sector databases, primarily for immigration purposes. Human Rights Watch believes that a) the Greek police should use their authority to stop people and require them to show identity documents only when there is reasonable suspicion that the person is involved in illegal activity, and b) the police should put in place systems to check the validity of identity documents without detaining people or gathering biometric data. In 2019 the Greek police signed a contract with Intracom Telecom to help create the “smart policing” program. Since 2020, the Hellenic Data Protection Authority (DPA) has been investigating its lawfulness. The launch of the program was planned for 2021, but has been delayed several times.
The Banning Surveillance Advertising Act was introduced in the US House of Representatives. The draft legislation prohibits advertising networks and facilitators from using personal data to target ads, with the exception of broad location targeting to a recognized place (such as a municipality). The bill also prohibits advertisers from targeting ads based on protected class status information, such as race, gender, and religion, and personal data purchased from data brokers. However, it makes explicit that contextual advertising, which is advertising based on the content a user is engaging with, is allowable. It also provides authorisations for the FTC or the state attorneys general to enforce violations of the Act. Read the full draft law here and detailed section-by-section summaries here.
Official guidance: Bluetooth security, clinical trials Code of Conduct, the right to access, housing, processor/EU representative
The US National Institute of Standards and Technology (NIST) published its updated guide on Bluetooth security. Bluetooth wireless technology is used primarily to establish wireless personal area networks and has been integrated into many types of business and consumer devices. The Bluetooth specifications define several security modes; each version of Bluetooth supports some, but not all, of them, and some modes require no security at all. The updated NIST guide provides exhaustive information on the security capabilities of Bluetooth and gives step-by-step management, technical and operational recommendations to organizations for securing the Bluetooth technologies they employ.
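For orientation, the BR/EDR security modes discussed in the NIST guide can be summarised in a small lookup. This is an illustrative sketch paraphrasing the guide’s mode descriptions; the version-to-mode mapping is deliberately simplified (the `modes_for_version` helper and its cut-off at v2.1 are assumptions for illustration, not a restatement of the specification).

```python
# Simplified summary of Bluetooth BR/EDR security modes, paraphrasing the
# descriptions in NIST's Bluetooth security guidance. Illustrative only.
BREDR_SECURITY_MODES = {
    1: "No security (no authentication or encryption); legacy devices only",
    2: "Service-level enforced security, initiated after link establishment",
    3: "Link-level enforced security, initiated before link establishment",
    4: "Service-level enforced security with Secure Simple Pairing "
       "(mandatory for v2.1 + EDR and later)",
}

def modes_for_version(version: float) -> list[int]:
    """Return the security modes a BR/EDR device of the given spec version
    would typically operate in (simplified assumption: v2.1+ devices must
    use Mode 4 for new connections; earlier versions may use Modes 1-3)."""
    return [4] if version >= 2.1 else [1, 2, 3]

if __name__ == "__main__":
    for mode in modes_for_version(2.1):
        print(f"Mode {mode}: {BREDR_SECURITY_MODES[mode]}")
```

The point the guide makes is visible here: which security guarantees apply depends on both the specification version and the mode actually negotiated, which is why NIST recommends explicit organizational policy rather than relying on defaults.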
The European Federation of Pharmaceutical Industries and Associations, EFPIA, confirmed that its GDPR Code of Conduct on Clinical Trials and Pharmacovigilance has progressed to the final phase of review by data protection authorities prior to formal submission to the EDPB for approval. The EFPIA believes that a GDPR Code of Conduct will:
- Enable the sector to align on key data protection positions, providing more consistency, clarity and certainty for clinical research.
- Bring more certainty to third parties (patients, ethical committees and hospitals).
- Clarify the linkages between the GDPR and other key sectoral legislation such as the Clinical Trials Regulation.
- Respond to the Commission’s policy ambition for the European Health Data Space to improve data governance, etc.
The EDPB adopted guidelines on the right of access, which enables individuals to learn how and why organisations process their personal data. Among other things, the guide clarifies the scope of the right of access, the information the controller has to provide to the data subject, the format of the access request, the main modalities for providing access, and the notion of manifestly unfounded or excessive requests. The Guidelines will be subject to public consultation for a period of 6 weeks and made available on the EDPB website once the consultation is completed.
The Bavarian data protection authority for the private sector, BayLDA, is examining the area of housing management and, in particular, self-disclosure by prospective tenants, DataGuidance reports. The BayLDA clarified that when contact is made and a viewing appointment is arranged, information about the prospective tenant’s occupation and income is not yet required. Only if the person viewing the flat continues to be interested is it permissible to ask about the number of people moving in and the prospective tenant’s occupation and income. If, at the end of the selection process, the landlord would like to conclude a tenancy agreement with the person, the submission of a self-disclosure from a credit agency may also be requested before the agreement is concluded.
The Croatian data protection authority AZOP analyzed whether a processor can also perform the role of a controller’s EU representative. The regulator states that, to ensure the processor does not face a conflict between the two duties, it would be advisable to establish processes and practices in the work environment that promote effective control, management and resolution of conflicts of interest (e.g. open communication and dialogue on ethics, and employee training). At the same time, establishing these procedures and exercising extensive control over the processor, in terms of the representative’s remit, could in practice prove unenforceable and counterproductive, resulting in distrust of the controller. The regulator therefore concludes that performing both functions in the same person would represent a possible conflict of interest, and should be prevented.
Data breaches, Investigations and Enforcement actions: aggressive telemarketing, Red Cross, demonstrators, IT solutions’ failed security
The Italian data protection authority, “Garante”, fined Enel Energia (a multinational manufacturer and distributor of electricity and gas) 26.5 million euros for aggressive telemarketing, use of consumer data without consent, and failure to comply with the accountability principle. The decision followed hundreds of complaints by users who had received unsolicited calls, some of them based on pre-recorded messages. Others had found it difficult to exercise their data protection rights or had encountered problems handling their data in connection with the supply of utility services, both on the company’s website and through the app released to manage power consumption. Enel Energia was ordered to bring all processing by its sales network into compliance with suitable arrangements, to implement further technical and organisational measures to handle data subjects’ requests, in particular the right to object to processing for promotional purposes, and to provide feedback on those requests within 30 days.
A massive cyber-attack targeted Red Cross and Red Crescent data on 500,000 people, taken from files stored at an external company in Switzerland that the ICRC contracts to store data. There is not yet any indication that the compromised information has been leaked or shared publicly. The attack compromised confidential information on highly vulnerable people, including those separated from their families due to conflict, migration and disaster, missing persons and their families, and people in detention. In response, the ICRC had to shut down the Restoring Family Links systems. The organisation asks those responsible for the attack not to share, sell, leak or otherwise use this data.
The Portuguese data regulator CNPD fined the Lisbon city municipality 1.25 million euros in a case related to the processing of personal data of participants in demonstrations. The mayor’s office had committed 225 breaches of demonstrators’ personal data between 2018 and 2021, namely when their details were shared with the embassies of several countries, the BBC reports. More than 100 other breaches that occurred since 2012 were not covered as they pre-dated the GDPR. Some of the breaches reportedly could have attracted fines of up to 20 million euros each, but the regulator refrained from imposing these due to the effect of the pandemic on public finances. When the story broke in June 2021, the data protection officer and the cabinet in charge of handling protesters’ data were dismissed, and an external audit of the city hall’s data protection policies was ordered, Reuters reports.
The Maltese data protection authority, IDPC, issued its decision on the personal data breach suffered by C-Planet (IT Solutions). In 2020 the regulator was informed about a security incident encountered by the company. The investigation concluded that C-Planet, in its capacity as controller, was processing the personal and special categories of data that were impacted by the breach, in violation of articles 5, 6, 9 and 14 of the GDPR. C-Planet failed to implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk. Additionally, the controller failed to notify the breach to the regulator within the deadline and to communicate it to the affected data subjects. The IDPC imposed a proportionate fine of 65,000 euros on the microenterprise, taking into account its turnover, and ordered the erasure of the personal data which had been processed unlawfully.
Data security: C-ITS, Smart Cities, Remote identity proofing
The German Federal Office for Information Security published its Technical Guidance on Cooperative Intelligent Transport Systems, C-ITS (available in English). Among many provisions, it describes trust and privacy management concerning the establishment and maintenance of identities and cryptographic keys. Because links between a vehicle and its user can be directly or indirectly deduced, the privacy impact on road users should be minimized through:
- Pseudonymity: a C-ITS station may use a resource or service without disclosing its identity but can still be accountable for that use.
- Unlinkability: a C-ITS station may make multiple uses of resources or services without others being able to link them together.
Classically, authenticity and integrity are ensured by a security architecture supported by a Public Key Infrastructure. In C-ITS, pseudonymity and unlinkability are incorporated and balanced with integrity and authenticity by means of separation of duties and regularly changing pseudonym certificates, so-called Authorization Tickets. Read the full C-ITS guide here.
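The rotation idea behind Authorization Tickets can be sketched in a few lines. This is a loose illustration of the principle only: the class name, the HMAC stand-in for certificate-based signatures, and the fixed per-ticket message budget are all assumptions for the sketch, not the BSI scheme, in which a real Authorization Ticket is a short-lived PKI certificate issued by a separate authority.

```python
# Illustrative sketch of pseudonym ("Authorization Ticket") rotation for
# unlinkability. HMAC keys stand in for per-pseudonym certificates.
import hashlib
import hmac
import secrets


class CITSStation:
    def __init__(self, ticket_pool_size: int = 5, msgs_per_ticket: int = 100):
        # Each ticket is a (pseudonym id, signing key) pair; in the real
        # scheme this would be a certificate from the PKI, not a local key.
        self.tickets = [(secrets.token_hex(4), secrets.token_bytes(32))
                        for _ in range(ticket_pool_size)]
        self.msgs_per_ticket = msgs_per_ticket
        self.sent = 0

    def _current_ticket(self):
        # Rotate to a new pseudonym after a fixed message budget, so that
        # messages signed under different tickets cannot be linked to one
        # long-lived identity by an outside observer.
        idx = (self.sent // self.msgs_per_ticket) % len(self.tickets)
        return self.tickets[idx]

    def sign(self, payload: bytes):
        ticket_id, key = self._current_ticket()
        self.sent += 1
        return ticket_id, hmac.new(key, payload, hashlib.sha256).hexdigest()


station = CITSStation(msgs_per_ticket=2)
ids = [station.sign(b"CAM position update")[0] for _ in range(4)]
# the first two messages share one pseudonym, the next two another
```

The balancing act the guide describes is visible here: each message is still authenticated (signed under *some* credential), but the credential changes often enough that observers cannot stitch the messages into a single track, while the separation of duties in the PKI preserves accountability.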
The German Federal Office for Information Security also published its recommendations for action on information security in Smart Cities and Smart Regions (in German). Smart cities and regions use the potential of digitization for municipal services of general interest, such as local public transport or waste disposal. Information security, especially of the underlying municipal IoT infrastructures, is of crucial importance here. The target group is municipal decision-makers and those responsible for operations, such as the chief digital officer of a municipality or the manager of a municipal IoT project. The recommendations are structured along the lifecycle of an IoT infrastructure. You can see the full guide here.
Meanwhile, the EU Agency for Cybersecurity, ENISA, published an explainer on remote identity proofing. Online users expect access to various services anytime and anywhere, so the need to securely onboard and prove a customer’s identity remotely is becoming critical for organisations. Identity and technology providers have implemented both active and passive security controls, which mostly involve the use of video and operator intervention (e.g. biometric acquisition, liveness checks, ID acquisition, authenticity checks, face comparison). Video allows a greater number of security checks, and operators help artificial intelligence identify new types of attack. Although many have faith in facial recognition technology, algorithms cannot understand and detect new fraud techniques (e.g. deep fakes) on their own. Therefore, humans are needed to clean and tag data, enabling the quality training that results in better performance and the mitigation of adversarial attacks.
Audits: Emailmovers Ltd
Following a test data purchase initiative run by the UK Information Commissioner’s Office (ICO), Emailmovers Ltd (EML) was investigated after serious concerns were identified about its data protection compliance. The investigation resulted in an enforcement notice followed by a consensual audit of the company’s systems. The checks took one week. The audit focused on the processing of personal data within EML’s marketing database and covered the following key control areas: governance, sourcing personal data, transparency and lawful basis for processing, data supply and sharing, and individual rights. The ICO identified both good practices (a proactive approach, training, managerial involvement in decision making) and areas for improvement (defining retention periods, maintaining a record of processing activities and decisions taken, notifying recipients of personal data about the existence and outcomes of individual rights), which can be read in the audit documentation.
AI: taxonomy and business models
The European Institute of Innovation and Technology published two reports on Artificial Intelligence business models and taxonomy in Europe. Both reports give in-depth recommendations on how to streamline knowledge, experience and expertise in AI deployment, as well as connect, share and encourage an open innovation environment with policy leaders, industrial experts and innovator communities (AI application providers, infrastructure providers and adopters). The trust ecosystem on Ethical AI includes, but is not limited to, the following dimensions:
- human agency and oversight;
- technical robustness and safety (including resilience to attack and security, fall-back plans and general safety, accuracy, reliability and reproducibility);
- privacy and data governance (including respect for privacy, quality and integrity of data, and access to data);
- transparency (including traceability, explainability and communication);
- diversity, non-discrimination and fairness (including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation), and more.
Big Tech: Apple AirTags, Google’s age-appropriate policy
Police across the US are reporting cases where stalkers have used Apple AirTags to track their victims, according to the Guardian. Paired with the Find My app, the attachable coin-sized gadget was designed so you would never lose anything again, but slipped into a bag or coat pocket it is the perfect tracking device for criminals. Other international police forces have also reported similar abuse of the AirTag, including in connection with car theft. While the AirTag’s several anti-abuse features make it less dangerous than other stalkerware on the market, an additional problem is the inconsistency of police responses. A 2021 Norton report claims stalkerware use is growing fast, having jumped in 2020 and the first half of that year’s successor.
Google has fallen foul of the UK’s Children’s code, introduced last September, which sets online services 15 privacy and design standards to protect minors. Google said it would immediately improve enforcement of its age-sensitive ad policy after Reuters reported that age-sensitive advertising for high-risk financial instruments, adult toys and alcohol was evading Google’s filters and safeguards. The campaign group 5Rights Foundation, which reviewed the Reuters findings, says all tech companies should do more to ensure compliance with the new rules, and consumers should beware of “safety washing”, as there were still too many cases indicating companies had yet to get serious about implementing changes.