Data protection digest 3 – 17 May 2024: Wi-Fi tracking, exam monitoring, data theft and extortion

In this issue, we explore the privacy implications of emerging technologies in commerce, education, industry and the workplace, such as Wi-Fi tracking, exam monitoring and algorithmic management.

Stay up to date! Sign up to receive our fortnightly digest via email.

Wi-Fi tracking

The Spanish data protection regulator AEPD has published guidelines for personal data processing activities that incorporate Wi-Fi tracking technologies. Wi-Fi tracking identifies and tracks mobile devices based on the Wi-Fi signals they emit, detecting their presence in a given area and deriving movement patterns. Practical uses may be found in shopping malls, museums, public spaces, transit, and large events, to assess capacity, analyse traffic flows, and track dwell times.

Because the technology can make it possible to follow people’s movements without their knowledge or a valid legal basis, Wi-Fi tracking may cause significant privacy problems. Given these risk factors, a prior Data Protection Impact Assessment (DPIA) must be completed, even though the party responsible for the tracking may not be fully aware of this obligation. Using these technologies also requires the provision of easily understandable information via, among other things, voice alerts, public signs, visible information panels, and information campaigns.
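
Technically, such systems typically work by capturing the probe-request frames that phones broadcast, each carrying a MAC address. One mitigation regulators commonly expect is pseudonymising that identifier before storage. Below is a minimal Python sketch assuming salted hashing with a rotating salt; the scheme and names are illustrative, not taken from the AEPD guidance.

```python
import hashlib
import secrets

# Hypothetical rotating salt: regenerating it daily prevents linking the same
# device across days, a mitigation a DPIA would typically weigh up.
DAILY_SALT = secrets.token_bytes(16)

def pseudonymise_mac(mac: str, salt: bytes = DAILY_SALT) -> str:
    """Replace a raw MAC address captured from a probe request with a salted
    hash, so footfall and dwell times can be measured without storing the
    identifier itself."""
    normalised = mac.lower().replace("-", ":").encode()
    return hashlib.sha256(salt + normalised).hexdigest()

# Two sightings of the same device map to the same token today...
assert pseudonymise_mac("AA:BB:CC:DD:EE:FF") == pseudonymise_mac("aa-bb-cc-dd-ee-ff")
# ...while tomorrow's salt would yield a different, unlinkable token.
```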

Providing public Internet access

Many spaces offer internet access to their users: hotels, restaurants, media libraries, museums, transport, etc. According to the French regulator CNIL, those responsible for providing this access are subject to legal obligations to retain “traffic data” and to comply with data protection principles. “Traffic data” is technical information that includes, for example, the IP address that can be used to identify the device used, the date, time and duration of each connection, or data that can be used to identify the addressee of the communication (e.g. the telephone number called).

In principle, this information should be erased or anonymised. However, some legal texts derogate from this rule by requiring providers to retain it, to allow the investigation and prosecution of criminal offences by the police, gendarmerie and justice services. For details of what data must be kept and for how long, read the original guidance (in French).
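
As an illustration of what “erase or anonymise” can look like in practice, here is a minimal Python sketch that blanks the identifying fields of a traffic record once an assumed retention window has passed. The record layout and the one-year window are invented for illustration; the real periods are fixed by the French texts the CNIL guidance cites.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative one-year window; the actual retention periods come from the
# French legal texts, not from this sketch.
RETENTION = timedelta(days=365)

@dataclass
class TrafficRecord:
    ip_address: Optional[str]   # identifies the device used
    connected_at: datetime
    duration_s: int
    callee: Optional[str]       # e.g. the telephone number called

def expire(record: TrafficRecord, now: datetime) -> TrafficRecord:
    """Blank the identifying fields once the retention window has passed,
    keeping only non-identifying usage statistics."""
    if now - record.connected_at > RETENTION:
        record.ip_address = None
        record.callee = None
    return record
```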

Credit bureau databases

The information available in databases about the financial obligations of individuals may adversely affect their ability to obtain loan services, states the Latvian data protection authority DVI. To reduce credit risk, promote responsible and honest borrowing, and ensure more effective availability of credit information, credit information bureaus collect a wide range of credit information on natural persons, based on powers specified in regulatory acts and within deadlines set by law.

As a result, the mere fact that an individual has not granted permission for their information to be included in such databases, or does not wish for it to be collected, does not mean that unlawful processing of personal data is taking place. Normative acts specify in detail the sources from which a credit bureau gets its data and the circumstances under which users of credit information are permitted to add details about personal debt to the database (such as late payments, court orders, or client approval). Should individuals believe that inaccurate data is held in the database, they should contact the bureau or the source of the credit information with a formal objection, attaching copies of the supporting documents.

More official guidance

AI application: The German data protection authorities have published joint guidance on AI and data protection. It is primarily aimed at those responsible for using AI applications, as well as developers, manufacturers and providers of AI systems. It covers many aspects of AI systems, from legal bases, transparency obligations and data subject rights to warnings regarding special categories of personal data and the need to check results for accuracy and discrimination. Finally, certain usages of AI applications may be inadmissible from the outset. For example, under the upcoming EU AI Act, “social scoring” and biometric real-time surveillance of public spaces are either completely prohibited or only permitted under very strict exceptional conditions.

Privacy-related survey: Meanwhile in Canada, a new survey states that 12% of businesses across the country collect personal information from minors. Although just 6% of Canadian companies say that they currently use AI, nearly a quarter indicated that they intend to use this emerging technology in the next five years. Actions that businesses report taking to manage their privacy obligations include:

  • designating a privacy officer (56%)
  • having procedures to deal with complaints (53%)
  • having internal privacy policies (50%)
  • having procedures to deal with access requests (50%)
  • providing staff with privacy training (33%)

Car and consumer data: The US Federal Trade Commission reminds us that while connectivity can let drivers do things like play their favourite internet radio stations or unlock their car with an app, connected cars can also collect a lot of data about people. Companies that feed consumer data (which may include sensitive information like location or biometric data) into algorithms may be liable for harmful automated decisions (e.g., ones that affect insurance rates). Finally, if a company gathers a lot of sensitive data and shares it with foreign parties, it may create problems for national security.

Legal processes

Germany’s DSA adjustments: The German Digital Services Act (DDG) came into effect on 14 May, creating the national framework required to effectively implement the EU Digital Services Act (DSA), including adjustments to jurisdiction and information duties, summarises a Taylor Wessing law blog. In particular, this requires changes to a website’s legal notice if it still expressly refers to the Telemedia Act and the Telecommunications Telemedia Data Protection Act, which no longer apply.

The DSA and its member-state implementing acts apply to all digital services across the EU. Among many things, the DSA sets out rules for advertising on online platforms, including a ban on using certain personal data for advertising purposes. The national data protection authorities will generally enforce rules in this area, along with assigned national regulatory authorities. Meanwhile, compliance oversight for very large online platforms and very large online search engines remains with the Commission in Brussels.

Combating child abuse online: On 15 May, the amending EU regulation (a derogation from the ePrivacy Directive) that allows providers of so-called number-independent interpersonal communications services (e.g., messaging services) to use specific technologies to process personal and other data in order to detect online child sexual abuse on their services, and to report and remove it, was extended until 3 April 2026. The prolongation also requires comprehensive reporting and comparable statistics to be submitted to the authorities and the Commission in a structured format.
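
The “specific technologies” in question commonly include hash matching against reference databases of known material. Here is a minimal Python sketch of the exact-matching variant; the reference set and function names are hypothetical, the regulation does not prescribe any particular implementation, and production systems often use perceptual hashing instead to catch re-encoded copies.

```python
import hashlib

# Hypothetical reference set of digests of known abusive material, as would
# be supplied by a recognised body; the value here is only a placeholder.
KNOWN_DIGESTS = {"0" * 64}

def matches_known_material(payload: bytes) -> bool:
    """Exact hash matching: flags a file only when its digest appears in the
    reference set. Unlike perceptual hashing, this misses any re-encoded or
    cropped copy of the same material."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_DIGESTS
```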

Child safety online code of practice

In the UK, communications regulator Ofcom sets out more than 40 practical steps that digital services must take to keep children safer in its draft recommendations: a) introduce robust age checks to prevent children from seeing harmful content; b) ensure that algorithms which recommend content do not operate in a way that harms children; c) filter harmful material out (a ‘safe search’ setting) or downrank it in recommended content, etc.
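
As a rough illustration of point c), a recommender serving a child’s account might filter or downrank flagged items before ranking. The following Python sketch is invented for illustration; Ofcom’s draft does not specify an implementation, and the data model and scoring are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    score: float           # recommender relevance score
    flagged_harmful: bool  # set by upstream content moderation

def rank_for_child(items: list[Item], downrank: float = 0.1) -> list[Item]:
    """Push flagged items far down the feed by scaling their scores; a
    stricter 'safe search' setting would filter them out entirely."""
    def adjusted(i: Item) -> float:
        return i.score * downrank if i.flagged_harmful else i.score
    return sorted(items, key=adjusted, reverse=True)
```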

The new UK Online Safety Act imposes strict new duties on services (“user-to-user services” and “search services”) that can be accessed by children, including popular social media sites, apps and search engines. Firms must first assess the risk their service poses to children and then implement safety measures to mitigate it. In some cases, this will mean preventing children from accessing all, or part, of a site or app. Some platforms will be required to publish annual transparency reports, such as information about the algorithms they use and their effect on users’ experience, including that of children.

Algorithmic management abuse

Privacy International (PI) reports that companies are increasingly tracking their workers and deploying unaccountable algorithms to make major employment decisions over which workers have little or no control or understanding. While gig economy workers, content creators and warehouse operatives are at the sharp end of the algorithmic black box, opaque and intrusive surveillance practices are embedding themselves across many industries and workplaces. PI monitors and records these cases by country and by industry and catalogues the harms.

More enforcement decisions

Telephone operator: In Finland, the data protection regulator considers that a telecom operator has the right to keep the data of its mobile phone customers for three years after the end of the customer relationship. The time limit stems from the fact that, according to the law, debts expire in three years. If the information were deleted earlier than that, the company would not have the opportunity to defend itself in a situation where a customer or other creditor makes claims (invoicing or complaints). In the related case, the customer had asked the telecom operator to delete all the data about him. The operator had not agreed to the request, despite the customer relationship having ended more than ten years earlier.
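
The regulator’s reasoning maps naturally onto a retention rule keyed to the limitation period. Here is a minimal Python sketch; the date arithmetic is simplified and the function is illustrative, not taken from the decision.

```python
from datetime import date, timedelta

# Three-year limitation period for debts under Finnish law; counting it as
# 3 * 365 days is a simplification for illustration.
LIMITATION_PERIOD = timedelta(days=3 * 365)

def erasure_due(relationship_ended: date, today: date) -> bool:
    """Data may be retained while claims arising from the customer
    relationship can still be raised; afterwards a deletion request
    should be honoured."""
    return today - relationship_ended > LIMITATION_PERIOD

# In the reported case the relationship had ended over ten years earlier,
# so on this logic the deletion request should have been granted.
print(erasure_due(date(2012, 1, 1), date(2024, 5, 15)))  # True
```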

Car rental: In the UK, a car rental management trainee was fined (approx. 800 euros) after unlawfully obtaining customer data. An internal audit found he had accessed over two hundred customer records across 25 different rental branches. He was dismissed for gross misconduct shortly thereafter. The company had not consented to the trainee obtaining this data, stating that accessing this information fell outside his role and there was no business need for him to do so.

Exam monitoring: The Danish data protection authority has completed an inspection of Roskilde Katedralskole’s use of software for examination monitoring. The school did not carry out a sufficient risk assessment and, as a result, failed to ensure data protection by design. The assessment should have taken into account that the examination and the monitoring took place on the students’ own computers. Students should be able to shield confidential information against unintentional disclosure during exams; policies could, for instance, advise students to use a separate browser for the test that does not store their personal data.
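
One way to follow that advice in practice is to launch the exam browser with a throwaway profile, so no saved logins, history or autofill data are visible to the monitoring software. A minimal Python sketch follows; the wrapper itself is an assumption and not part of the Danish decision, although --user-data-dir and --no-first-run are genuine Chrome flags.

```python
import subprocess
import tempfile

def launch_exam_browser(chrome_path: str = "google-chrome") -> None:
    """Start Chrome with a fresh, empty profile so the student's saved
    logins, history and autofill data stay out of the monitored session.
    Other browsers offer equivalents (e.g. separate Firefox profiles)."""
    profile_dir = tempfile.mkdtemp(prefix="exam-profile-")
    subprocess.run([
        chrome_path,
        f"--user-data-dir={profile_dir}",  # isolated profile directory
        "--no-first-run",                  # skip first-run prompts
    ])

# launch_exam_browser()  # uncomment on a machine where Chrome is installed
```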

Data security

Ransom attacks: The potential harm caused by recent ransom attacks is explained by the UK National Cyber Security Centre. Some groups have started to conduct ‘data theft and extortion only’ attacks, without deploying ransomware and encrypting victims’ systems. These tactics, whether ransomware encryption or extortion-only, show how cybercriminals will adopt whatever technology (or business model) allows them to best exploit their victims.

For example, criminals deploy ransomware attacks to disrupt logistics companies, which need their data to function, but favour extortion-only attacks against healthcare services (where patient privacy is paramount). In a “least-worst case” scenario, the stolen data is system data (necessary for the victim’s IT operations to function). In a worst-case scenario, sensitive personal data (such as medical or legal information) is compromised. Read more about the main causes of security breaches here.

Health apps: According to Netskope’s recent analysis, the average user in the healthcare sector interacts with 22 cloud apps per month. However, the top 1% of users, public and professional, engaged with 94 applications every month. Since its peak a year ago, the share of malware downloads delivered via cloud applications across all sectors has progressively declined, averaging around 50% (the other half originates from standard websites). The trend is reversed in the healthcare sector, where cloud apps account for nearly 40% of all malware downloads, up from roughly 30% a year earlier.

Azorult, Amadey and the NjRat trojan were three of the most common malware families targeting the healthcare industry.

Big Tech

Facebook/Instagram investigation: The European Commission has launched an investigation into Facebook and Instagram based on the Digital Services Act. The suspected infringements cover Meta’s policies and practices relating to deceptive advertising and political content on its services. The Commission is also concerned about the non-availability of an effective third-party real-time civic discourse and election-monitoring tool ahead of the elections to the European Parliament, against the background of Meta’s scrapping (on 14 August) of its real-time public insights tool CrowdTangle without an adequate replacement.

The Commission also suspects that the mechanism for flagging illegal content on the services and the user redress and internal complaint mechanisms are not compliant with the requirements of the Act, and that there are shortcomings in Meta’s provision of access to publicly available data for researchers. The opening of proceedings is based on a preliminary analysis of the risk assessment report sent by Meta in 2023. Read the full list of allegations in the original publication.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
