Data protection digest 1-15 Jan 2025: mobile app permissions should work in conjunction with consent requirements – CNIL

Mobile app permissions

Technical permissions in mobile apps are very useful for privacy, explains the French regulator CNIL. They allow users to technically block access to certain data. However, these permissions are not designed to capture users’ consent within the meaning of the GDPR. Even when consent is required, a simple permission request does not always allow for free, specific, informed and unambiguous consent. There are also exemptions from consent, for example when the data is required for the service, such as for the functioning of a navigation app; even so, the OS supplier still requires a permission to access this information. An ideal permissions system working in conjunction with a consent management system should allow users to choose, without any confusion:

  • the degree of processing of the data provided according to the purpose pursued (eg, more or less precise location);
  • the material scope of the authorisation (eg, access to selected photos rather than the entire media gallery);
  • the duration of the authorisation (eg, one-time activation of the permission or activation for a predetermined period).
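To illustrate how this separation might look in practice, here is a minimal Android sketch in Kotlin; the ConsentManager, Purpose and durationDays names are hypothetical illustrations, not part of CNIL’s guidance. The idea is that purpose-specific consent is collected in the app’s own interface first, and only then is the least intrusive OS permission requested (coarse rather than fine location):

```kotlin
// Minimal sketch of a permission request kept separate from GDPR consent.
// ConsentManager, Purpose and durationDays are hypothetical names used for
// illustration only.

import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class NavigationActivity : AppCompatActivity() {

    // Hypothetical consent layer: stores a free, specific, informed choice
    // collected in the app's own UI, independently of the OS permission dialog.
    private val consentManager = ConsentManager()

    private val locationPermissionLauncher =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            // The OS permission only unlocks technical access; it is not consent.
            if (granted && consentManager.hasConsent(Purpose.NAVIGATION)) {
                startCoarseLocationUpdates()
            }
        }

    fun onUserEnablesNavigation() {
        // 1. Collect purpose-specific consent first (scope, purpose, duration).
        consentManager.requestConsent(Purpose.NAVIGATION, durationDays = 30) { consented ->
            if (consented) {
                // 2. Then request the least intrusive permission for the purpose:
                //    approximate location is enough here, so ask for COARSE only.
                locationPermissionLauncher.launch(Manifest.permission.ACCESS_COARSE_LOCATION)
            }
        }
    }

    private fun startCoarseLocationUpdates() {
        // Use approximate location only, matching the declared purpose.
    }
}

// Hypothetical supporting types, included only to keep the sketch self-contained.
enum class Purpose { NAVIGATION }

class ConsentManager {
    private val consents = mutableMapOf<Purpose, Boolean>()

    fun hasConsent(purpose: Purpose): Boolean = consents[purpose] == true

    fun requestConsent(purpose: Purpose, durationDays: Int, callback: (Boolean) -> Unit) {
        // A real implementation would show a dedicated consent dialog and persist
        // the choice together with its expiry (durationDays); here we simply
        // record a positive answer to keep the example short.
        consents[purpose] = true
        callback(true)
    }
}
```

The OS permission merely unlocks technical access; the legal basis, its scope and its duration live in the separate consent layer, which mirrors the regulator’s point that a permission prompt alone cannot carry GDPR consent.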

Stay up to date! Sign up to receive our fortnightly digest via email.

Non-material damages for US data transfers

The CJEU orders the European Commission to pay damages to a visitor to its ‘Conference on the Future of Europe’ website due to the transfer of personal data to the US without appropriate safeguards. In 2021 and 2022, a German citizen complained that the Commission violated his right to personal data protection when he used the Commission’s EU Login authentication service and chose to sign in with his Facebook account.

His data, including his IP address and information about his browser and terminal, were transferred to recipients in the US (Meta, Amazon Web Services and CloudFront). According to the JD Supra law blog, while the sum is small, it is the first time an EU court has acknowledged that people can be awarded damages for illicit data transfers without demonstrating significant loss, paving the way for future claims, including class actions.

More legal updates

“Maximum two complaints per month”: The NOYB privacy advocacy group explains another case, in which the CJEU slammed the Austrian data protection authority for discontinuing proceedings against companies. In one example, the authority capped the number of complaints that data subjects can file at two per month. The CJEU has now made it clear: as long as complaints are not abusive, all users have the right to have any GDPR violation remedied by the regulator. NOYB also looked at the EU-wide problem of data protection authorities’ inactivity – statistically, many cases wait up to several years for a decision (instead of the established 6 months).

Canada updates: According to an IAPP analysis, the proposed federal privacy law reforms and AI regulation contained in Bill C-27 are in serious jeopardy. Prime Minister Justin Trudeau’s recent resignation has paralysed Parliamentary business. As the country awaits a national election, C-27’s approval in the Senate is delayed. The proposals include enacting the Digital Charter Implementation Act, the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act. 

India updates: The government has released a draft of the Digital Personal Data Protection Rules (legal text available in English) under the Digital Personal Data Protection Act, 2023, and is currently seeking public feedback and comments, the cms-lawnow.com law blog reports. Key rules include: consent obligations, including for children’s data, security safeguards, data breach notification, retention periods, information obligations, data transfers abroad, impact assessments and audits, and the exercise of data subject rights.

Electronic patient records

On January 15 the “electronic patient record” (ePA) will start with a pilot phase in Hamburg, Franconia and parts of North Rhine-Westphalia in Germany. After the successful completion of the introductory phase, the nationwide rollout is planned for February 15 at the earliest. Use of the ePA was already possible on a voluntary basis. However, from January 15 the Digital Act (DigiG) stipulates that health insurance companies will create an ePA for all patients who have not explicitly objected.

Insured persons should therefore now check whether they want to use it or whether they object to its use, completely or partially, with an opt-out. The objection can be made at any time, and the health insurance companies must subsequently delete files that have already been created. The ePA brings advantages – it facilitates the exchange of medical documents, avoids duplicate examinations and makes it easier for patients to control which data they release to whom. However, there is currently also criticism, particularly regarding data security (IT experts uncovered security flaws in the ePA at the Chaos Communication Congress at the end of 2024).

Work agreements and data processing

DLA Piper’s legal blog looks at a CJEU case in which an employer in Germany had initially concluded a temporary agreement with the works council on the use of the software ‘Workday’. It provided, inter alia, that specifically identified employee data could be transferred to a server of the parent company in the US. An employee brought a legal action for access to this information, for the deletion of data concerning him, and for compensation. On this occasion, the CJEU ruled that if employers and works councils agree on more specific rules in a work agreement regarding the processing of employees’ data, these must take into account general data protection principles, including the lawfulness of processing. Furthermore, such a work agreement is open to judicial scrutiny. Businesses should therefore assess whether other legal bases are applicable.

More official guidance

UK online safety: On 16 December, Ofcom brought into effect new UK online safety regulations. Digital platforms, especially bigger and riskier ones (social media firms, search engines, messaging, gaming, dating apps, and file-sharing sites), now have three months to complete illegal harms risk assessments and apply the necessary safety measures (from a list of more than 40 safeguards). Among other things, this will include reporting and complaints duties, better moderation, easier reporting, built-in safety tests, and protections for children. The Act also enables Ofcom to make a provider use (or in some cases develop) a specific technology to tackle child abuse or illicit content on its sites and apps.

AI and consumer harm: America’s FTC has gathered the latest casework on what companies need to consider when developing, maintaining, using, and deploying an AI-based product.

Video surveillance on a large scale

Depending on the scope and purpose, video surveillance can be divided into three scales: narrow, medium and large-scale video surveillance, explains the Latvian regulator. Large-scale video surveillance means that the processing is carried out over a significant area and presents high risks for the processing of personal data at regional, national or transnational levels. The larger the area monitored and the more people visiting it, the higher the risk of data misuse.

If an organisation conducts video surveillance of several separate areas, their total area should be taken into account to determine whether video surveillance is taking place on a large scale. When conducting video surveillance in publicly accessible but less populated or visited areas, the thresholds for the size of the area and the duration of data retention may be higher before it qualifies as large-scale. However, if video surveillance involves the processing of biometric data for the unique identification of a person, then it is considered processing of special categories of data.
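As a very small illustration of the aggregation point above, a compliance check might sum the separately monitored areas before applying whatever large-scale criterion is relevant. The Kotlin sketch below is hypothetical, and the threshold is left as a parameter precisely because the applicable figures come from the regulator’s guidance, not from code:

```kotlin
// Minimal sketch: separate monitored areas are assessed by their total size.
// The threshold value is deliberately a parameter, since the applicable figure
// depends on the regulator's guidance and the other criteria it sets out.

data class MonitoredZone(val name: String, val areaSquareMetres: Double)

fun exceedsAreaThreshold(zones: List<MonitoredZone>, thresholdSquareMetres: Double): Boolean {
    // Separate areas are not assessed individually but by their combined size.
    val totalArea = zones.sumOf { it.areaSquareMetres }
    return totalArea >= thresholdSquareMetres
}

fun main() {
    val zones = listOf(
        MonitoredZone("Car park", 4_000.0),
        MonitoredZone("Lobby", 600.0),
        MonitoredZone("Warehouse yard", 7_500.0),
    )
    // Hypothetical threshold, purely for illustration.
    println(exceedsAreaThreshold(zones, thresholdSquareMetres = 10_000.0)) // prints: true
}
```

Area is only one criterion; the retention period, the number of visitors and the nature of the data (eg, biometrics) would still need to be assessed alongside it.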

Privacy in the art market

An analysis in The Art Newspaper notes that access to historic sales records is becoming more restricted due to increased confidentiality periods at auction houses.

In the EU and the UK, privacy rights are protected through contract, common law and data protection regulations. The identity of buyers and sellers is thus protected in several ways, and auction houses are now restricted from disclosing it without the client’s consent. However, the degree to which such data privacy measures can be used to restrict access is still unclear, as the GDPR does not prescribe how long confidentiality clauses can last.

More enforcement decisions

Genetic and health data breach: The Estonian data protection inspectorate imposed an 85,000 euro fine in connection with an incident that occurred at the end of 2023, in which the Asper Biogene OÜ system was attacked and approximately 100,000 files containing people’s data, including genetic and health data, were obtained. The decision can still be appealed by the company. Asper Biogene OÜ is primarily engaged in testing for hereditary diseases, developing genetic tests and providing healthcare services, thereby processing health data extensively.

Frontex case: The EDPS issued a warning to Frontex for a breach of data protection rules. The breach involved Frontex systematically sharing the personal data of suspects in transnational criminal cases with Europol without assessing whether the sharing was necessary. Such sharing can have serious consequences for individuals, who could be wrongly linked to criminal activities in Europe. Frontex stopped the transfer of personal data to Europol shortly after the inquiry and now assesses all information individually before sharing it with the agency. 

Facial recognition: The FTC meanwhile finalised an order against IntelliVision Technologies due to false claims that its AI-powered facial recognition software was free of gender or racial bias. The FTC alleged that IntelliVision lacked evidence that its software had one of the highest accuracy rates on the market and performed with zero gender or racial bias.

The complaint also alleged that IntelliVision did not train its facial recognition software on millions of faces, as it claimed, nor did it have adequate support for its claims that its anti-spoofing technology ensures the system can’t be fooled by a photo or video image.

Data security

DORA is enforceable now: The Digital Operational Resilience Act (DORA) is an EU regulation that entered into force on 16 January 2023 and applies as of 17 January 2025. DORA harmonises the rules relating to operational resilience for the financial sector, applying to 20 different types of financial entities and ICT third-party service providers. It covers areas of compliance such as:

  • ICT risk management, 
  • ICT third-party risk management, 
  • Digital operational resilience testing, 
  • ICT-related incidents, 
  • Information sharing on cyber threats, and 
  • Oversight of critical third-party providers.

For resources on implementing and delegated acts, policies and guides, click here.

Security updates: Privacy International meanwhile reminds us that the CrowdStrike incident (a malformed update) in 2024 had major implications for governments and businesses across the world. Among other things, it emphasises the importance of security updates, including auto-updates, which are incredibly important to keep our devices running properly and safely. What is needed is for auto-updates to be properly tested before being rolled out. Moreover, too often we see companies bundling security and feature updates together, meaning that users cannot install one without the other. That’s a problem, especially if a weaker system for testing feature updates pollutes the process for security updates, or if users are prevented from having the latest security updates installed because they don’t want the features or their device does not support the feature updates.

Big Tech

US vulnerabilities: Outgoing President Joe Biden has just signed an executive order to address US vulnerabilities following cyber attacks (by China, Russia, Iran and ransomware criminals) that cost the country billions, the Guardian reports. Among its most notable elements is a mandate for government agencies to install end-to-end encryption for email and video communications, as well as new standards for AI-powered cyber defence systems and quantum computing protections.

The order also requires federal agencies to purchase only internet-connected devices with a “cyber trust mark” from 2027, essentially leveraging government procurement authority to encourage manufacturers to tighten security standards for items such as baby monitors and home security systems.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation
