Weekly digest 4 – 10 July 2022: DSA and DMA adopted, setting standards for digital service providers in the EU

TechGDPR’s review of international data-related stories from press and analytical reports.

Legal processes: DSA and DMA, China’s data exporters, ransom payments, CASPs

Last week, the European Parliament adopted the new Digital Services Act (DSA) and Digital Markets Act (DMA), following a deal reached between Parliament and Council. The two bills aim to address the societal and economic effects of the tech industry by setting clear standards for how tech companies operate and provide services in the EU, in line with the EU’s fundamental rights and values. The DSA sets clear obligations for digital service providers, such as social media or marketplaces, to tackle the spread of illegal content, online disinformation and other societal risks. These requirements are proportionate to the size of platforms and the risks they pose to society. The new obligations include:

  • New measures to counter illegal content online and obligations for platforms to react quickly, while respecting fundamental rights, including the freedom of expression and data protection.
  • Strengthened traceability and checks on traders in online marketplaces to ensure products and services are safe, including efforts to perform random checks on whether illegal content resurfaces.
  • Increased transparency and accountability of platforms, for example by providing clear information on content moderation or the use of algorithms for recommending content (so-called recommender systems); users will be able to challenge content moderation decisions.
  • Bans on misleading practices and certain types of targeted advertising, such as those targeting children and ads based on sensitive data. So-called “dark patterns” and misleading practices aimed at manipulating users’ choices will also be prohibited.
  • Very large online platforms and search engines (with 45 million or more monthly users), which present the highest risk, will have to comply with stricter obligations enforced by the Commission, including mitigating systemic risks and undergoing independent audits. They will also have to facilitate access to their data and algorithms for authorities and vetted researchers.

At the same time, the DMA sets obligations for large online platforms acting as “gatekeepers” (platforms whose dominant online position makes them hard for consumers to avoid) on the digital market, to ensure a fairer business environment and more services for consumers. To prevent unfair business practices, those designated as gatekeepers will have to:

  • allow third parties to interoperate with their own services, meaning that smaller platforms will be able to request that dominant messaging platforms enable their users to exchange messages, send voice messages or files across messaging apps. This will give users greater choice and avoid the so-called “lock-in” effect where they are restricted to one app or platform;
  • allow business users to access the data they generate in the gatekeeper’s platform, to promote their own offers and conclude contracts with their customers outside the gatekeeper’s platforms.

Gatekeepers can no longer:

  • Rank their own services or products more favourably (self-preferencing) than those of third parties on their platforms;
  • Prevent users from easily un-installing any pre-loaded software or apps, or using third-party applications and app stores;
  • Process users’ personal data for targeted advertising, unless consent is explicitly granted.

Once formally adopted by the Council in July (DMA) and September (DSA), both acts will be published in the EU Official Journal and enter into force twenty days after publication. They will start to apply over 2023–2024.

Meanwhile, China’s cyberspace regulator (CAC) clarified that rules requiring data exports to undergo security reviews will take effect from 1 September, the first time it has given a start date for a new regulatory framework that will affect hundreds, if not thousands, of Chinese companies, Reuters reports. The measures, according to Data Guidance’s report, set out the cases in which a data exporter must submit a data export security assessment to the CAC, through the provincial cybersecurity and informatisation department, where:

  • the data processor provides important data overseas;
  • the data processor is a critical information infrastructure operator and the data processor processes the personal information of more than 1 million people;
  • the data processor has processed the personal information of 100,000 people, or the sensitive personal information of 10,000 people, since 1 January of the previous year; or
  • other situations required to declare data export security assessments as provided by the CAC.

The data export security assessment combines prior assessment with continuous supervision, and risk self-assessment with security assessment. In addition, the measures outline that a data processor’s pre-assessment should focus on, among other things, the responsibilities and obligations that overseas recipients are subject to; the risk of data being tampered with, destroyed or leaked; and whether contracts related to the data export fully stipulate the responsibilities and obligations of data security protection. The full legal text (in Chinese) is available here.
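The notification triggers listed above can be sketched as a simple decision function. This is a hypothetical illustration, not an official CAC tool; all field names are assumptions, and the reading of critical infrastructure operators and processors of more than 1 million people’s personal information as two separate trigger categories follows common translations of the measures.

```python
# Hypothetical sketch of the CAC filing triggers described above.
# All dictionary keys are illustrative assumptions, not official terms.

def must_file_assessment(exporter: dict) -> bool:
    """Return True if the data exporter must submit a security assessment."""
    if exporter.get("exports_important_data"):
        return True
    # CIIOs, and processors of the personal information of >1 million people,
    # commonly read as two separate trigger categories
    if exporter.get("is_ciio") or exporter.get("pi_subjects", 0) > 1_000_000:
        return True
    # cumulative exports since 1 January of the previous year
    if exporter.get("pi_exported", 0) >= 100_000:
        return True
    if exporter.get("sensitive_pi_exported", 0) >= 10_000:
        return True
    return False
```

A small exporter that has sent the personal information of 50,000 people and the sensitive information of 5,000 people abroad would, under this sketch, fall below all thresholds.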

The UK National Cyber Security Centre (NCSC) and Information Commissioner’s Office (ICO) say it is incorrect for organisations to assume that paying a ransom is a) the right thing to do, meaning they do not need to engage with the ICO as a regulator, or b) something they will benefit from by way of reduced enforcement. In a joint statement, the two bodies therefore urge solicitors not to advise clients to pay ransomware demands should they fall victim to a cyber-attack. Paying ransoms to release locked data does not reduce the risk to individuals, is not an obligation under data protection law, and is not considered a reasonable step to safeguard data.

The European Parliament and Council negotiators also reached a provisional deal on a new bill aiming to ensure that crypto transfers (of assets such as bitcoin and electronic money tokens) can always be traced and suspicious transactions blocked. The legislation is part of the new EU anti-money laundering package and will be aligned with the Markets in Crypto-assets rules (MiCA). The agreement extends the so-called “travel rule”, which already exists in traditional finance, to cover transfers in crypto assets. This rule requires that:

  • Information on the source of the asset and its beneficiary travels with the transaction and is stored on both sides of the transfer. 
  • Crypto-asset service providers (CASPs) will be obliged to provide this information to competent authorities if an investigation is conducted into money laundering and terrorist financing.
  • There are no minimum thresholds or exemptions for low-value transfers, as had originally been proposed. Regarding the protection of personal data, such as the name and address required by the travel rule, negotiators agreed that such data should not be sent if there is no guarantee that privacy will be upheld at the receiving end.
  • Before making the crypto-assets available to beneficiaries, providers will have to verify that the source of the asset is not subject to restrictive measures or sanctions, and there are no risks of money laundering or terrorism financing.

The rules would also cover transactions from so-called un-hosted wallets (crypto-asset wallet addresses in the custody of a private user) when they interact with hosted wallets managed by CASPs. If a customer sends or receives more than 1,000 euros to or from their own un-hosted wallet, the CASP will need to verify whether the un-hosted wallet is effectively owned or controlled by that customer. The rules do not apply to person-to-person transfers conducted without a provider, such as on bitcoin trading platforms, or to transfers among providers acting on their own behalf.
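The CASP-side obligations described above can be illustrated with a minimal sketch. This is not the legal text and not any provider’s implementation; the function name and fields are assumptions chosen for illustration.

```python
# Illustrative sketch of the CASP checks described above: travel-rule data
# accompanies every transfer (no minimum threshold), un-hosted wallet
# transfers over 1,000 euros trigger ownership verification, and sanctioned
# sources are withheld. All names are assumptions for illustration.

def required_actions(transfer: dict) -> list[str]:
    """Return the compliance actions a CASP would need for one transfer."""
    actions = ["attach and store originator/beneficiary information"]
    if transfer.get("counterparty") == "unhosted" and transfer.get("amount_eur", 0) > 1000:
        actions.append("verify the un-hosted wallet is owned/controlled by the customer")
    if transfer.get("sanctions_hit"):
        actions.append("withhold assets: source subject to restrictive measures")
    return actions
```

For example, a 1,500-euro transfer from a customer’s own un-hosted wallet would, under this sketch, require both the travel-rule record and the ownership check.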

Official guidance: employee location data, insurance applications, local authorities, commercial interest vs. consent

The Finnish data protection ombudsman asked service providers in the public sector for a report on the use of the location data function in computers used by employees in the municipal sector. The background to the report was a data security breach notification filed by a hospital district after settings allowing the collection of location data were found switched on in employees’ Windows 10 workstations and remote-work laptops, although there was no intention to collect the data. As a result, the regulator found that:

  • The hospital district did not have a need required by law for processing employees’ location data.
  • The hospital district did not appropriately review what data it intended to collect. 
  • Since the employees’ location data were unnecessary for the employer and collected unintentionally, these data should not have been processed. In order to ensure data protection by default, the hospital district should have reviewed the basic settings of the system and noticed that the location function was switched on before deploying the workstations. 
  • Since the location function was switched on, employees’ personal data were delivered to Microsoft as well.

The regulator ordered the erasure of any historical data, location logs and other personal data created during use of the location data function. 

The Finnish ombudsman has also investigated the procedures of insurance companies when they request the health information of insurance applicants and insured persons from health care providers in order to determine the insurance company’s liability. Deficiencies were found, especially in the appropriate demarcation of the information requested from the health care provider and in the legality of the processing. The insurance companies justified the processing of policy applicants’ health data by reference to a provision of data protection law under which an insurance institution can process a client’s or claimant’s health data where necessary to determine the institution’s liability.

The regulator states that the provision in question only applies to the processing of the data of the insured and the claimant. Insurance companies cannot, on that basis, process an insurance applicant’s health information or request personal information from the health care provider during the application phase, because the contract has not yet been concluded. Health data can be processed under certain conditions if the person has given valid consent. However, this requires that the person be told precisely what information is collected about them and for what purposes it is used. Asking for consent in a general way, without detailing the information and the purposes of use, therefore does not meet the requirements of the data protection regulation.

The French data protection regulator CNIL published a guide on the obligations and responsibilities of local authorities with regard to data protection, drawing on a study conducted at the end of 2021. Focusing on communities of fewer than 3,500 inhabitants, which represent 91% of municipalities in France, the study aimed to understand digital usage and identify risks, obstacles and data needs. It appeared that the majority of respondents are not aware of the legal framework in force, with the exception of the GDPR. The provisions relating to competences and responsibilities in the field of digital security are little known, or not known at all, to local elected officials and territorial agents, who consider cybersecurity regulations particularly complex.

The purpose of the guide is to inform local elected officials and territorial agents about the obligations related to: a) the protection of personal data; b) the implementation of local teleservices; and c) the hosting of health data. It also recalls the different types of legal liability to which local authorities and their public institutions are exposed in the event of cyberattacks and related damage: administrative, civil and criminal liability.

The European Commission says that the Dutch data protection authority AP is hindering free enterprise in the EU by interpreting privacy legislation too strictly. The legal battle refers to the dispute between the AP and the streaming service VoetbalTV. The service broadcast video of amateur matches via the internet for, among others, players, trainers and fans. More than 150 clubs used it, until the AP imposed a fine of 575,000 euros on the service in 2019. VoetbalTV then went bankrupt.

According to the AP, the company’s profit motive could never constitute a ‘legitimate interest’ for broadcasting the images without the individual consent of players and the public. According to Brussels, the Dutch supervisory authority did not strike the right balance between the right to data protection on the one hand and the freedom to conduct a business on the other. In 2020, a Dutch court reportedly ruled that VoetbalTV did not have to pay the fine, as personal data may sometimes also be processed when there is only a commercial interest. The AP appealed against that decision.

Investigations and enforcement actions: website security, data protection requests, employment certificate, cookies, account deletion, health data

As part of one of its priority themes, “the cybersecurity of the French web”, the CNIL carried out a series of online checks of twenty-one websites of French public sector bodies (municipalities, university hospitals, ministries, etc.) and of the private sector (e-commerce platforms, IT solution providers, etc.). The verifications focused mainly on technical and organisational flaws:

  • unsecured access (HTTP) to websites: many actors implemented obsolete versions of the TLS protocol meant to secure data in transit, and used non-compliant certificates and cryptographic suites for exchanges with the servers of the audited sites;
  • lack of devices to trace abnormal connections to servers;
  • use of insufficiently robust passwords, and renewal procedures that do not sufficiently secure their transmission and retention.

The bodies put on notice have three months to take the measures needed to ensure an appropriate level of security.
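The kinds of checks listed above lend themselves to automation. Below is a minimal sketch (not the CNIL’s actual tooling) that flags the three categories of flaw from a site’s observed configuration; the field names and the 12-character password benchmark are illustrative assumptions, while the obsolete protocol list follows the deprecation of SSLv3 and TLS 1.0/1.1.

```python
# Minimal sketch of automated checks for the flaws described above:
# plain-HTTP access, obsolete TLS versions, missing connection tracing,
# and weak password policy. Field names are illustrative assumptions.

OBSOLETE_TLS = {"SSLv3", "TLSv1", "TLSv1.1"}  # deprecated protocol versions

def audit_site(site: dict) -> list[str]:
    """Return a list of findings for one site's observed configuration."""
    findings = []
    if site.get("scheme") == "http":
        findings.append("unsecured access: site reachable over plain HTTP")
    if site.get("tls_version") in OBSOLETE_TLS:
        findings.append(f"obsolete protocol: {site['tls_version']}")
    if not site.get("connection_logging", False):
        findings.append("no tracing of abnormal server connections")
    if site.get("min_password_length", 0) < 12:  # assumed benchmark
        findings.append("password policy: minimum length below 12 characters")
    return findings

example = {"scheme": "http", "tls_version": "TLSv1.1",
           "min_password_length": 8, "connection_logging": False}
for finding in audit_site(example):
    print(finding)
```

A site served over HTTPS with TLS 1.3, connection logging and a strong password policy would produce no findings under this sketch.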

The Finnish company Otavamedia was penalised for shortcomings in the implementation of data protection rights. Between 2018 and 2021, eleven cases concerning Otavamedia were brought to the office of the data protection commissioner. Among other things, the complainants had not received answers to their requests or inquiries regarding data protection rights. According to the report provided by Otavamedia, some of the data protection requests had not been actioned due to a technical problem with e-mail handling during a change of digital service providers. While the error persisted, messages arriving in the mailbox reserved for data protection matters were not forwarded to customer service staff. The situation was discovered only after the data protection authority’s request for clarification.

Otavamedia should have taken care to test the mailbox, as it is the main electronic contact channel for data subjects in data protection matters. In addition, data subjects could make requests to Otavamedia regarding their own information using a printable form, which required the person’s signature for identification purposes. The regulator considers that with this method Otavamedia collected an unnecessarily large amount of data for identification: Otavamedia does not process signature data in other contexts, so it was not possible, for example, to compare signatures with previously held information.

In the first half of 2022, the Czech office for personal data protection UOOU monitored compliance with the GDPR in connection with the processing of cookies by various operators of web portals and pages, based on both complaints received and its monitoring plan. Among the main shortcomings detected by the regulator are:

  • Use of non-technical cookies without consent.
  • A disproportionately long period of validity of cookies in relation to their purpose.
  • No option to refuse non-technical cookies in the first layer of the cookie bar.
  • Wrong categorisation of cookies.
  • Absence of information about specific cookies used.
  • Unequal visibility of the consent and refusal buttons for non-technical cookies.
  • Information about cookies in a foreign language.
  • The cookie bar makes it difficult or impossible to read the website.
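Two of the shortcomings above, non-technical cookies set without consent and disproportionately long cookie lifetimes, can be checked mechanically. The sketch below is hypothetical, not the UOOU’s methodology; the cookie names and the roughly 13-month lifetime cap (a benchmark some EU regulators have used) are assumptions for illustration.

```python
# Hypothetical compliance check for two cookie shortcomings described above.
# TECHNICAL lists assumed strictly-necessary cookies; the ~13-month cap
# is an assumed benchmark, not a figure from the UOOU's findings.

MAX_LIFETIME = 13 * 30 * 24 * 3600  # ~13 months, in seconds
TECHNICAL = {"session_id", "csrf_token"}

def check_cookies(cookies: list[dict], consent_given: bool) -> list[str]:
    """Flag non-technical cookies set without consent and excessive lifetimes."""
    issues = []
    for c in cookies:
        if c["name"] not in TECHNICAL and not consent_given:
            issues.append(f"{c['name']}: non-technical cookie set without consent")
        if c.get("max_age", 0) > MAX_LIFETIME:
            issues.append(f"{c['name']}: lifetime exceeds ~13 months")
    return issues
```

For instance, an advertising cookie with a two-year lifetime set before any consent would be flagged twice, while a short-lived session cookie would pass.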

The Polish supervisory authority UODO was notified of potential irregularities related to the processing of personal data by a manufacturing company (Esselmann Technika Pojazdowa). The company made a deliberate decision not to notify the supervisory authority of a breach involving an important document of one of its employees, despite letters addressed to it indicating a possible risk to the rights or freedoms of the persons concerned. In the course of the regulator’s explanatory proceedings, the loss of a document from the personal file of a company employee – an employment certificate – was revealed. The certificate of employment contains a lot of important information about the person, including:

  • the period(s) of employment;
  • the procedure and legal basis for the termination or expiry of the employment relationship;
  • parental and child care leave taken;
  • information on the amount of remuneration and qualifications obtained – at the employee’s request;
  • information on enforcement seizure of remuneration.

Taking the above into account, the Polish regulator imposed a fine of approximately 3,500 euros.

The Irish data protection authority DPC published its recent decision concerning Twitter International Company. In 2019, the complainant alleged that, following the suspension of their Twitter account, Twitter failed to comply with an erasure request they had submitted to it within the statutory timeframe. Further, the complainant alleged that Twitter had requested a copy of their photographic ID in order to action their request without a legal basis to do so. Finally, the complainant alleged that Twitter had retained their personal data following their erasure request without a legal basis to do so.

While the complaint was lodged directly with the DPC by an individual residing in the UK, the DPC considered that the data processing operations complained of could have a substantial effect on data subjects, and that the processing meets the definition of cross-border processing. As a result, the DPC ordered Twitter, pursuant to Article 58 of the GDPR, to revise its internal policies and procedures for handling erasure requests, to ensure that data subjects are no longer required to provide a copy of photographic ID when making erasure requests unless it can demonstrate a legal basis for doing so.

Data relating to health enjoys enhanced protection and, subject to the exceptions provided for by law, its dissemination is prohibited. Administrative transparency cannot violate people’s privacy. For these reasons, the Italian privacy regulator ‘Garante’ fined the Roma local health authority 46,000 euros. It had published in clear text on its website the names and health data of all the individuals who had submitted civic access requests in 2017 and 2018. In most cases, the documents concerned the health records of the persons concerned, including medical records, disability assessments, tests and technical reports. The first serious violation detected by the Authority, which took action ex officio, was therefore the dissemination of data on the health of the persons concerned, information relating to both their physical and mental state, including the provision of health care services.

Data security: cybersecurity threat landscape

The European Union Agency for Cybersecurity (ENISA) has published simple steps for mapping the cybersecurity threat landscape (CTL). The methodology aims to promote consistent and transparent threat intelligence sharing across the EU (including, but not limited to, public bodies, policy makers, cybersecurity experts, industry, vendors, solution providers and SMEs). The framework is based on the different elements considered when performing a cybersecurity threat landscape analysis; it therefore includes the identification and definition of the process, the methods and tools used, as well as the stakeholders involved. Building on the existing modus operandi, the methodology provides directions on the following:

  • defining components and contents of each of the different types of CTL;
  • assessing the target audience for each type of CTL to be performed;
  • how data is collected from sources;
  • how data is analysed;
  • how results are to be disseminated;
  • how feedback is to be collected and analysed.

The methodology consists of six main steps, each with an associated feedback loop: direction, collection, processing, analysis and production, dissemination, and feedback. You can download the full methodology guide here.

Big Tech: Apple’s new lockdown mode, Chinese CCTV in UK

Apple’s latest iOS 16 security tool can defend against state-sponsored cyberattacks on your iPhone, cnet.com reports. In short, the new Lockdown Mode increases security on iOS 16, iPadOS 16 and macOS Ventura by limiting certain functions that may be vulnerable to attack:

  • Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
  • Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
  • Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
  • Wired connections with a computer or accessory are blocked while the iPhone is locked.
  • Configuration profiles cannot be installed, and the device cannot enrol into mobile device management (MDM), while Lockdown Mode is turned on.

Meanwhile, a cross-party group of UK MPs has called for a ban on two Chinese surveillance camera brands widely used in Britain, according to Yahoo News. The AI-enabled cameras are capable of facial detection, gender recognition and behavioural analysis, and offer advanced features such as identifying fights or whether someone is wearing a face mask. The two brands, Hikvision and Dahua, are widely used by government bodies in the UK: by 73% of councils, 57% of secondary schools in England, and six out of ten NHS Trusts. Reportedly, Hikvision and Dahua are now banned from trading in the US over security concerns and evidence of their widespread use in so-called “re-education” camps in China. The MPs’ call for action also includes “an independent national review of the scale, capabilities, ethics and rights impact of modern CCTV in the UK”.

Do you need support on data protection, privacy or GDPR? TechGDPR can help.

Request your free consultation

