Engity’s European Data Protection Update Q2/2023
A European overview of the major updates in data protection and privacy in the second quarter of 2023.
The data protection landscape stayed interesting in the second quarter of 2023.
Most headlines were – no surprises here – snatched by everything AI-related. The first lawsuits are being filed (in the US) over the web-wide data scraping AI firms have performed to feed their training sets. At the same time, G7 countries are contemplating how to approach the data protection challenges posed by generative AI. As the technology is evolving fast, there is a risk of either over- or undershooting with regulation, or of choosing the wrong approach to begin with. This may be a reason why the European Commissioner for Competition, Margrethe Vestager, is calling for a voluntary artificial intelligence code of conduct to be adopted throughout the industry – which, to us, sounds like a terrible idea, as the industry would be policing itself without a proper set of incentives to do so.
There was, however, also plenty of activity in the legislative sector, and members of the EU Parliament raised serious doubts about the feasibility of the proposed Privacy Shield 2.0 – casting dark shadows over EU-US data transfers.
More on all this in our report.
Members of the EU-Parliament vote against EU-US data transfers
Members of the European Parliament (MEPs) have adopted a – non-binding – resolution expressing their opposition to the European Commission granting an adequacy decision to the United States, which would deem its level of personal data protection equivalent to that of the EU. While acknowledging that the EU-US Data Privacy Framework (Privacy Shield 2.0) represents an improvement over previous frameworks, MEPs argue that it lacks sufficient safeguards. They highlight concerns about bulk collection of personal data, the absence of independent authorization for such collection, and unclear rules on data retention.
The resolution also criticizes the Data Protection Review Court (DPRC) established by the framework, stating that its decisions would be kept secret, infringing on citizens' right to access and rectify their data. MEPs express skepticism about the court's independence, as judges can be dismissed by the U.S. President and decisions can be overruled.
MEPs stress the need for a lawsuit-proof regime that provides legal certainty and can withstand challenges in court. Previous data transfer frameworks between the EU and the US have been invalidated by the Court of Justice of the European Union, and the MEPs fear that the new Privacy Shield 2.0 does not sufficiently address these concerns.
We at Engity agree.
Political Agreement on EU AI-Act reached in EU-Parliament
As reported in our last quarterly report, the draft EU AI-Act faced heavy scrutiny in the EU institutions, not least because of the rapid advances recently seen in AI development. At least within the EU Parliament, an agreement on the proposed legislation has now been reached.
The act, in the version agreed upon by the Parliament, defines prohibited practices that are deemed to pose an unacceptable risk, such as
- the use of emotion recognition in certain situations, or
- purposeful manipulation.
Furthermore, the act classifies AI applications by risk and stipulates certain safeguards to be implemented depending on the risk classification.
EU-Commission designates "Very Large Online Platforms" under Digital Services Act
The EU-Digital Services Act is a European Union regulation that aims, among other things, to create liability and security rules for digital platforms, services, and products. Some of its more comprehensive rules only apply to certain types of service providers with a particularly large number of users.
The Commission designated 17 "Very Large Online Platforms" (VLOPs) and two "Very Large Online Search Engines" (VLOSEs). These platforms now have four months to comply with the requirements of the Digital Services Act and are subject to increased supervision. Among other things, they will have to carry out risk assessments, store data on ads served to their users, and give researchers access to data.
Industry Privacy Initiatives and Failures
Meta introduces parental controls across all its apps
Social media and data protection have a rather uneasy relationship. It is often unclear – and not traceable – who accesses data in which manner. Meta, owner of Facebook, Instagram, and quite a few more apps, has introduced parental controls to add at least some protection for minors.
While parents cannot read their kids' messages, they are at least informed about privacy settings and contact lists. More features will be needed, and Meta seems committed to providing them.
Google makes the next privacy sandbox available
The days of cookies, at least of the third-party kind, may finally be numbered. Google – owner of many web services and the browser "Chrome" – made a new iteration of its privacy sandbox available for testing.
The privacy sandbox is aimed at reducing cross-site tracking of users, in particular by restricting the use of certain cookies and comparable technologies. Whether this will really be a win for the cause of privacy or if one surveillance technology will just be replaced by another one, remains to be seen.
Administrative Data Protection Initiatives
Model contractual clauses for controller-to-controller transfers proposed
The Council of Europe proposed a draft of a transfer tool for controller-to-controller transfers between member states of the Convention 108+.
The Convention 108+ was one of the first international instruments aiming to ensure global data protection. It requires member states to shape their domestic legal frameworks in a way that honors the guidelines of the convention.
The model clauses use ideas similar to the ones found e.g. in comparable GDPR tools, contributing to a convergence of global privacy standards.
UK’s NCSC offers toolbox for managing cyber security risks
The National Cyber Security Centre of the UK has updated its toolbox of guidance and help for cyber security practitioners. The set not only contains risk assessment methods but also offers advice on how to approach the tasks and proposes best practices.
Danish DPA offers guidance on CCTV surveillance
Datatilsynet, the Danish Data Protection watchdog, offers helpful guidance on the use of surveillance cameras (CCTV) by private actors.
The challenge is that CCTV surveillance is a rather severe intrusion into the privacy of the respective data subjects (read: all of us). At the same time, it is a very useful tool, and there is a very legitimate interest in its use. Bringing those conflicting interests into alignment is a rather challenging task, so the help offered by Datatilsynet is very welcome.
Court Decisions and Rulings
AI-powered CCTV cameras approved for Paris Olympics
France’s constitutional court approved the use of AI-controlled CCTV cameras during the Summer Olympics taking place in Paris in 2024.
The court's argument is that the surveillance will, ultimately, be controlled by humans. The court ordered numerous safeguards regarding the deployment of the new technologies, such as the separation of databases and short retention periods.
Privacy advocates see this as a slippery slope towards full AI policing of events and, ultimately, the public square.
Dutch Data Protection Authority probes ChatGPT
AP, the Dutch Data Protection Authority, wants to know more about the handling of personal data by OpenAI's ChatGPT. It asked the company to provide more information, in particular about how personal data is used in training sets.
Most AI firms are rather opaque about the details of their data processing, and it is poorly understood what exactly happens. Without transparency, data subjects cannot enforce their rights, nor can regulators take appropriate action – so asking for clarification is both needed and welcome.
AP's initiative is part of a broader action by European regulators. International privacy watchdogs, such as Japan's Personal Information Protection Commission, are taking similar action. At the same time, in the US – where the legal system is often more aggressive – the first class actions have been filed to probe the data scraping performed by AI firms.
Other AI-related actions and initiatives
Too many actions to name them all were taken in connection with AI, and with OpenAI and ChatGPT in particular. Just to mention a few:
- Italy's Garante is still looking into ChatGPT, even though the first ban of the service in Italy has been lifted;
- Spain's Agencia Española de Protección de Datos is looking into ChatGPT's compliance with the GDPR;
- The European Data Protection Board (EDPB) has created a task force to coordinate the exchange of information regarding possible actions of national privacy watchdogs concerning OpenAI.
Microsoft expects administrative fine of 425 million USD in Ireland
Microsoft reported that it has received a draft decision from the Irish Data Protection Commission (DPC) regarding targeting practices at LinkedIn, a social network owned by Microsoft. The draft outlines a fine of 425 million USD.
The details of the case are opaque, as the draft is not public. Nevertheless, Microsoft is already considering all available legal options – as is to be expected from the notoriously litigious firm.
Avast fined 13.7 million Euros in the Czech Republic
We have already reported on illegal data processing activities at Avast, where the company collected and sold private browsing data of its customers without notifying them or asking for consent. The Czech data protection authority, Úřad pro ochranu osobních údajů, has now fined Avast 13.7 million Euros.
Avast's actions were particularly distasteful as the company operates as a cyber security firm and used to be a rather trusted brand. After selling GPS data and information about the sexual orientation of its customers, those days of trust are surely over.
German bank fined for automated decision-making
The Berlin Commissioner for Data Protection and Freedom of Information handed an administrative fine to an – unnamed – bank. While the amount in question is modest at "only" 300,000 Euros, the case sheds light on an often-overlooked problem: automated decision-making.
An automated decision is a decision made solely by an IT system based on algorithms, without human intervention. The General Data Protection Regulation (GDPR) imposes specific transparency obligations for such cases. Personal data must be processed in a way that is comprehensible to the individuals concerned. Individuals have the right to an explanation of the decision made after an assessment. If individuals request information from the data controllers, they must provide meaningful information about the logic involved in the automated decision.
The bank failed to provide sufficient and transparent information but is determined to change its practices and make them privacy-compliant.
Data transfers to Google illegal in Finland
Finland's Office of the Data Protection Ombudsman ordered the Finnish Meteorological Institute to cease transferring personal data to the US, namely to Google. The institute was using the services of Google Analytics and Google's reCAPTCHA without a sufficient legal basis. Such a basis, the regulator held, cannot be the EU-US Privacy Shield, as that was invalidated by the Court of Justice of the European Union in its Schrems II ruling.
While no fine was handed out, the decision is telling: Finland now joins the – many – EU countries (see Q3/22 for Denmark, Q1 for Austria & France) that have looked into this particular kind of data transfer. And all have found it lacking.
Meta fined 1.2 billion Euros for EU-US data transfers
On a similar topic – but on a very different scale – Meta was fined 1.2 billion Euros for unlawful intra-company EU-US data transfers. Furthermore, Meta was ordered to suspend further transfers.
Not surprisingly, Meta will appeal the decision. This makes sense insofar as the US and EU are in advanced discussions regarding a Privacy Shield 2.0, which may make such transfers legal once again. Thus, Meta may be playing for time.
However, whether such a new transfer mechanism will hold up under the scrutiny of the courts remains to be seen. Many data privacy advocates doubt it, as the legal situation has fundamentally not changed much, and a new transfer tool may be subject to the same underlying issues that brought down the last two tools: Safe Harbor and Privacy Shield 1.0. This view seems to be shared by the European Parliament (see our report under "Legislative Action").
We at Engity hold that infrastructure meant to last a decade or more should not be built on the shaky foundation of a possible Privacy Shield 2.0, which may last only a few years – if it comes at all.