Engity's EU Data Protection Update Q1/2023

EU GDPR Bits and Bytes in Data Stream

The first quarter of 2023 may easily have been one of the most exciting quarters in privacy and cyber security – ever.

Table of Contents:

  • News on AI, ChatGPT and OpenAI
  • Update on Legislative Action: Privacy Shield 2.0, Data Act & EU Digital Identity
  • Industry Privacy Initiatives and Failures
  • Administrative Data Protection Initiatives
  • Court Decisions and Rulings
  • Data Breaches & Police Actions
  • Enforcements

From a Data Protection Standpoint, AI, ChatGPT, and OpenAI Sparked Controversial Discussions

The hottest topic of the quarter was, without a doubt, the rise of AI, which seems to be everywhere all at once, with everybody doing everything. Not surprisingly, the privacy implications are huge: Microsoft has already integrated its AI system into Bing, making it even easier to collect user data and predict user behavior right at the source. At the same time, the way data are aggregated by AI tools is still poorly understood. Such concerns already led to the first ban of ChatGPT, one prominent AI system, in Italy.

Despite calls by well-known researchers and tech figures for a suspension of the AI rollout, there will be no way to put the AI genie back into the bottle, and such an approach seems flawed to us anyway. The EU and the US have agreed on a joint AI research roadmap, and the incentives for the private sector are just too strong to be ignored. The better approach therefore seems to be to embrace and gently nudge AI, as CNIL, France's data protection authority, seems to plan with its new AI division.

The ramifications of AI on legislation are yet to be explored. Amidst the process of finalizing the AI Act, some voices within the EU are already calling for an amendment to the existing scope of the undertaking. Seen in the context of Brussels' aspirations to be a global setter of legislative standards and a recent rapprochement of lawmakers on both sides of the Atlantic, those developments could easily lead to a rather quick – and maybe "quick and dirty" – legal answer to ChatGPT et al.

Italy's Garante blocks ChatGPT data collection; group calls to stop new releases

The Italian data watchdog Garante imposed an "immediate temporary limitation" on the processing of personal data of Italian data subjects by OpenAI, the developer of the AI platform ChatGPT. Garante is concerned about the complete lack of information about the data processing on the platform and also could not find any legal basis for the processing whatsoever. What little information there was could not be matched to the facts. Or, more bluntly: it seems incorrect.

Garante's decision is the first widely discussed regulatory action concerning AI. On closer inspection, however, the Italian decision is less AI-specific than it seems and relies on tried and tested principles of data protection. Which, of course, is encouraging insofar as it shows that those principles can be applied in a technology-agnostic manner.

Europol report warns against criminal uses of generative AI

Even for people with tech knowledge and threat awareness, it is sometimes very difficult to distinguish legitimate emails and online conversations from fraud and criminal activity. And with the rise of AI tools such as ChatGPT, it may become even more difficult, as Europol warns.

The main concerns, Europol says, are that, on the one hand, the new AI tools are very convincing in mimicking human interaction, but, what's more, they can gather background information, create context, and even recreate the communication style of specific people. Thus it may be incredibly hard even for trained and situationally aware users to distinguish real people from predators.

Phishing, and specifically spear phishing, will flourish, and it remains to be seen to what extent tech companies and law enforcement can keep up with these developments. Judging by past performance: we may face challenging times.

ChatGPT user data exposed

OpenAI, the vendor of ChatGPT, faced a data breach. According to available reports, there was only a relatively short time window in which the data were visible, yet the nature of the data was sensitive: payment details. Access to names, addresses, and email addresses was also possible in some cases.

All in all, however, such a data breach is rather embarrassing for a company that is far ahead in technology. To be blunter: Can one really trust an AI company that cannot even properly run its own business?

Legislative Action: Privacy Shield 2.0, Data Act & EU Digital Identity

Not everything in Q1 was AI, though. We at Engity see data protection not as a roadblock to innovation but as a nudge towards a wholesome way of innovating. Therefore, we are pleased to report that there were also plenty of initiatives from both the public and the private sector to provide insights, tools, and applicable to-dos.

In terms of legislation in the first quarter of 2023, the headlines were mostly ruled by the ongoing rapprochement between the EU and the US regarding a possible Privacy Shield 2.0.

At the same time, there were equally interesting but less discussed developments in the field of an EU-wide digital identity, with the EU trying to establish itself as the essential rule-maker in terms of technology.
Last but not least, the rise of ChatGPT, Midjourney et al. makes the EU re-think its almost ratified AI Act.

Agreement on Data Act reached among EU member states

As a regulatory superpower (see next story), the EU also wants to lead in all fields of digital regulation. To this end, the EU member states reached an agreement on the proposed Data Act.
The basic idea of the Data Act is to give users and businesses better rights and control over their data. This is to be achieved by

  • Giving users of all kinds of devices such as smart home appliances or industrial machinery access to their data,
  • Preventing abuse of contractual imbalances,
  • Giving the public sector access to private data in emergency situations, and
  • Giving customers wide-reaching rights to switch between processors and services.

EU seeks to be recognized as standard setter in digital regulation

The "Brussels Effect" is not just an aspiration but a reality: the standard-setting effect of the European Union as one of the largest single markets governed by one unified legal framework, with which everybody wanting to do business in the Union needs to be compatible. One example is the GDPR, which, in law or contract, needs to be replicated if organizations want to transfer data to and from the Union.

In an attempt to formalize its status as a regulatory superpower, the EU now wants to influence the United Nation's Global Digital Compact to more or less make EU rules a worldwide standard. That initiative will make proposals regarding standards for legislation and rule-setting worldwide. The EU is actively trying to incorporate its ideas about AI, data protection, and human rights into such proposals.

EU Digital Identity is getting closer

EU institutions are finally starting negotiations on the European Digital Identity, based on the "European Digital Identity Wallet Architecture and Reference Framework" issued by the EU Commission.

The Interoperable European Digital Identity (EUDI) is essentially a wallet that allows EU citizens to identify themselves and sign documents electronically, but also to choose which of the documents stored in the wallet to share with authorities, banks, or companies, and to do so securely. The wallet also acts as a log, so it is able to track every interaction.

For this system to work in practice, many public sector administrative processes need to be digitized. The EU Commission has set itself the goal of enabling 8 out of 10 citizens to use the Digital Identity by 2030.
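The selective-disclosure idea at the heart of the wallet – present only the attributes you choose, while the verifier can still check them against a signed credential – can be illustrated with a toy sketch. Note that this is our own simplified illustration, not the actual ARF protocol (which builds on standardized credential formats and real signatures); all names and data in it are hypothetical.

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Create a salted hash commitment for one attribute value."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return salt, digest

# Issuer: commit to each attribute. Only the digests go into the
# (in reality: signed) credential; the salts stay in the holder's wallet.
attributes = {"name": "Jane Doe", "birth_date": "1990-05-01", "nationality": "AT"}
wallet = {k: commit(v) for k, v in attributes.items()}
credential = {k: digest for k, (salt, digest) in wallet.items()}

# Holder: disclose only the nationality, keeping name and birth date private.
disclosed = {"nationality": (attributes["nationality"], wallet["nationality"][0])}

def verify(credential: dict, disclosed: dict) -> bool:
    """Verifier: recompute each digest and compare it to the credential."""
    return all(
        hashlib.sha256((salt + value).encode()).hexdigest() == credential[attr]
        for attr, (value, salt) in disclosed.items()
    )

print(verify(credential, disclosed))  # True
```

The point of the salts is that a verifier cannot brute-force the undisclosed attributes from their digests; only the attribute–salt pairs the holder actively shares can be checked.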

ChatGPT may trigger update on EU's AI-Act

The meteoric rise of generative AI tools such as ChatGPT, which in its newest iteration moves rather fast, may break not just long-held beliefs about what AI can and cannot do, but also EU legislation.

The EU was more or less done with consultations regarding the AI Act, but ChatGPT re-opens the debate. As the new AI tools are dual use – they can be used to further the development of mankind or to do malign work – sensible but not stifling regulation is needed. It may therefore not be helpful to simply label such technologies as "High-Risk" and thereby more or less forbid work on them, as the current draft of the AI Act may stipulate. Instead, a sensible approach might be to manage risk and make the inner workings of the AI models more transparent.

Whether that is feasible, both in a legislative and in a technological sense, remains to be seen.

Industry Privacy Initiatives and Failures

The first quarter of 2023 saw some interesting private initiatives to further data protection, some by amending existing frameworks, others by way of self-regulations and establishing industry standards.

Meta plans Opt-Out for targeted ads – but in the EU only

Meta is to allow users to opt out of microtargeted ads and choose to be subjected only to broad-category ads – in essence: making the experience a bit less creepy. But only in the EU, as the move is a reaction to a compliance order by Ireland's DPC (Data Protection Commission), which asked Meta to dial down data processing for "personalized services and behavioral advertising".

In essence, the discussion is about the interpretation of the legal grounds for data processing under Art. 6 GDPR, and in particular about whether the ground of "performance of a contract" is open to subjective interpretation by the party relying on it or has objective components to be reviewed by authorities and courts.

CSA starts Privacy Working Group

The Connectivity Standards Alliance (CSA) created a privacy working group to address data protection concerns regarding smart devices. And it is about time, as most users do not have even a hazy idea of which data are being collected by which of their smart appliances, where they are being transferred to, and in what manner they are being processed.

The idea of the working group is to develop a master data privacy specification and to establish standards to provide information about how that data is used in a digestible manner.

As members of the CSA are mostly big tech - Apple, Google, Amazon, Samsung and the like – it remains to be seen how far these efforts will go.

DAA launches joint approach to privacy controls and consent management

The Digital Advertising Alliance (DAA) and other similar industry self-regulatory organizations launched a joint approach to privacy controls and user consent management for websites and mobile apps. The idea is to give clear interface guidelines and technical specifications on how to design consent management platforms. Platforms adopting the service can show their compliance by using a specifically designed icon.

As always with such initiatives in general, and DAA ones in particular, it remains to be seen how far they go and whether they manage to actually achieve something worthwhile for data protection or just serve as a fig leaf.

Administrative Data Protection Initiatives

As we at Engity like to focus on the constructive side of data protection and privacy, we are pleased to report that there were also numerous data protection initiatives by regulators and public bodies providing guidance and helpful tools.

ENISA launches cyber security tool for small and medium sized enterprises

The European Union Agency for Cybersecurity (ENISA) released an evaluation tool for SMEs aimed at evaluating and enhancing their cyber security. The tool is designed to provide a personalized action plan that focuses on people, technology, and processes.


Datatilsynet gives guidance to identify cyberattacks

Norway's DPA Datatilsynet published a guide helping businesses to identify cyber threats, giving practical advice on how to react and whom to notify. In six short but concise chapters, this is a helpful tool.

European Data Protection Board gives guidelines for data transfers

International data transfers can be a complicated affair if they are to be done – as we hope – in a legal and compliant manner. Therefore, the European Data Protection Board's (EDPB) guidance on such transfers comes in handy.

In particular there are three guidelines:

  • A clarification of which transfers actually qualify as international ones.
  • A guideline on the transfer tool.
  • A guideline giving practical recommendations on how to avoid deceptive design patterns on social media platforms.

Court Decisions and Rulings

The German Federal Constitutional Court (BVerfG) decided a case regarding the use of software featuring predictive algorithms by the police of several German federal states. The provider of the software was Palantir – notorious for working with or being backed by agencies such as the CIA and the FBI.

The plaintiffs argued that the software "facilitates predictive policing by using data to create profiles of suspects before any crime has been committed." The BVerfG saw merit in that argument and forbade use of the software as unconstitutional.

Data Breaches & Police Actions

To round things out, there was the typical flurry of data breaches, investigations, and administrative fines. While this all looks like business as usual, a closer inspection shows some unsettling trends: there is no sign of data breaches slowing down. On the contrary, they seem to gain not only traction but breadth, affecting the personal information of ever larger numbers of data subjects. In one case, the data of more or less every citizen of a country was stolen. Not a particularly large country, but still concerning. And even that only makes second place, as social networks these days can have a virtual population many times the size of, say, Austria. Thus, if such networks are attacked, the scale of the data breach may be, for lack of a better word, grand.

Servers of Ransomware group seized by FBI and EU partners

In a coordinated effort, the FBI and partners in the Netherlands and Germany seized servers belonging to the ransomware group "Hive". The group became notorious for targeting healthcare providers during the height of the Covid-19 pandemic. It is estimated that more than 100 million US$ in ransoms were paid: a rather large-scale and highly profitable operation.

Hive was one of the "market leaders" of the ransomware-as-a-service model, selling what were basically subscriptions to malware and infrastructure to interested parties that then did the real dirty work.

The law enforcement effort disrupted multiple ongoing attacks and thus helped to prevent further damage.

780k Dutch railway passengers affected by data breach

780,000 customers of NS, the Dutch national railway, may have been affected by a data breach in which a third party gained access to personal data including e-mail addresses, telephone numbers, and names.

On first look, such a breach may not seem overly dangerous. Yet when such data are combined with AI supported profiling and targeting (see our respective story above), a data breach with such breadth can lead to a significant increase in successful dragnet phishing.

Pretty much every citizen of Austria affected by data theft

Austria is not particularly large, yet it has Kaiserschmarrn and is a country. It is therefore astonishing that a data breach affecting the personal data of "virtually every Austrian citizen" makes only second place on our list.

A hacker offered for sale a dataset containing the full name, gender, complete address, and date of birth of the citizens. As it stands, the data were obtained by hacking the computers of authorities and copying the citizens' registration data.

This should serve as a stark reminder that technical data protection is a task not just for the private but also for the public sector. Even more so as similar data breaches have occurred in Italy, the Netherlands, and Colombia.

235 million (!) accounts of Twitter users exposed

When discussing sheer numbers, the exposure of Twitter accounts takes the cake.

Personal data of a whopping 235 million Twitter users were exposed and offered for sale. This time, the hack seems to have focused on e-mail addresses. In an earlier hack, however, not only e-mail addresses but also phone numbers and Twitter handles were obtained.

Twitter has become notorious for rather sloppy organizational and technical security measures and, in January 2022, went so far as to fire both of its top security officers at the same time, leaving key positions in the org chart empty.

Data breaches on social networks are especially concerning, as Twitter is used to coordinate a lot of political activity worldwide, including in jurisdictions that are not all overly friendly towards free speech. With great power comes great responsibility – Twitter does not always seem aware of that.


Enforcements

Last but not least, there was the typical enforcement action. While there were no high-profile cases in the realm of big tech, the sheer consistency of the regulatory work shows that the authorities are doing their part to make the data protection framework function in practice.

Avast fined 13.7 million Euros by Czech data protection authority

Antivirus software performs an important job on our computers. We users may, however, not always be aware of what else it is doing while it constantly runs and monitors incoming and outgoing activity on our computers.

The Czech DPA fined Avast because the company had not only collected but also sold private browsing data of such a detailed nature that individual customers could be identified. And, as was to be expected, Avast neither informed users of such data collection nor gave them information on its purposes.

Such ignorance regarding the most basic transparency principles is rather concerning in the case of a company we may entrust with the security of our computer.

CNIL fines TikTok 5 million Euros over cookie consent

French data protection watchdog CNIL fined TikTok, an app mostly used for silly dances and snooping data for the Chinese government, 5 million Euros over cookie consents using a dark pattern that "actually discouraged users from refusing cookies and encouraged them to prefer the ease of the 'accept all' button." Furthermore, users were not fully informed of the purposes of different cookies.

While TikTok is a favorite social media whipping boy, the administrative fine does not reflect a grand approach to regulating big tech but rather focusses on relatively mundane and technical aspects of running a web service. Yet those technical rules, too, are for everybody to keep. This includes the big fish.
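The core of CNIL's objection can be expressed as a simple symmetry rule: refusing cookies must not take more effort than accepting them, and the purposes of the cookies must be explained. As a purely illustrative sketch – our own hypothetical model, not any CNIL tooling or official test – one could encode such a check like this:

```python
from dataclasses import dataclass

@dataclass
class ConsentBanner:
    # Number of user interactions (clicks/taps) needed for each outcome.
    clicks_to_accept_all: int
    clicks_to_reject_all: int
    purposes_explained: bool  # are cookie purposes listed per category?

def symmetry_issues(banner: ConsentBanner) -> list[str]:
    """Flag the two problems cited in the TikTok decision: asymmetric
    effort between accept/reject and missing purpose information."""
    issues = []
    if banner.clicks_to_reject_all > banner.clicks_to_accept_all:
        issues.append("rejecting requires more effort than accepting")
    if not banner.purposes_explained:
        issues.append("cookie purposes are not fully explained")
    return issues

# A banner with 'Accept all' on the first layer but rejection buried
# two menus deep, and no per-purpose explanation:
print(symmetry_issues(ConsentBanner(1, 3, False)))
```

A compliant banner in this toy model would offer "reject all" with the same number of clicks as "accept all" and explain the purposes, yielding an empty issue list.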

Rental Scooter Company fined over collection of geolocation data

One of the things users of the ubiquitous rental scooters that crowd our cities may not necessarily be aware of is that their data are being tracked. More specifically: their geolocation data. Where users have been, and for how long. The French data protection authority CNIL found that users were not informed of such data collection and that no other grounds could justify it.

German Federal Cartel Office takes issue with Google's data processing terms

The Bundeskartellamt, Germany's Federal Cartel Office, issued an official "Statement of objections" against Google's data processing terms. While the assessment of the terms is only preliminary at this time, the findings are rather damning.

The Bundeskartellamt, due to the nature of its mission, assesses Google's data processing activities through the lens of competition law, as "Google is of paramount significance for competition across markets".

In its probe the Cartel office found that users are not given sufficient choice as to whether and to what extent they agree to this far-reaching processing of their data across the numerous Google services. The choices offered so far, if any, are, in particular, not sufficiently transparent and too general. Furthermore, it is easier to simply consent to certain data processing activities than to reject them.

In plain words: Google uses dark patterns to trick users into accepting rather predatory terms.

The actions of the Bundeskartellamt show that data protection needs a multi-pronged approach. Players with a certain influence on markets will have to be held to higher standards than smaller businesses due to the interaction of privacy regulations and competition law.