[Image: Doctor holding a tablet with a holographic data protection button and EU flag – a symbol for data protection in healthcare]

EU Cybersecurity & Data Protection Update Q2-2024

Which Cybersecurity and Data-Protection topics dominated the second quarter of 2024?

We often open our quarterly data protection and cybersecurity digest by stating that the space never seems to be boring. Never was this truer than in the last quarter.

On the one hand, we still see developments in the field of AI and discussions of how to align the principles of personal data protection with the data hunger of artificial intelligence. On the other hand, there is a large upswing in – often state-sponsored and AI-fueled – hacking and phishing activity, undermining our political system and endangering our economic one.

Against this background, new centralized projects like the European Health Data Space sound promising from a user perspective but also seem very hard to protect, as they present a single, high-value target.

General Developments

The new European Health Data Space is promising and scary

The most interesting development in data protection in the second quarter of 2024 is no doubt the agreement of EU institutions on the European Health Data Space (EHDS).

While the EHDS is not yet law, EU institutions, including the Parliament, have agreed on a general framework of what it will look like and what it will do.

The basic idea is to allow patients to access their own health-related data in electronic format across all member states. That, in turn, would enable doctors in one country to work with records stored elsewhere: a Union-wide electronic health file.

The European Health Data Space, however, also has secondary effects, as health data (in aggregated form) could be used for statistical, political, and of course scientific purposes. The Covid-19 pandemic showed how useful that could potentially be.

As nothing is as sensitive as health data, this requires very robust technical and organizational safeguards. And as many examples in this quarterly report show, there is reason to doubt such safeguards, since attackers always seem to find a way to access and steal data.

In particular, the news about the Health Data Space comes at a time when phishing and hacking are on the rise and ever more highly sensitive health data is available for sale. Large providers of healthcare software have seen successful attacks and paid ransoms in an – unsuccessful – attempt to prevent private information from leaking. We report on one such case below.

The never-ending saga of EU-US data transfers

We have voiced our concerns regarding transatlantic data transfers under the EU-US Data Privacy Framework (DPF) numerous times in our quarterly reports. The DPF, on closer inspection, looks eerily like the previous transfer mechanisms, Safe Harbor and Privacy Shield, both of which were struck down by the ECJ.

We are, of course, not alone in this assessment. In fact, many leading voices in the privacy community think that the DPF rests more on political will than on legal substance. Recently, there has been a visible uptick in articles and discussions questioning the framework’s longevity.

The main argument: while the DPF pretends to have reined in American surveillance of EU citizens’ personal data, in fact nothing has really changed. The framework is a dress-up with little real-world substance.

We at Engity agree and would not advise any business to base strategic decisions on a framework that may be built on quicksand.

Public Initiatives

CNIL publishes recommendations for AI

It is still not fully explored how privacy and AI can be aligned and what can be done to make them work together. AI systems need to be trained on large amounts of reliable, real-world data. At the same time, data, at least the personal kind, need to be protected not just against misuse but against any unauthorized use.

Talk about squaring the circle.

France’s privacy authority CNIL has given at least some “first recommendations” on how to approach the issue, in the form of seven worksheets. The recommendations themselves may not really be surprising, but they make clear that the rules apply to AI, too. In that vein, CNIL recommends getting the basics right, such as defining a clear purpose and legal basis, carrying out an impact assessment, and designing AI systems with privacy in mind in the first place.

In the sheets themselves, the topics are discussed in a less lofty and more hands-on manner. No wonder, as CNIL worked together with numerous stakeholders in the AI space.

EU Commission gives overview on the Data Act

AI is not the only legislative front. The EU has also issued the Data Act to regulate and encourage data-driven innovation while ensuring fairness among the actors in the industry.

Reviews of the act are mixed, but it will apply from 12.09.2025, leaving only a bit more than a year to prepare for the new legal framework. It is therefore laudable that the EU Commission published an explainer on the Data Act and tried – commendably, for a rather bureaucratic institution – to explain things in practical and real-world terms.

The explainer covers topics that are – quite un-ironically – interesting, such as data sharing in the context of IoT, unfair contractual terms, switching between data processing services, and unlawful access by third-country governments.

Nordic privacy watchdogs cooperate on children’s data protection

A cooperation of the Nordic data protection authorities (the Datatilsynets) looks at many pressing and current data protection issues from another angle, harmonizing approaches and enforcement. At their meeting at the end of May 2024, the specific topic was the protection of children’s data in the context of gaming and AI, with a specific look at the synchronization of administrative fines.

The results of this year’s meeting have since been published, and they apply the principles of the GDPR in a very specific way to the topics discussed, giving practical examples, clear advice, and ideas for further reading.

Some of the key recommendations are, for example, to give children control over which data are processed and for how long, and to think about the least intrusive ways to achieve the purposes of data collection in gaming. A special emphasis is placed on age verification.

The Nordic DPAs have a long history of cooperation, dating back to 1988. The underlying idea is to de-fragment supervision and create a more frictionless legal and commercial space. Exactly what the EU is all about, but with a distinctive “Nordic approach”.

Developments

The cloud is just another vector for data breaches

One of the promises of cloud computing was that it is more secure, as the data centers are operated by professionals and both hardware and software are kept up to date and well maintained.

While this is not wrong, and we at Engity are firm believers in the superiority of cloud computing (in many cases), it also needs to be said that the cloud is not immune to data breaches. In fact, almost half of all organizations using the cloud have, at some point, experienced a cloud data breach, the 2024 Thales Cloud Security Study finds.

One of the reasons is simply the ubiquity of cloud applications – many businesses use dozens of them, creating a huge attack surface. Another factor that we at Engity, as an IaaS provider, find shocking: many organizations simply use outdated or unsophisticated authentication methods.
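
What would “more sophisticated” look like? A common building block is a second factor such as a time-based one-time password (TOTP, RFC 6238), as generated by authenticator apps. Here is a minimal, hypothetical TypeScript sketch of the verification step – an illustration of ours, not code from the study:

```typescript
import { createHmac } from "node:crypto";

// Compute a 6-digit TOTP code (RFC 6238) for a shared secret key.
// The code changes every `step` seconds, so intercepted codes expire quickly.
function totp(secret: Buffer, step = 30, digits = 6): string {
  const counter = Math.floor(Date.now() / 1000 / step);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter)); // counter as 8-byte big-endian
  const hmac = createHmac("sha1", secret).update(msg).digest();
  // Dynamic truncation (RFC 4226): take 4 bytes at an offset derived from
  // the last nibble of the HMAC, clear the sign bit, reduce to `digits` digits.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}

// During login, the server compares the user's input with the expected code.
function verifyTotp(secret: Buffer, userInput: string): boolean {
  return totp(secret) === userInput;
}
```

A production implementation would also accept the adjacent time windows to tolerate clock drift and would rate-limit verification attempts.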

If you, dear reader, think this may also concern your organization: Let’s have a talk!

Identity theft numbers fall, but severity goes up

A recent report by the Identity Theft Resource Center (ITRC) has shed some light on scam and identity theft numbers.

The report finds that the number of attempted identity thefts is falling. That sounds good on the surface, but read the fine print: individual attempts to obtain credentials and other identity-related data have become much more sophisticated, turbocharged by generative AI. Add that to the already wide pool of previously stolen data, and it is little wonder that the individual identity crime has become much more severe.

It seems that we are seeing a switch from dragnet fishing to spear phishing – and that is an alarming trend.

Police in Paris use AI video surveillance

We are not the first to notice that every modern technology sooner or later seems to be used for surveillance purposes. The most recent example was set by the police in Paris, who used AI-fueled video surveillance in the context of Taylor Swift’s concerts in the French capital.

The technology does not identify individual people but rather monitors whole crowds constantly for anything unusual or potentially threatening. The idea is to detect events like terrorist attacks early.

On the downside, this means that the surveillance is seamless, unrelenting, and absolute. There is no longer such a thing as a private moment. Furthermore, people whose behavior or cultural norms deviate from the mainstream may be discriminated against.

The technology will also be used in the context of the Olympics and Paralympics.

Stolen cookies might allow hijacking of user accounts

Much discussion regarding cookies has focused on the third-party kind. A recent study by NordVPN, a service providing virtual private networks, found that first-party cookies may, in a security sense, be even more harmful.

That is because such cookies are set by web servers on user devices to identify the user after login credentials have been entered. If such a cookie is stolen while the session it belongs to is still open, the thief can simply present it to pass as the user, gaining access to potentially high-value data.

The study found billions of cookies, of which almost a quarter were still active. Some of the stolen cookies permitted access to sensitive data such as sexual orientation. A giant security hole.
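
The server-side mitigations are well established: unguessable session IDs, short lifetimes, and cookie attributes that make theft and replay harder. A minimal sketch, assuming a Node.js server using Express (the route and cookie names are our own illustration):

```typescript
import express from "express";
import { randomBytes } from "node:crypto";

const app = express();

app.post("/login", (_req, res) => {
  // ... credential check omitted; only set the cookie once it succeeds ...
  const sessionId = randomBytes(32).toString("hex"); // unguessable session ID

  res.cookie("session", sessionId, {
    httpOnly: true,         // not readable by JavaScript, blunting XSS-based theft
    secure: true,           // sent over HTTPS only, never in clear text
    sameSite: "strict",     // not attached to cross-site requests
    maxAge: 15 * 60 * 1000, // short lifetime narrows the replay window
  });
  res.sendStatus(204);
});

app.listen(3000);
```

Rotating the session ID on login and privilege changes, and invalidating sessions server-side, further limits what a stolen cookie is worth.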

For us here at Engity this shows that security does not end with authentication of the user but is a constant process.

EDPB: Meta’s “consent or pay” model offers no real choice

The European Data Protection Board (EDPB) has looked into Meta’s “consent or pay” model. This model offers data subjects the choice to either consent to the processing of their personal data or pay for the use of a service. Meta, as a so-called “very large online platform”, is required to give users such a choice under the EU Digital Markets Act.

The EDPB, however, finds that “consent or pay” is precisely not a real choice: most data subjects will not even consider using a paid version of a service. Very large online platforms, the EDPB thinks, should offer a third and better option.

To be fair towards Meta and comparable businesses: the EDPB has only rather hazy ideas of what such a “real” alternative could look like in practice.

ChatGPT’s data still not transparent and often wrong

A recent report by the European Data Protection Board (EDPB) on ChatGPT found some improvements in the accuracy of the data supplied by chatbots – but overall, they still fall short. At the same time, the fact that the data may be wrong, or at least skewed or biased, is not made transparent to the user. Everybody who has ever caught ChatGPT hallucinating can attest to that.

The investigation follows the temporary bans of ChatGPT in some EU countries, notably Italy. The current paper is a preliminary position; the findings and recommendations may change, and, of course, the tech behind AI is still rapidly evolving.

It is, however, good to see at least a competent attempt to align the principles of data privacy with the emergent tech of generative AI.

Administrative guides

Norway’s Datatilsynet publishes guidance on third country data transfers

Norway’s DPA has a tradition of offering helpful guidance on the transfer of personal data to third countries – meaning jurisdictions that have not adopted the GDPR. Such third-country transfers remain a problem: on the one hand, they are often necessary in a connected world to facilitate business transactions or simply the use of online services; on the other hand, the level of data protection varies quite significantly between jurisdictions.

The guidance had to be updated in light of the adequacy decision of the EU Commission regarding the USA under the EU-US Data Privacy Framework.

Norway’s Datatilsynet (again!) gives guidance on Codes of Conduct

Datatilsynet, with its supportive approach, published not just the guidance discussed in the previous section, but also a paper discussing Codes of Conduct. Such Codes of Conduct can be adopted by associations and other bodies representing categories of controllers or processors, Article 40 Sec. 2 GDPR. While Codes of Conduct are voluntary instruments, they can play an important role in defining privacy standards in an industry and may thus become quasi-binding. This is the reason why they may be approved and monitored by the competent supervisory authorities.

According to Norway’s data protection law, the control bodies in charge of the Codes of Conduct have to be accredited by the DPA. To achieve this, criteria such as independence, absence of conflicts of interest, mechanisms for review and redress, in-depth knowledge, established procedures and structures, transparency in complaint handling, and of course communication with the competent supervisory authority have to be met. The details are set forth in the guidance.

EDPB offers guide on data protection for small businesses

Many observers of the space lament – with some justification – that privacy and data protection regulations have become so complex that it is often hard for small and medium-sized businesses to observe or even know them all. Hence it is laudable that the European Data Protection Board (EDPB) issued a guide specifically with that target group in mind.

The guidance explains, in easy-to-follow and practical terms, the basics of data protection, but also what to do to stay compliant and how to react when something goes wrong, e.g., in a data breach. The idea of the paper is to explain not only the why but also the how, and to provide easy-to-follow checklists.

Well done, EDPB!

Cybersecurity and data breaches

Sellafield: cybersecurity nuked

There are areas where cybersecurity is nice to have. There are other areas where it is imperative. And then there are nuclear power plants. They had better be safe.

So it is a bit worrisome that the Sellafield facility, which manages large stockpiles of plutonium, pleaded guilty to all criminal charges connected to its outright disastrous management of cybersecurity and its failure to protect sensitive information.

It is unclear whether or not there were intrusions into Sellafield’s computer systems: some reports claimed that data was stolen by Russian and Chinese hackers; newer statements claim otherwise.

The defects came to light after external service providers wondered why it was so easy to access the facility’s systems.

A sentencing hearing is scheduled for Q3 – we shall report.

Russian hacker group targets TeamViewer

Platforms that serve multiple clients are an obvious attack vector for all kinds of bad actors, as one successful hack may yield a treasure trove of data. So it is little wonder that TeamViewer, a provider of remote access software, found itself the target of an attack carried out by a Russian state-sponsored actor.

TeamViewer stated that the attack affected only its internal IT system and not the TeamViewer platform or customer data. All hostile activity seems to have been detected and shut down.

If that is true, we all may have gotten off lightly. The incident shows, however, how single points of failure can endanger a whole ecosystem of organizations or even whole industries.

Cyberattacks expected during the Olympics – officials beef up cybersec

Not so much a threat to whole industries as a widely visible signal to the watching world: officials expect cyber-attacks during the Olympic Games. The organizing committee has reportedly done its homework: it has set up bug-bounty programs, hardened systems, and established monitoring procedures.

The plus side for the Olympics: at least they know the approximate time hackers will try to attack. A luxury that “normal” organizations do not enjoy.

Cyberattacks shut down numerous French municipalities

While we are in France: cyber-attacks of rather large scale took down government services in French municipalities in April. While it is still unclear whether information has been stolen, it is reported that restoring the computer systems could take months. Officials warn citizens to be alert to phishing attempts.

France is currently a “good” target for all kinds of attacks, as many of the resources of French law enforcement agencies are tied up securing the Paris Olympics. Attacking other targets thus looks akin to a real-world DDoS attack.

RansomHub sells sensitive data stemming from data breach

While the EU is establishing its Health Data Space, ransomware groups are already busy stealing – and selling – very sensitive health data.

Change Healthcare, a provider of revenue and payment cycle management software for the healthcare sector, has been hit hard by a ransomware attack. The fallout, it seems, was threefold: first, the systems were partly shut down, leaving Change Healthcare’s customers – such as pharmacies and providers – unable to process bills. Second, sensitive data were stolen, and Change Healthcare paid millions to get them back. Third, the payment had only limited success, as the patients’ data now appear to be up for sale to whomever is interested. So the situation went from bad to worse to outright nightmare.

Needless to say: the data concerned could hardly be more sensitive.

Social engineering and hacking: when a dinner comes with a backdoor

While we have discussed hacking, malware, and weak authentication systems, the most rewarding vector for compromising systems and gaining access is still the human. Enter social engineering.

And, of course, Russian state-sponsored actors are involved, as so much nefarious activity these days is tied to geopolitics.

In particular, such actors have targeted German politicians with phishing emails containing links to sophisticated malware that can install all kinds of backdoors on affected systems.

The attack was quickly detected by the German BSI (“Bundesamt für Sicherheit in der Informationstechnik”, a cybersecurity agency). A wider investigation is under way.

The emails came as dinner invitations – an obvious way to interest politicians. Social engineering made easy.

Private Initiatives

After backlash: Slack sheds light on how it uses data for AI

Many proponents of AI never tire of pointing out that all businesses sit on a treasure trove of data – all they need to do is use it. With the help of clever AI, of course. And even more so if those businesses are platforms handling tons of data from all their users.

Users, and especially businesses, are of course concerned: it is their data, after all, and they find it unfair that they pay for the use of the platform while their data is still fed to clever algorithms to earn the platform provider additional income.

This is the background of a backlash against Slack, a collaboration platform owned by Salesforce. Slack, on its privacy page, had posted a notice informing users that their information would be analyzed by AI systems and used for machine learning purposes.

Slack had to explain itself, pointing out that private message contents would not be accessed; the activity concerned mainly metadata or the use of data to summarize content for the user. This may or may not be the case; the lack of clear communication by Slack, however, is concerning – all the more so as ever more businesses warn their employees not to enter confidential information into generative AI tools.

Meta halts AI training in the EU

On a similar topic: Meta, parent company of services like Facebook and Instagram, has paused its plan to train LLMs (large language models, the current hot stuff in AI) on the data of European users.

This comes after the responsible regulator, the Irish Data Protection Commission (DPC), requested Meta to do so in order to first understand the ramifications of Meta’s activities in the field.

Meta, in a press release, expressed some disappointment, pointing out that it had already made strides toward making its activities transparent, and expressing concerns that Europe may lose competitiveness in the field of AI.

All that may, in fact, be true. Yet Meta has become synonymous with unrestrained and outright appalling data handling practices, not least since the Cambridge Analytica scandal. No wonder the public, politicians, and regulators want to know what it is doing.

Court Decisions

Advocate General at ECJ: Facebook may not use public data to target ads

Many business models of social media providers rest on the promise of being able to serve very targeted ads to specific people. More relevant ads may, so the theory goes, result in much better marketing, making them more valuable than, say, a TV ad directed at a large and inhomogeneous audience.

Austrian courts, at the behest of data privacy activist Max Schrems, have asked the ECJ to decide whether social networks such as Facebook may use publicly available data for such targeting. While the court has not yet ruled, the Advocate General has issued his opinion – most often the basis for the later verdict.

Max Schrems’s argument is that the use of public data for targeted ads means the data are used outside their original purpose. If, for example, somebody engages in a political discussion, they may not expect such data to be used in a commercial context.

The Advocate General, however, sees this as a data minimization and retention issue: data, even public data, cannot be used forever, he argues.

It remains to be seen how the court will decide.