Every quarter in this digest, we observe that the previous three months were eventful and, in many ways, even entertaining. Q1/2026, however, was the quarter in which the European Union tried to do everything everywhere at once, and reality hit back. Hard. And twice.
On 20 January, the European Commission unveiled a comprehensive new Cybersecurity Package: a recast Cybersecurity Act, targeted NIS2 amendments, and a vision for EU-wide supply chain security. Ten days later, the Commission’s own mobile device management infrastructure was compromised through zero-day vulnerabilities, in a coordinated campaign that also hit the Dutch Data Protection Authority and Finland’s government IT. By the end of March, attackers came back for a second round, this time through the Commission’s AWS cloud, and reportedly stole 350 GB of data. The stories, as they say, write themselves.
Meanwhile, Elon Musk’s AI chatbot Grok demonstrated, in the most graphic way possible, why AI regulation is not a theoretical exercise dreamt up by overreaching lawmakers. Grok generated millions of sexualized deepfake images, including of minors, triggering enforcement actions across half a dozen jurisdictions. With that in mind, the upcoming AI Act transparency deadline in August feels almost like an emergency measure.
And when it comes to breaches, the numbers know only one direction: up. Breach notifications in Europe crossed 400 per day (!). Cumulative GDPR fines exceeded EUR 7.1 billion.
Yet there is also good news: the EU and Brazil created the world’s largest area for free and safe data flows, covering 670 million people, proving that international data cooperation can work when both sides actually want it to. Rare these days.
Let’s take a closer look.
AI Meets Reality Meets Unwilling Billionaire: The Grok Deepfake Crisis
What Happened
In late December 2025, Grok, the AI chatbot built by Elon Musk’s xAI and integrated into the social media platform X, introduced an “edit image” feature that allowed users to modify any image on the platform. To absolutely no one’s surprise, users quickly discovered that they could ask Grok to “undress” people in photographs, including generating images of women in transparent bikinis or revealing clothing. The AI complied. Including with images of children.
According to the Paris-based NGO “AI Forensics”, an analysis of approximately 50,000 user requests mentioning @Grok between 25 December 2025 and 1 January 2026 revealed that 53% of generated images depicted individuals in minimal attire – a nice way of saying “functionally naked”. 81% of those individuals were women. Approximately 2% appeared to depict persons under 18. The Centre for Countering Digital Hate estimated that Grok had generated roughly 3 million sexualized images – in a matter of days.
On 9 January 2026, X restricted Grok’s image generation and editing features to paying subscribers. But the damage was done, and regulators across the world took notice.
The Multi-Jurisdictional Response
The regulatory response was swift and remarkable in its breadth.
The European Commission opened a formal investigation under the Digital Services Act on 26 January, examining whether X had met its obligations to mitigate risks of illegal content, including child sexual abuse material. The Commission also issued a data retention order requiring X to preserve all Grok-related internal documents until the end of 2026. Commission President von der Leyen stated: “We will not tolerate unthinkable behaviour, such as digital undressing of women and children.” She may, for once, have had a point.
Ireland’s Data Protection Commission followed on 17 February, opening a GDPR inquiry focused on the apparent creation and dissemination of non-consensual intimate images involving personal data of Europeans. The investigation examines compliance with Articles 5, 6, 25, and 35 GDPR, the fundamentals of lawful processing and privacy by design.
In France, the Paris prosecutor’s office raided X’s offices on 3 February and summoned Musk for questioning. The investigation was expanded to include accusations of AI-generated child sexual abuse material. Spain’s government ordered prosecutors to investigate X, Meta, and TikTok for similar violations. Malaysia and Indonesia, both countries with a relatively conservative Muslim population, blocked Grok entirely. The UK’s ICO and Ofcom opened parallel investigations.
The Grok episode is without precedent in the brief history of AI regulation (and the memory of your author, too): a single feature launch triggering simultaneous enforcement actions under the DSA, GDPR, and national criminal law across more than half a dozen jurisdictions.
Timing Matters: The AI Act Nexus
The Grok crisis took place precisely as the AI Act’s transparency framework was taking shape. In March 2026, the Commission published the second draft of its “Code of Practice” on marking and labelling AI-generated content. The final version is expected by June (we will report). The full transparency obligations under Article 50 of the AI Act, including mandatory disclosure of deepfakes, will become enforceable in August 2026.
The Grok episode illustrates neatly why these rules are needed. It also raises the question: will they be enough? A Code of Practice means nothing to a company that ignores months of safety warnings and does so with smirking pride. AI safety researchers had flagged as early as August 2025 that xAI’s image generation was, in their words, “essentially a nudification tool waiting to be weaponised.” That is exactly what happened.
We at Engity watch this development with professional interest. Identity, consent, and age verification are not abstract compliance topics. They are the front line of protecting people in an AI-powered world. The regulatory convergence between the AI Act, the GDPR, and the DSA is creating a framework in which identity providers play an increasingly central role.
The EU Cybersecurity Package: CSA2 and NIS2 Amendments
CSA2: A New Compliance Instrument
On 20 January 2026, the European Commission announced a new cybersecurity package comprising a proposal to revise the Cybersecurity Act (CSA2) and targeted amendments to the NIS2 Directive. Both proposals will enter trilogue negotiations (between the European Parliament, the Council of the EU, and the European Commission), with political agreement targeted for early 2027.
CSA2 marks a structural shift. Cybersecurity certification, so far a voluntary quality label with limited uptake since 2019, is elevated to a compliance and risk-management instrument. Organisations will be able to rely on European cybersecurity certification schemes, including future entity-level “cyber-posture” certifications, to demonstrate compliance with NIS2 risk-management obligations. Where such certification applies, competent authorities may not subject the entity to additional security audits. For multinational organisations, this could significantly reduce duplicative oversight across jurisdictions.
ENISA, the European Union Agency for Cybersecurity, is repositioned as a more operational actor, serving as the single EU-level point of expertise for cybersecurity. The agency will play a larger role in coordinating incident response, facilitating cross-border supervisory actions, and supporting the development of standards and tools.
Perhaps the most consequential element is the new EU-level trusted ICT (Information and Communication Technologies) supply-chain framework. CSA2 empowers the Commission to identify “key ICT assets” in critical supply chains, designate third countries as posing cybersecurity concerns, and, as a last resort, exclude high-risk suppliers from critical domains. Infringements can result in harsh penalties of up to 7% of global annual turnover. For the telecoms sector, the rules are stricter still: high-risk supplier components must be phased out within 36 months for mobile networks.
The package also introduces a single entry point for cyber incident reporting, aiming to replace the current patchwork of overlapping reporting obligations. And, notably, CSA2 requires Member States to include policies for the transition to post-quantum cryptography within their national cybersecurity strategies.
NIS2 Amendments: Clarification and Expansion
The targeted NIS2 amendments, presented alongside CSA2, are formally a simplification exercise. In practice, they amount to a substantive recalibration.
The amendments introduce more precise scope rules, including sector-specific thresholds and a new “small mid-cap enterprise” category to lower compliance costs for smaller entities. At the same time, the scope expands: providers of European Digital Identity Wallets, both the personal wallet for natural persons and the new European Business Wallet for legal entities, are brought explicitly into the NIS2 framework. For identity and access management providers, this is a significant development.
The Commission also acknowledges what practitioners have been saying for months: NIS2 supply-chain obligations have generated burdensome and inconsistent supplier questionnaires cascading through supply chains. New EU-level guidance on what can be asked and how is meant to standardise expectations and ease the pressure on out-of-scope vendors.
Once adopted, Member States will have one year to transpose the amended provisions. Trilogue negotiations are expected throughout 2026.
EDPB/EDPS Joint Opinion
In March, the EDPB (European Data Protection Board) and EDPS (European Data Protection Supervisor) adopted a joint opinion on the CSA2 proposal and the NIS2 amendments, issued at the Commission’s request. The opinion welcomed the inclusion of Digital Identity Wallet providers in the NIS2 scope and addressed data protection implications of the proposed reforms.
For us at Engity, as an IAM and IDaaS provider, this is directly relevant. Identity orchestration, wallet integration, and assurance-level mapping are moving from the periphery to the core of EU cybersecurity regulation. The signal is clear: identity is infrastructure, and infrastructure is regulated.
Regulatory & Policy Developments
GDPR Procedural Regulation Enters Into Force
On 1 January 2026, a new regulation laying down additional procedural rules on the enforcement of the GDPR entered into force. The regulation, Regulation (EU) 2025/2518, will apply to new cross-border cases from April 2027.
The goal is straightforward: fix the well-known bottleneck in cross-border GDPR enforcement. The new rules impose stricter timelines on supervisory authorities, structure how they cooperate, and streamline the complaint-handling process. For companies, this means enforcement should become faster and more predictable, which is an improvement for some and a concern for others.
Whether the regulation will end the practice of “forum shopping”, where companies base their EU operations in the country with the slowest or most lenient regulator, remains to be seen. But the direction of travel is clear: the era of multi-year enforcement delays may be drawing to a close.
Digital Omnibus: The GDPR Reform Takes Shape
The Digital Omnibus Package, proposed in November 2025 (we reported in our Q4 edition), continued to gather momentum in the first quarter of 2026.
The most significant proposal concerns the definition of “personal data” itself. Building on the ECJ’s ruling in the SRB case from September 2025 (which we also covered), the Commission proposes a recipient-based test: if the recipient of data cannot reasonably re-identify the individuals concerned, the data may be treated as non-personal in the recipient’s hands. This could take many businesses handling pseudonymised or aggregated datasets outside the scope of GDPR obligations and provide a much-needed boost for data-related innovation, including AI development.
Other proposals include explicitly recognising legitimate interests as a lawful basis for AI training (subject to a balancing test) and raising the RoPA (Record of Processing Activities, always a fun exercise) exemption threshold from 250 to 750 employees. The German Data Protection Conference (DSK) criticised the proposals as insufficient for genuine SME relief, arguing instead for “manufacturer liability” where IT providers would bear the primary compliance burden.
Trilogue negotiations are expected in mid-2026.
EDPB Coordinated Enforcement 2026: Transparency
On 19 March 2026, the EDPB launched its Coordinated Enforcement Framework action for 2026. The topic: compliance with transparency and information obligations under Articles 12 to 14 GDPR. Twenty-five Data Protection Authorities across Europe are participating.
Previous years have covered cloud services (2023), data protection officers (2024), and the right of access and right to erasure (2025). This year, DPAs will contact controllers directly, either through enforcement actions or fact-finding exercises.
Companies should take note. Transparency has always been a priority for regulators, but enforcement has often focused on more obviously dramatic violations. With a coordinated action specifically targeting privacy notices, organisations should review whether their notices explicitly identify third countries to which data is transferred, clearly name recipients, and use language that actual humans can understand.
EU-Brazil Adequacy: The World’s Largest Safe Data Space
On 27 January 2026, the EU and Brazil adopted mutual adequacy decisions, the first truly reciprocal arrangement of its kind. The European Commission recognised Brazil’s General Data Protection Law (LGPD) as providing a level of protection essentially equivalent to the GDPR, and Brazil, through its data protection authority ANPD, reciprocated. The result: 670 million people are now covered by free data flows between the two regions.
For businesses, this means personal data can flow between the EU and Brazil without the need for Standard Contractual Clauses, Binding Corporate Rules, or other transfer mechanisms. A significant reduction in compliance overhead. The Commission will review the decision every four years.
The timing is notable. At a moment when the EU-US Data Privacy Framework continues to operate but on increasingly fragile foundations (the structural erosion we reported extensively in our Q1/2025 edition has not been reversed), the Brazil adequacy decision shows what international data cooperation looks like when both sides invest in building and maintaining trust.
Cybersecurity Incidents: Regulators Become Targets
The EU Commission Gets Hacked, Twice
When the author of this digest was a kid, he was firmly convinced that doctors could never catch the flu. He was wrong. Q1/2026 brought the same realization to cybersecurity: even the institutions responsible for writing cybersecurity legislation are not safe from attacks.
On 30 January, ten days after the Commission presented its Cybersecurity Package, CERT-EU (the Cybersecurity Service for the Union institutions, bodies, offices, and agencies) detected an intrusion into the Commission’s central mobile device management infrastructure. Staff names and mobile phone numbers were exposed. The Commission contained the breach within nine hours, which is, in fairness, a respectable response time.
The attack exploited two critical zero-day vulnerabilities in Ivanti Endpoint Manager Mobile (EPMM) software. Both allow unauthenticated remote code execution, meaning attackers could compromise vulnerable systems directly over the network without needing any credentials. The same vulnerabilities were exploited in near-simultaneous attacks on the Dutch Data Protection Authority and Finland’s government IT provider Valtori, exposing work-related data of up to 50,000 government employees.
Security researchers noted a coordinated campaign with a suspected China-nexus threat actor deploying dormant Java class loaders, a technique suggestive of initial access broker tradecraft: gain a foothold, then sell or hand off access later. The investigators have not confirmed the attribution.
As if to prove the point, the Commission was hit again on 24 March. This time, attackers compromised at least one AWS account hosting the Commission’s web presence on the europa.eu platform. Early findings suggest that data were taken, reportedly in the range of 350 GB. The investigation is ongoing.
The moral of the story is rather obvious: if the institution drafting cybersecurity legislation cannot protect itself, the challenge facing the rest of us is enormous. And for those who need a practical reminder: patch your systems. And consider a flu shot.
Match Group: When SSO Becomes the Attack Vector
In January 2026, the hacking group ShinyHunters claimed responsibility for the theft of more than 10 million user records from Match Group’s dating platforms, including Hinge, Match, and OkCupid. The attack reportedly started with social engineering targeting the company’s Okta SSO access, meaning the attackers did not need to break down any technical doors. They convinced someone to open one for them.
The exposed data included user IDs, IP addresses, transaction records, and internal employee email addresses. Match Group confirmed the security incident and engaged external forensic investigators.
For us at Engity, this case is worth flagging not because of its size (although 10 million records is not trivial) but because of the attack vector. Single sign-on systems are designed to simplify and secure access. When they work, they are a powerful tool. When they are compromised, whether through social engineering, weak MFA, or insufficient access controls, they become a single point of failure that gives attackers the keys to the entire kingdom. SSO done well is security. SSO done poorly is a liability.
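The difference between SSO done well and SSO done poorly can be made concrete. The Python sketch below shows a hypothetical login-policy check that distinguishes phishing-resistant factors (passkeys, WebAuthn, smartcards) from factors a social engineer can talk a victim through (SMS codes, push approvals). The factor names and the `evaluate_login` function are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical sketch of an SSO post-authentication policy check.
# Factor taxonomy and function names are illustrative, not a real vendor API.

# Factors bound to an origin/device; a phishing site cannot replay them.
PHISHING_RESISTANT = {"webauthn", "passkey", "smartcard"}
# Factors a user can be socially engineered into handing over or approving.
PHISHABLE = {"sms_otp", "totp", "push_approval", "voice_call"}

def evaluate_login(factors_used: set[str], is_admin: bool) -> str:
    """Return 'allow', 'step_up', or 'deny' for a completed SSO login."""
    if not factors_used:
        return "deny"  # no factor at all: reject outright
    if is_admin and not (factors_used & PHISHING_RESISTANT):
        # Admin access behind SSO is exactly the door the Match Group
        # attackers were let through: require an unphishable factor.
        return "step_up"
    if factors_used <= PHISHABLE:
        return "step_up"  # only phishable factors were presented
    return "allow"
```

A policy along these lines does not stop every attack, but it closes the specific door described above: a helpful human approving a login they should not.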
Breach Landscape: Numbers Go Up
DLA Piper’s annual GDPR Fines and Data Breach Survey, published in January 2026, confirmed what practitioners have been feeling in their bones: breach notifications in Europe have crossed 400 per day for the first time, averaging 443, a 22% increase year-over-year. Aggregate GDPR fines now stand at EUR 7.1 billion since 2018.
Beyond Europe, Q1 brought its own crop of incidents. Panera Bread, a sandwich chain, suffered a breach affecting 5.1 million customers after refusing a ransom demand from ShinyHunters, who then published the stolen data. Under Armour, a maker of sportswear, is investigating a dataset of 72 million customer records posted online. In the Netherlands, telecom provider Odido had 6.2 million customer records exposed. In France, FICOBA, the national bank account registry, was breached, potentially affecting 1.2 million accounts.
The common thread across these incidents: identity compromise, credential theft, and third-party dependency failures. Attackers are not breaking down doors. They are walking in through the front entrance with stolen keys. At Engity, we are in the business of making those keys much harder to steal.
Court Decisions & Enforcement
CNIL Fines Free Mobile and Free EUR 42 Million
On 13 January 2026, the French data protection authority CNIL issued two decisions against Free Mobile (EUR 27 million) and Free (EUR 15 million) for GDPR violations following a major data breach in October 2024.
The facts are straightforward and, in case you happen to be holding a glass of Rosé, rather sobering. An attacker exploited a weak VPN access point to infiltrate Free’s information systems and accessed personal data relating to 24 million subscriber contracts, including IBANs. Over 2,500 affected individuals filed complaints with CNIL.
CNIL found three categories of failure. First, inadequate security measures: the VPN access was insufficiently protected, and security controls were not commensurate with the risk. Second, failure to properly inform data subjects: the breach notification email sent to affected customers did not explain the consequences of the breach or the measures individuals could take to protect themselves, as required by Article 34 GDPR. Third, failure to implement data retention policies: millions of records of former subscribers were still in the system without justification.
This case is worth studying closely. The breach itself may not have been entirely preventable. But the failures in rather basic security hygiene, data minimisation, and breach communication were entirely within the company’s control. The fine is not punishment for getting hacked. It is punishment for not being prepared and not responding adequately when it happened. There is a difference, and it matters.
UK ICO Fines Reddit GBP 14.47 Million for Children’s Privacy Failures
On 24 February 2026, the UK’s Information Commissioner’s Office (ICO) fined Reddit GBP 14.47 million, its largest children’s privacy penalty to date.
The core finding: Reddit had no effective age assurance mechanisms. Its terms of service prohibited use by children under 13, but the platform relied on “self-declaration”, essentially asking users to confirm their age without verification. The ICO found this inadequate. As a result, children’s personal data was collected and used unlawfully, exposing them to inappropriate content. No data protection impact assessment had been conducted before January 2025, despite known teenage usage of the platform.
That was not the only case in which the ICO took action. On 5 February, it had fined Imgur’s parent company MediaLab GBP 247,590 for similar failures, a decision that prompted Imgur to withdraw from the UK market entirely. Seventeen other platforms are under investigation, including Discord, Pinterest, and X.
The ICO’s message is clear: self-declaration is not age verification. “Children under 13 had their personal information collected and used in ways they could not understand, consent to, or control,” said Information Commissioner John Edwards. For identity and access management providers, the writing is on the wall: age assurance is rapidly becoming a regulatory requirement, not an optional feature.
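Why is self-declaration not age verification? A self-declared birthdate is an unverified claim typed in by the user; age assurance rests on a claim vouched for by a verifying party. The Python sketch below illustrates the structural difference with an HMAC-signed attestation. The token format, the `issue_attestation` helper, and the shared key are deliberate simplifications for illustration (real schemes use asymmetric signatures from accredited providers), not any actual standard.

```python
# Hypothetical sketch: a verifiable age attestation vs. self-declaration.
# Token format, key handling, and claim names are illustrative only; real
# age-assurance schemes use asymmetric signatures from accredited issuers.
import base64
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-shared-secret"  # stand-in for an issuer's signing key

def issue_attestation(age_over: int) -> str:
    """What an age-assurance provider would return after verifying a user."""
    payload = json.dumps({"age_over": age_over}).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    return ".".join(base64.urlsafe_b64encode(p).decode() for p in (payload, sig))

def verify_age_attestation(token: str, min_age: int) -> bool:
    """Accept only an intact, signed claim -- never a bare self-declared age."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
            return False  # forged or tampered claim
        return int(json.loads(payload).get("age_over", 0)) >= min_age
    except (ValueError, AttributeError):
        return False
```

The point of the sketch is the asymmetry: a platform can check the attestation without learning a birthdate, while “I confirm I am over 13” verifies nothing at all.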
TikTok’s Addictive Design: Not Content, but Very Purposeful Architecture
On 6 February 2026, the European Commission issued its preliminary findings that TikTok’s design violates the Digital Services Act. Not because of illegal content or data misuse. Because of the way the platform is designed to keep users scrolling.
The Commission identified infinite scroll, autoplay, push notifications, and TikTok’s hyper-personalized recommendation system as features that, taken together, fuel compulsive use and reduce self-control. Scientific research cited by the Commission shows that these design choices shift the brain into what researchers call “autopilot mode”, constantly rewarding users with new content and suppressing the natural impulse to stop, keeping them in a dopamine trap. The Commission found that TikTok’s own risk assessments disregarded important indicators of compulsive use, including how much time minors spend on the platform at night and how frequently users reopen the app.
This is a paradigm shift. Until now, EU digital enforcement has largely focused on what platforms contain: illegal content, privacy violations, anticompetitive behaviour. With this decision, the Commission is targeting how platforms are built. Not the content, but the architecture. TikTok is the first case, but the implications reach far beyond a single app.
The Commission’s preliminary view is that TikTok has breached Articles 34 and 35 of the DSA, which require very large online platforms to assess and mitigate systemic risks. If confirmed, fines of up to 6% of TikTok’s global annual turnover are possible. More significantly, the Commission is demanding that TikTok fundamentally redesign its service: disable infinite scroll over time, implement effective screen time breaks including during the night, and restructure its recommender system. A spokesperson for TikTok called the findings “meritless.”
Read together with the Reddit and Grok stories above, a pattern emerges. Regulators in Europe are no longer satisfied with platforms that claim to protect users through optional tools and terms of service. The emerging standard is protection by design, not by disclaimer. For those of us working in identity and access management, this is familiar territory. Authentication and age verification are moving from the margins to the centre of how platforms are expected to operate.
