Hey friends,
A TikTok a day...
TikTokers are leaving in droves following an expanded privacy policy. This comes at a time when TikTok announced its new venture, which pushes majority ownership of the company to US-based businesses, allowing it to avoid a ban on operating in the US. The data collection practices that made headlines include TikTok's acknowledgement that it collects “your racial or ethnic origin”, and “sexual life or sexual orientation, status as transgender or nonbinary, citizenship or immigration status, or financial information”. The reality? These same provisions existed in TikTok's archived August 2024 policy - so TikTok has been collecting this sort of data on its users for years, just like Meta.
Digital Omnibus, next stop
Legislators in the EU are locked in heated discussions around the Digital Omnibus proposal - an attempt to “simplify” the EU digital rulebook, but one that digital rights groups see instead as “deregulation”, a weakening of the rules to appease the Big Tech and adtech industries. Last week, we published our position on this Digital Omnibus proposal.
Our submission outlines practical solutions to centralise user choice properly, embed competition safeguards, build the internet's “ozone layer”, and regulate the adtech industry. The comments we shared will allow for simplification without deregulation. Check it out and share your thoughts!
Meanwhile, Aura Salla, Meta’s former EU lobbyist, has been selected to lead the European Parliament’s work on the “Digital Omnibus” within the ITRE committee.
Salla is well known for strongly supporting fewer rules for tech companies. As the lead negotiator shaping the Parliament’s position on this proposal, she could strongly influence how strict the final regulations are.
Meta: Consent-or-Pay
Speaking of Meta: on December 8, 2025, the European Commission quietly announced that, to comply with the DMA, Meta would offer new options to EU users. Contrary to the Commission's announcement, Meta has not released such an option. Instead, in its January earnings report, Meta indicated it would roll out a new model in Q1 and, notably, claimed that it and the Commission had aligned on the business model.
In the absence of an official statement from the Commission, this claim is either an attempt to reassure concerned investors or a sign that the DMA may not be meaningfully enforced. The likelihood that Meta can introduce a model that both complies with the DMA and sustains its current growth trajectory is slim.
As a reminder, in April 2025, the EC fined Meta €200 million for breaching the DMA through its Consent-or-Pay model, which was found to exploit European consumers and unfairly disadvantage competing businesses. Meta dismissed the fine as a "multi-billion-dollar tariff", making this investigation a centrepiece of US-EU trade tensions. The revised model Meta rolled out in response to these findings was also obviously non-compliant, making this third attempt (expected by the end of March) necessary.
Do not pass go
In the midst of being declared a monopoly, Google has expanded its reach by acquiring Wiz, a major cloud security company. The deal was positioned as a strategic investment in cybersecurity — but its implications go far beyond that.
When the European Commission was notified of Google’s proposed acquisition of Wiz, it faced a pivotal decision: whether competition policy would meaningfully constrain Big Tech’s expansion. Ultimately, Google was given the green light for the Wiz deal.
This marks Google’s largest acquisition to date, approved at a time when market power is under the greatest scrutiny.
Wiz currently works across many cloud providers, giving customers an unbiased, independent view of cloud security risks. With ownership transferring to Google, that dynamic becomes more complicated. A cloud security company not only prevents data leaks; it also has visibility into the weak spots in how customers store their data.
With this change, security is now Google’s central talking point. The acquisition of Wiz is only one piece of a broader shift in narrative. Increasingly, as Dr. Lex Zard put it, the message has come across as "Have you considered the security consequences of sovereignty, break-ups, and privacy enforcement?". In other words, Google is suggesting that efforts to regulate the company, break it up, or strengthen digital sovereignty could actually make Europe less secure. At the same time, it is presenting itself as the trusted provider of “security as a service” for governments.
Security rhetoric can be powerful. The real question we are facing is not whether security matters, but who controls it — and on what terms.
Pull the lever, Grok — wrong lever!
Grok, Elon Musk’s chatbot, has been restricted from public use after a horrifying series of events in which nonconsensual, highly sexualized AI-generated images of women and children flooded X’s feed — uncovered by Bloomberg News.
In response to this, Ofcom in the UK and the European Commission in the EU launched an investigation into suspected violations of their online safety rules - the Online Safety Act (OSA) in the UK and the Digital Services Act (DSA) in the EU. Investigations, at least in the EU, likely also include the use of Grok in advertising contexts, which we are closely monitoring.
In response, X announced that the public version of Grok will “no longer allow the editing of images of real people in revealing clothing, for example, putting them in bikinis or lingerie.” While Grok hasn’t been axed completely, it is now only available for folks logged in to their X account, who, yes, can still produce explicit images using the chatbot.
Because of this, the European Commission is considering a ban on AI-powered apps that undress people, known as "nudifiers". France has launched an investigation into such offenses, which allegedly includes X being in possession or distribution of CSAM. This comes at a time when the UK's Information Commissioner's Office is launching its own probe into personal data retention and processing in relation to Grok.
In a wild turn of events, despite the AI Act, child sexual abuse material and non-consensual intimate imagery were not explicitly listed alongside other harms like emotional manipulation, or assuming criminality based on a person's appearance. That's because regulators did not anticipate this type of behavior, which honestly shouldn’t need to be spelled out to be stopped. It should never have been a thing.
Chat soon!
The Check My Ads Team