January 29, 2025

EU Tech Policy in 2026: From Paperwork to Power Politics

Europe's digital rulebook enters a turbulent phase in 2026. After a year of concerted external attacks on its intricate legal frameworks, including the Digital Services Act (DSA), the Digital Markets Act (DMA), and the AI Act, European institutions now face a year of reckoning. The EU's Digital Fairness Act is expected to arrive in Q4 2026, accompanied by enhanced consumer protections. However, with enforcement of existing regulations still rolling out, strategic patience may prove wiser than adding complexity. The question is no longer whether Europe has good laws, but whether it can effectively enforce them against external pressures.

The Enforcement Reckoning

Commission enforcers signalled their intentions in 2025 by issuing financial penalties to platforms such as X for breaching content moderation rules and for its chaotic approach to systemic risks under the DSA. Other examples included Meta's and TikTok's persistent failures in transparency.

In 2026, the European Commission and national Digital Services Coordinators will intensify enforcement. Ongoing investigations will conclude, and new ones will launch, focused particularly on age assurance, verification, and risk mitigation. Substantial fines, potentially reaching 6% of global annual turnover for major violations, are possible.

The EU launches 2026 with an impressive stack of investigations using both competition law and the Digital Markets Act. The most closely watched aspect is whether the EU will mandate divestiture of Google's adtech business. Such a measure would fundamentally alter the competitive landscape, enabling alternative ad exchanges to operate without competing against an integrated incumbent. Both the EU and the US are pursuing Google adtech breakups, but the US case follows what many antitrust experts consider disappointing remedies in the earlier search monopoly trial. Genuine uncertainty exists whether the EU will hold firm or soften to align with US outcomes. Under the DMA, the Commission is also examining cloud computing services' scope and launching fresh investigations into Google's search practices affecting publishers.

Artificial Intelligence Outpaces Its Regulators

The AI Act's carefully calibrated risk categories and governance structures were designed for a world of narrow AI applications: facial recognition, credit scoring, and hiring algorithms. But 2026's dominant AI reality involves far more complex challenges.

AI-generated information manipulation campaigns at scale are becoming increasingly sophisticated. Recent investigations exposed AI chatbots disseminating Russian state media alongside climate denialist content. Meanwhile, AI companies face a fundamental profitability dilemma, with both Google and OpenAI recently introducing advertising in their AI tools.

Gender-based violence and child sexual abuse material are major regulatory focus areas, with recent investigations into X by OFCOM and ARCOM. AI chatbots have been implicated in grooming children, with cases of minors being sexually exploited by chatbots and tragic instances of minors committing suicide after chatbots encouraged self-harm. There is also the problematic way platforms are deploying AI for content moderation: in many instances, AI moderation removes content that neither breaches terms of service nor is illegal, threatening freedom of expression.

In response, the European Commission is developing the Code of Practice on Transparency of AI-Generated Content, a voluntary framework for labeling synthetic media. Expected between May and June 2026, it establishes industry standards ahead of binding AI Act obligations taking effect in August. It is a classic EU approach: voluntary frameworks first to help industry adjust, then enforceable rules.

Washington's Regulatory Counter-Offensive

American opposition to European tech regulation has moved from disapproval to active challenge, rejecting European regulatory philosophy entirely. This could escalate into World Trade Organisation (WTO) complaints challenging the DSA's territorial scope, trade retaliation targeting European exports, and diplomatic pressure on member states to dilute the impact of the DSA ahead of the 2027 review.

This pressure campaign is already having a chilling effect beyond policy circles. Research documenting information manipulation has declined significantly worldwide since Trump's 2024 election, with researchers and civil society organisations facing increased legal threats, funding pressures, and political attacks that make publishing findings on platform harms increasingly risky. Targeting researchers through visa bans and public intimidation campaigns isn't merely symbolic; it is eroding the evidence base that informs regulatory action.

Brussels confronts an unpalatable choice: capitulate to preserve broader transatlantic cooperation, or treat American resistance as confirmation that aggressive enforcement is necessary precisely because it threatens entrenched power. The decision will define not just tech policy but Europe's geopolitical positioning for the next decade.

Regulatory Influence Without Technological Independence

Legislative efforts to regulate online ecosystems have emerged in various geographies, including Brazil, India, Australia, and Canada, representing genuine regulatory influence. Yet Europe remains dependent on foreign technology infrastructure. Its critical infrastructure runs on American clouds, on devices containing chips manufactured abroad. The dominant platforms used in Europe are foreign-owned and foreign-controlled. Regulatory power without technological sovereignty creates strategic vulnerability: in any serious confrontation with Washington or Beijing, Europe's dependence becomes leverage against it. Building indigenous technological capacity would require industrial policy at a scale and ambition that European political systems have struggled to generate.

The High-Stakes Implementation

2026 will reveal whether European digital regulation can be meaningfully enforced in the face of these significant challenges. Success requires adequate resources for enforcers, political courage from European regulators and national governments, and unity among member states with divergent interests. It will also require willingness to impose penalties severe enough to genuinely alter corporate behaviour. Most fundamentally, it requires belief that Europe's regulatory model, which prioritises fundamental rights over market efficiency and democratic accountability over innovation velocity, deserves defending even when defence proves costly.

The legislative phase was the foundation. Implementation is where regulatory ambitions meet hard geopolitical and economic realities. Europe's digital future depends on whether Brussels can navigate unprecedented challenges. The next twelve months will provide crucial answers.