
Failed State, Failed Market: Europe’s Bid to Reprice Social Media Harms


Europe’s social media crackdown is less about “speech wars” than a long-overdue attempt to price the public damage created by large platforms.

“Social media has become a failed state.”

When Spain’s prime minister uttered that line, he was not just offering a provocation for headlines. He was naming a governing problem. A system that shapes public life at enormous scale now operates with weak public discipline, private rulemaking, and incentives that reward escalation faster than they reward restraint. Most coverage frames this as a fight about speech: freedom versus censorship, parents versus platforms, Europe versus Silicon Valley. That frame, I believe, is too narrow. The more interesting story is the economic one: Europe is trying, however imperfectly, to correct a market failure that has been hiding in plain sight.

The attention economy is a two-sided market. Platforms match users and advertisers. In principle, there is nothing wrong with that structure. The distortion comes from price signals that leave major costs off the balance sheet. Users “pay” with time, data, and behavioral exposure, but they do not face the full downstream consequences of engagement-maximizing design. Advertisers pay for reach and targeting, but not for the social damage produced by the same optimization logic. Those costs are pushed outward into schools, families, clinics, courts, election systems, and public administration.

That is a classic externality problem: profits are private; the harms are socialized.
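In stylized textbook terms (a sketch of the logic, not an estimate of any actual magnitude), the point can be written as a gap between private and social cost:

```latex
% Illustrative externality accounting; all symbols are generic, none are measured quantities.
% MPC = marginal private cost borne by the platform, MEC = marginal external cost
% pushed onto schools, clinics, courts and other third parties, MSC = marginal social cost.
\[
  MSC(q) \;=\; MPC(q) + MEC(q), \qquad MEC(q) > 0 .
\]
% A profit-maximizing platform expands engagement q until MR(q) = MPC(q),
% while the welfare-relevant stopping point q^{*} satisfies MR(q^{*}) = MSC(q^{*}).
% With MEC > 0 the private optimum overshoots the social one, and a corrective
% ("Pigouvian") charge of t = MEC(q^{*}) per unit is the textbook repricing device.
```

In this framing, the bans, design obligations, and liability rules discussed below are all cruder or finer substitutes for that corrective price.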

Seen that way, youth restrictions on social media are less arbitrary than they appear. A ban on under-16 access may be blunt policy, but it is an attempt to force a recalculation in a segment where informed consent is weakest and vulnerability is highest. The point is not that minors are the only affected group. It is that minors make the cost-shifting especially hard to deny. Once governments try to enforce age thresholds, they collide immediately with the real political economy of platforms: identity systems, compliance costs, data retention, liability, the redistribution of risk, and the quiet erosion of privacy. This is why “protect the children” quickly becomes a battle over market design. Age checks can reduce harm in one dimension while creating new exposure in another, especially around privacy and cybersecurity. A serious welfare approach has to hold both truths at once: unchecked engagement systems impose real social costs, and careless enforcement can build a surveillance architecture that introduces new harms of its own.

There is also a competition angle that is often left implicit: teen participation is not only a social concern; it is also part of ad inventory and future user lock-in. Measures that reduce youth access, or limit targeting around youth, can hit current revenue and also disrupt the long-horizon logic that justifies aggressive subsidization today: firms absorb losses or offer below-cost services now to secure durable behavioral lock-in and monetize users later. In practice, regulation here functions partly as competition policy by other means. It changes what data is worth, what kinds of business models remain viable, and which firms can absorb compliance burdens. That raises a hard question for regulators: are we disciplining concentration, or entrenching incumbents that can both subsidize longest and pay most for compliance? The answer depends on design. Rules that are technically sophisticated but institutionally naive can harden the dominance of the largest platforms. Rules that are clear, auditable, proportionate, and paired with interoperability and portability can open space for safer competitors.

A recurring defense from the largest firms is now geopolitical: any meaningful constraint on platform scale, data extraction, or AI deployment, they argue, risks “letting China win.” The move is strategically effective because it reframes domestic welfare regulation as national-security weakness, converting scrutiny of business models into a loyalty test. But the premise is thin. China itself regulates AI extensively, with rules on recommendation algorithms, deep synthesis, and generative AI services, alongside broad platform controls. In other words, strategic competition does not preclude regulation; it often intensifies it. In practice, this “China card” is less an argument against regulation than a bid for policy immunity: preserve market concentration, relax oversight, and socialize downstream harms in the name of competition. The irony is that democratic resilience is itself a strategic asset, and platforms that erode public trust, institutional legitimacy, and social cohesion weaken the very capacity they claim to defend.

The rhetorical strategy works in part because enforcement remains institutionally fragile. The French search of X’s office and the broader DSA-era enforcement posture[1] point to another reality: regulation is not simply a statement of values; it is a state-capacity project. National governments, if not the European Union, can pass laws quickly, but building the machinery to enforce them is slower and far more demanding. It requires technical expertise, legal precision, cross-border coordination, credible sanctions, and standards of evidence that can survive court challenge and political pressure.

Even basic fact-finding is legally and institutionally difficult. Researchers and regulators often cannot obtain standardized data on reach, amplification, or moderation outcomes without prolonged legal negotiation, and tech platforms can invoke privacy, trade-secret, and jurisdictional claims to narrow or delay disclosure. The result is an asymmetric system in which companies hold the evidence needed to evaluate social harm, while public authorities are asked to regulate with partial visibility. That gap turns enforcement into a contest over access to information before it can become a contest over substance.

This is where the analogy to financial regulation becomes useful. Finance taught us that private actors can take risks that are rational for firms and damaging for everyone else. We did not solve that through moral appeals; we tried to build prudential institutions: reporting requirements, stress tests, supervisory routines, and resolution tools. The digital sphere now shows similar features of systemic risk production. Algorithmic amplification can propagate shocks through populations at speed, reshape what people regard as credible, and strain democratic institutions that depend on shared informational ground. Europe appears to be moving, unevenly, toward a prudential logic for platforms. Not “content moderation” in the old sense, but risk supervision for socio-technical systems that produce spillovers at scale. The challenge is measurement. In finance, regulators eventually developed standard metrics: capital ratios, leverage limits, liquidity coverage. In social media governance, we still lack robust, widely accepted measures of exposure, harm prevalence, and mitigation effectiveness. Without metrics, accountability drifts toward symbolism.
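To make “metrics” concrete, here is a purely hypothetical sketch, in the spirit of the prevalence figures some platforms already self-report: estimate how many impressions per ten thousand expose users to content judged harmful under an agreed definition, from an independently labeled random sample. Every name and number below is illustrative; no regulator currently mandates data in this form.

```python
import math

def prevalence_per_10k(labeled_sample: list[bool], z: float = 1.96):
    """Estimate harmful-content prevalence per 10,000 impressions.

    labeled_sample: True/False labels for a random sample of impressions,
    where True means an independent reviewer judged the impression harmful
    under an agreed definition (the hard part is the definition, not the math).
    Returns (point estimate, lower bound, upper bound) per 10,000 impressions.
    """
    n = len(labeled_sample)
    if n == 0:
        raise ValueError("empty sample")
    p = sum(labeled_sample) / n
    # Normal-approximation 95% confidence interval on the sample proportion.
    margin = z * math.sqrt(p * (1 - p) / n)
    return p * 10_000, max(0.0, p - margin) * 10_000, (p + margin) * 10_000

# Illustrative numbers only: 120 harmful impressions in a 500,000-impression sample.
sample = [True] * 120 + [False] * (500_000 - 120)
point, low, high = prevalence_per_10k(sample)
print(f"prevalence: {point:.1f} per 10k (95% CI {low:.1f}-{high:.1f})")
```

The statistics are trivial; what would make something like this a supervisory metric rather than a press release is who draws the sample, who labels it, and whether the definition and the audit trail are open to challenge.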

That is why the current moment matters beyond any single ban. Europe is testing whether democratic states can build institutions capable of governing markets that are technically complex, politically contested, and globally integrated. It is also testing a form of digital sovereignty. For years, US-based firms captured substantial rents from Europe’s information economy, leaving many social costs to be borne by European publics and institutions. Australia has taken a parallel path, most visibly through its news media bargaining code and youth access restrictions, showing that middle powers can also force platform concessions when they align legal design with credible enforcement. Europe’s distinctive wager is scale: if it cannot dominate platform ownership, it can still shape platform rules, and given the size of its market, those rules often travel.

Critics call this censorship. Defenders call it overdue governance. Both labels miss part of the picture. What is really in play is a conflict between two economic models of the public sphere, filtered through legal traditions that are less timeless than they are often presented. In the United States, the current market-first media order is not an ancient constitutional constant. It is largely a post-1970s settlement, shaped by deregulation, concentrated political spending, and the rollback of New Deal-era public-interest constraints in broadcasting and communications. Europe followed a different path, with stronger postwar commitments to social protection and public stewardship of systemic harms, even when that has sat uneasily alongside civil-liberties claims. So the present clash is not just about content rules. It is about which political economy of speech, markets, and state capacity will govern digital life.

The key question here is not whether Europe is “tough” enough on tech. It is whether policy can internalize externalities without constructing a new surveillance regime, whether enforcement can be consistent rather than episodic, and whether rules can reduce social cost without locking in entrenched power. A credible welfare agenda would move in that direction through design obligations rather than outright access bans wherever possible; standardized risk assessment and independent auditing; and explicit cost allocation, so that firms generating measurable social harms pay into mitigation and oversight. That is less theatrical than culture-war rhetoric, but more likely to work.

If Europe succeeds, it will not be because it found the perfect line in the speech debate. It will be because it did the harder economic work: repriced harms, built supervisory capacity, and changed incentives that currently reward damage. If it fails, the reason will likely be familiar from other policy domains: weak implementation, poor measurement, and tools that solve one failure by creating another.

The phrase “failed state” may have been political shorthand. Economically, it points to something precise: a market where the returns to harm exceed the returns to prevention. The central task now is to reverse that equation.


[1] At the EU level, the Digital Services Act (DSA) has become the backbone of this enforcement turn. In late January 2026, the European Commission announced a new formal investigation of X under the DSA, focused on whether X properly assessed and mitigated risks tied to Grok’s functionalities and the dissemination of illegal content.
