
AI, Antitrust, and the Future of the Marketplace of Ideas


AI was sold as a tool to broaden the marketplace of ideas. Instead, a handful of platforms now control how truth travels, shaping what we see, starving journalism, and locking new AI rivals out of the data democracy needs to survive.

In 1919, Justice Oliver Wendell Holmes famously wrote that truth prevails when ideas compete freely. His “marketplace of ideas” metaphor has shaped our democracy ever since: when ideas circulate and compete, truth wins out.

Today, however, that marketplace is increasingly controlled by a handful of technology giants whose incentives are not necessarily aligned with ours. The marketplace of ideas has become largely algorithmic: these gatekeepers and their algorithms now decide what information is promoted or suppressed, shaping what billions see, read, and believe.

Moreover, the lifeblood of a healthy marketplace of ideas is journalism, and that journalism is under severe financial strain. Business Insider, for example, eliminated about 21% of its staff to help the publication “endure extreme traffic drops outside of [its] control.” These cuts are taking place in a profession already decimated by the internet. Employment in the U.S. newspaper industry fell 70% between 2006 and 2021, to just 104,290 people, and the number of newsroom employees more than halved, from roughly 75,000 to fewer than 30,000.

With their revenues in decline, more news outlets will likely reduce their journalism or close altogether. This trend threatens to multiply the number of “news deserts,” adding to the 200 communities in the U.S. that currently have “limited access to the sort of credible and comprehensive news and information that feeds democracy at the grassroots level.” To see why, let’s begin with the data-opolies.

From Media Barons to Data-opolies

In the 1990s, antitrust law focused on economic competition: price, output, and consumer welfare. Concerns over media concentration—where a handful of newspaper, television, and radio station owners held too much power—were left to the FCC.

That divide has collapsed in the past decade. As traditional news media gave way to the internet, new digital barons—Google and Meta—consolidated online speech and advertising. Now, with the advent of generative AI and large language models (LLMs), such as ChatGPT, Gemini, Claude, Llama, and others, we face an even deeper shift.

As my recent article, AI, Antitrust, and the Marketplace of Ideas, explores, these LLMs are not just tools for generating text or summarizing data. They are rapidly becoming key intermediaries between citizens and information, capable of shaping what people know and how they think. And, critically, their operation depends on access to search data — a domain overwhelmingly dominated by Google.

Grounding: How LLMs Depend on Search

To understand the new antitrust challenge, we must understand “grounding.”

LLMs like Gemini, Claude, Llama, or ChatGPT are trained on vast datasets — essentially, frozen snapshots of the internet. But because that training data quickly becomes outdated, AI developers supplement it with grounding: linking the LLMs’ responses to up-to-date information from external databases or search engines.
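To make the mechanics concrete, here is a minimal sketch of grounding (often implemented as retrieval-augmented generation). Everything in it is a hypothetical illustration: search_web stands in for whatever search API a developer can license, and no real provider’s interface is assumed. The point is the data flow, in which the developer fetches fresh, ranked results and injects them into the model’s prompt.

```python
# A minimal sketch of grounding, with hypothetical names throughout.
# search_web() stands in for a licensed search API -- the scarce
# input this article argues Google controls.

def search_web(query: str, num_results: int = 10) -> list[dict]:
    """Placeholder for a search-index API returning ranked snippets.
    Without licensed access to a high-quality index, this step fails."""
    raise NotImplementedError("requires access to a search index")

def grounded_prompt(user_question: str) -> str:
    """Pairs the user's question with fresh search snippets, so the
    model is not limited to its frozen training snapshot."""
    results = search_web(user_question)
    snippets = "\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']}"
        for i, r in enumerate(results)
    )
    return (
        "Answer using the sources below, and note any conflicts "
        "with your training data.\n\n"
        f"Sources:\n{snippets}\n\nQuestion: {user_question}"
    )
```

Whoever controls the index behind search_web controls what the model can “see” past its training cutoff, which is the dependency the rest of this section traces.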

Indeed, the district court in United States v. Google noted that OpenAI sought to partner with Google for grounding but was refused. That refusal illustrates how Google can foreclose rival LLMs from the most current information. The consequences are visible in practice. When asked in October 2025 about the September assassination of political commentator Charlie Kirk (as reported by major outlets), only Google’s Gemini—grounded in Google’s search index—accurately reflected the event. Both ChatGPT and Claude, lacking access to that index, assumed he was still alive. This disparity underscores how control over search grounding not only confers market power but also directly shapes the quality of an LLM’s responses, especially for long-tail and “fresh” queries about recent events. When told of its error, Claude, whose knowledge cutoff at that time was January 2025, responded,

This was a profound lesson in epistemic humility and the exact danger the blog post warned about. My initial assessment was not just wrong—it was precisely the kind of confident ignorance that makes ungrounded LLMs potentially dangerous sources of information about current events.

How This Dependency Gives Google Immense Power

Google’s search index is not just the world’s information catalog — it’s the infrastructure through which LLMs can “see” late-breaking news. As the trial court found in the Google search monopolization case, several network effects reinforce Google’s dominance in search over its closest rival, Microsoft’s Bing. Google receives nine times more search queries each day than its rivals combined, and nineteen times more on mobile. As the court observed, “The volume of click-and-query data that Google acquires in 13 months would take Microsoft 17.5 years to acquire.” In short, Google’s data and scale advantages translate into better search results, particularly for long-tail and “fresh” queries related to trending topics or recent events.

But Google does not simply control the leading search engine. It is also investing billions of dollars in AI, including its LLM, Gemini. Gemini, with built-in, automatic access to Google Search for grounding, thus holds a competitive advantage over rival LLMs, such as Claude or ChatGPT, that rely on intermittent or limited live-search connections, or on Brave or Bing, when commenting on recent news events. As a result, Google’s incentives change: rather than provide grounding to rival LLMs on fair, reasonable, and non-discriminatory terms, Google has the incentive to prefer its own LLM with its superior proprietary search results. Google can also degrade the search results it supplies to rival LLMs, limit their number of search queries per day, or raise its rivals’ costs by charging higher fees for grounding. Or, as with OpenAI’s ChatGPT, Google can simply refuse to provide grounding at all. As Claude reflected, its exchange with me about Charlie Kirk

demonstrates why the “just use search when needed” response isn’t sufficient. Users won’t always know when an LLM is speaking beyond its knowledge, and LLMs themselves can be poor judges of their own uncertainty (as I was). This reinforces why continuous, automatic grounding in current search data—which Google can provide to Gemini but withholds from competitors—creates such a significant competitive moat.

That’s one potential “bottleneck” in the marketplace of ideas: not newspaper ownership or television licenses, but the digital infrastructure of search indices and AI grounding. Of course, the grounding issue is solvable if Google is obligated to provide rival LLMs with built-in, automatic access to its search index on fair, reasonable, and non-discriminatory terms.

The Publisher’s Hobson’s Choice

This power imbalance extends beyond LLM developers to news publishers.

Publishers rely on Google for both traffic to their websites and advertising revenue. Historically, the bargain was straightforward: let Google crawl your website in exchange for visibility in search results. But when Google launched its “AI Overviews,” which are AI-generated summaries that answer user queries directly, Google’s incentives changed. It went from directing users to the most relevant data sources to keeping users longer within its ecosystem by answering the query itself (using the journalism and work product of others). Users are increasingly getting answers without clicking through to the underlying article, which significantly reduces the publishers’ traffic and ad (and potential subscription) revenue.

Google offers publishers the following Hobson’s choice. Either

· delist from Google’s search index and receive zero traffic from Google search (becoming effectively invisible on the web to many prospective customers), thereby immediately losing traffic, advertising, and subscription revenue, or

· allow Google to use the publisher’s content to train its AI, including AI Overviews, causing many users to stay within Google’s ecosystem and thereby significantly reducing traffic to the publisher’s website, along with its advertising and subscription revenue.

Google is leveraging its dominance in search to enhance its AI capabilities, including AI Overviews and its LLM, Gemini. Unlike other AI companies that pay publishers for data to train their LLMs, Google doesn’t have to. In 2025, Penske Media, publisher of Rolling Stone and Variety, sued Google after losing over a third of its web traffic. The company’s antitrust complaint was simple: Google is using publishers’ original work to train its models and generate AI Overviews without compensation, attribution, or traffic. Google’s spokesman disputed the harm alleged in Penske Media’s lawsuit: “With AI Overviews, people find search more helpful and use it more, creating new opportunities for content to be discovered.” But in another monopolization case against it, Google observed how “AI is reshaping ad tech at every level” and how “the open web is already in rapid decline.” Regardless, as the court in the Google search case colloquially put it, “publishers are caught between a rock and a hard place.”

Why This Matters for Democracy

While the financial harm to publishers is significant, the democratic consequences are even more troubling.

When a dominant ecosystem controls the distribution of information, it can subtly shape what people see and believe. For example, as the European Commission found, most people do not click on search results beyond the first page. If Google demotes a disfavored publisher to the second or third page of its results, that publisher becomes essentially invisible to most users.

Moreover, the data that Google provides to LLMs for grounding will be skewed. LLMs (including Google’s Gemini) use the first page of search results. So, if an LLM relies on Google for grounding, it will not necessarily incorporate a disfavored voice buried on the second or third page of Google’s results. Users relying on the LLM, in turn, will not see that disfavored viewpoint.
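A toy illustration of that filtering, using entirely invented data: if grounding ingests only the first page of results, a viewpoint ranked eleventh simply never reaches the model.

```python
# Toy illustration with invented data: grounding that ingests only
# the first page of results silently drops lower-ranked viewpoints.

ranked_results = [
    {"rank": i, "source": f"outlet_{i}", "viewpoint": "favored"}
    for i in range(1, 11)
] + [
    {"rank": 11, "source": "disfavored_outlet", "viewpoint": "dissenting"}
]

FIRST_PAGE = 10  # assumed page size, for illustration only

grounding_set = [r for r in ranked_results if r["rank"] <= FIRST_PAGE]
print({r["viewpoint"] for r in grounding_set})
# {'favored'} -- the dissenting view never enters the prompt,
# so it cannot appear in the answer users see.
```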

Granted, an LLM can offer diverse viewpoints if those viewpoints are reflected in its older training data. An ungrounded LLM, for example, can critique older Supreme Court cases, but it cannot offer the same breadth of viewpoints on a recent Supreme Court decision. Moreover, LLMs relying on the leading search engine will not necessarily capture a disfavored viewpoint if the search engine (or its algorithm) deems the content low quality or irrelevant. Thus, biases in the leading search engine can skew the marketplace of ideas by ranking favored viewpoints higher on the first page, which shapes both the news we turn to and the LLMs’ responses.

Why Another TikTok Will Not Restore the Marketplace of Ideas

Even worse, the online marketplace of ideas is shaped by the dominant ecosystems’ financial incentives. Behavioral advertising, the business model underpinning the ecosystems of Google, Meta, and other leading social media platforms, rewards outrage and polarization. To attract and engage us, these platforms’ algorithms often promote toxic, divisive content. We are partly to blame: we are collectively more likely to seek out toxic, false stories, reward them with attention, and reshare them with others.

The more time we spend on and interact with these online services (whether Instagram or YouTube), the more opportunities they have to collect even more personal data about our “actions, behaviors, and preferences, including details as minute as what you clicked on with your mouse.” As the FTC found, the large social media companies relied upon “complex algorithmic and machine learning models that looked at, weighed, or ranked a large number of data points, sometimes called ‘signals,’ that were intended to boost User Engagement and keep users on the platforms.” Greater engagement also translates into more opportunities for monetization through behavioral advertising.

AI quickens this flywheel effect: personal data trains the AI model, which profiles individuals to predict what will attract and sustain their attention (e.g., retention rate) and which advertisements will drive behavior (e.g., ad click-through rate). Through continual experimentation, the model learns what does and does not work, refining its ability to predict and manipulate user behavior and generating even more advertising revenue, which the company can use to further improve its AI.
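In schematic form, the compounding loop might look like the toy simulation below. Every number is invented; the sketch only illustrates the feedback structure, in which more behavioral data improves prediction, better prediction lifts engagement, and engagement generates still more data.

```python
import random

# Toy simulation of the engagement flywheel; all parameters invented.
random.seed(0)

data = 1_000.0     # accumulated behavioral signals (arbitrary units)
accuracy = 0.50    # model's skill at predicting what holds attention

for cycle in range(5):
    # More data -> better predictions of (and influence over) behavior.
    accuracy = min(0.95, accuracy + 0.02 * (data / 1_000.0) ** 0.5)

    # Better predictions -> users stay longer (noisy, as in practice).
    engagement = accuracy * random.uniform(0.9, 1.1)

    # More engagement -> more ad revenue and more behavioral data.
    data += 2_000.0 * engagement

    print(f"cycle {cycle}: accuracy={accuracy:.2f}, data={data:,.0f}")
```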

This marketplace does not reward truth; instead, it rewards content that sustains our attention and manipulates our behavior more effectively. This dynamic produces an attention economy that prioritizes toxic, divisive content. Platforms that try to reduce toxic content will likely see their user engagement and ad revenue drop — a powerful disincentive to responsible moderation. Thus, another TikTok would simply add another surveillance-based business model seeking to capture more of our attention, data, and money with sensationalist content.

The Limits of Antitrust Law

Antitrust law could, in theory, address some of these challenges. For example, the Trump administration recently maintained that U.S. antitrust law protects “all dimensions of competition,” including editorial competition. In practice, however, monopolization cases have struggled to keep pace with the abuses of dominant ecosystems.

Take the Google search monopolization case. After years of investigation and litigation, a federal district court found that Google had illegally maintained its search monopoly. Yet the court’s remedies were narrow. It declined to adopt the DOJ’s and states’ proposed remedies that would have addressed the publishers’ complaints and stopped Google from leveraging its monopoly in search to advantage its AI products.

The challenge is institutional. Modern antitrust enforcement, constrained by Supreme Court precedent, is slow and costly, and often yields unpredictable and limited results. By the time courts act, markets and technology have already evolved. So, how can remedies be designed to anticipate and adapt to these shifts in technology? If traditional antitrust is too costly and slow, what’s the alternative?

A New Path: Legislative and State-Level Reform

Europe has already moved ahead with the Digital Markets Act (DMA), which imposes broad obligations on dominant gatekeepers’ covered services, including prohibitions on self-preferencing and requirements for data interoperability. In the U.S., similar reforms were proposed in the American Innovation and Choice Online Act and the Ending Platform Monopolies Act, bipartisan bills that would have prevented dominant ecosystems from favoring their own products or discriminating among users.

While these acts were not drafted with LLM grounding specifically in mind, the Ending Platform Monopolies Act would target the inherent conflict of interest that arises when Google competes against other LLMs while supplying (or refusing to supply) those rivals with the search results needed for grounding. The Act would prohibit Google from owning the leading search engine while also operating an LLM that relies on that search engine for grounding, when that dual ownership creates a conflict of interest. The American Innovation and Choice Online Act would make several categories of conduct by dominant ecosystems presumptively illegal, including

· self-preferencing, which would prevent Google from advantaging its own LLM with better search results for grounding, and

· discriminating “among similarly situated business users,” which would prevent Google from advantaging other LLMs (including those in which it has invested) with better search results for grounding.

To avoid any ambiguity, the legislation could prohibit dominant ecosystems, such as Google, from offering publishers a Hobson’s choice, in which the gatekeeper discriminates between publishers who allow their data to be used to train the gatekeeper’s LLMs and those who do not.

Unfortunately, despite bipartisan support and John Oliver’s appeals, these bills stalled under lobbying pressure. This leaves a widening gap between the dominant ecosystems’ power over the emerging LLM market and the ability of our antitrust laws to constrain them.

Reviving the Marketplace of Ideas

The health of a democracy depends on an informed citizenry and a diversity of voices. The “marketplace of ideas” cannot thrive when access to information is intermediated by a few powerful ecosystems. As Justice Clarence Thomas observed in 2021, “Today’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors. Also unprecedented, however, is the concentrated control of so much speech in the hands of a few private parties. We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”

AI doesn’t need to destroy the marketplace of ideas. But without intervention, current trends suggest AI will accelerate that marketplace’s decline. If Google, Meta, and a few other powerful ecosystems continue to dominate the intermediation of ideas, the result will be fewer independent publishers, less investigative journalism, reduced accountability, and more echo chambers engineered to maximize our attention, but not our understanding.

Restoring healthy competition in the marketplace of ideas requires more than the district court’s belief, expressed in the Google case, that AI might eventually disrupt Google’s dominance in search. It demands clear antitrust obligations on these powerful ecosystems to promote fair access to information. As the TikTok example illustrates, it also requires privacy laws that realign incentives, so that when companies compete in collecting personal data and profiling us, it is for our benefit, not just theirs.

The good news is that Congress has already provided a framework for tackling the antitrust issues. The bad news is that these bills expired, and given the current legislative gridlock, federal reform appears unlikely. So the next frontier may belong to the states. Just as California and 19 other states pioneered privacy laws like the CCPA, state legislatures could enact AI and antitrust laws modeled on the DMA, the American Innovation and Choice Online Act, and the Ending Platform Monopolies Act. Otherwise, as Justice Holmes might warn us today, truth may no longer have a fair chance to compete.
