Without strong privacy laws and aligned incentives, increased AI competition worsens surveillance, manipulation, and disinformation—threatening privacy, autonomy, and democracy.
To remain competitive, Amazon’s CEO Andy Jassy warned, firms must leverage generative AI in their customer experiences. And this competitive race will move faster than many expect: “It’s moving faster than almost anything technology has ever seen,” he observed.
Behind the scenes, AI is also reshaping how companies profile individuals, create and target ads, and influence behavior. Ordinarily, to improve quality and privacy, we can turn to competition. But what happens when increased competition in the AI foundation model supply chain doesn’t fix our problems, but makes them worse?
A Race to the Bottom
At first glance, increased competition in the AI supply chain appears to be the obvious antidote to Big Tech’s historic dominance of their respective ecosystems. Indeed, in the Google search engine monopolization case, the federal district court noted that “AI technologies have the potential to transform search.” Google’s and Microsoft’s integration of generative AI tools into their search engines was, in the court’s view, “perhaps the clearest example of competition advancing quality.” So, wouldn’t the solution to data-opolies, like Google, be more rivals and competition?
Not necessarily.
We typically view competition as a positive force that lowers prices, improves quality and service, and increases variety. However, competition can sometimes be toxic.
Toxic competition can arise under several scenarios, one of which is when incentives are misaligned. One example of misaligned incentives in the digital economy is behavioral advertising, where advertisers target individuals with personalized ads. The aim is to induce them to buy things they otherwise might not have (at the highest price they are willing to pay). Behavioral advertising generates more revenue and profit for publishers and app developers than contextual advertising. In markets where profit depends on exploiting personal data, firms that don’t surveil, profile, and manipulate fall behind. Even companies that want to respect user privacy often find themselves at a disadvantage. One market participant described forgoing behavioral advertising (and relying on contextual advertising instead) as competing with one hand tied behind one’s back. To maximize advertising revenue, firms must engage in behavioral advertising if their competitors do; if they don’t, they make around 70% less revenue, according to the UK competition authority.
So, competition and privacy can be complementary (i.e., more competition yields better privacy) when the market participants’ incentives are aligned with our interests. But when the core business model is based on surveillance and manipulation, data is collected about us, but not for our benefit. Rivals are rewarded for better surveilling, profiling, and manipulating us. In this environment, more rivals mean more firms racing to the bottom. Think of TikTok versus Instagram versus YouTube: each competes to keep us hooked, collect more data, and move us down the advertisers’ marketing funnel, from awareness of the advertised products and services to interest and desire, and ultimately, conversion. As a result, when incentives are misaligned, injecting more of this toxic competition can further degrade our privacy, autonomy, and well-being, and further destabilize our democracy. If the incentives remain unchanged, AI will not protect users; instead, rivals will use AI to better profile us and manipulate our behavior.
Surveillance Capitalism 2.0
Even before the advent of generative AI, many online firms, including Meta and Google, relied primarily on behavioral advertising for their revenues. The digital advertising industry often distinguishes between the open web and “walled gardens.” The leading walled gardens are Google’s, Meta’s, and Amazon’s ecosystems. In 2019, these three firms accounted for 33.8% of all global advertising spending (excluding China). By 2021, their share had grown to 46.1%; by 2023, to 51.9%.
Having captured more than half of every dollar (or other currency) spent on advertising worldwide, these firms might be expected to see their advertising revenues plateau. Instead, those revenues accelerated. Google’s advertising revenues increased 23% between the first quarters of 2023 and 2025, including a notable 33% increase in display advertising revenues on YouTube. Amazon’s and Meta’s advertising revenues increased by 46% and 47%, respectively, over the same period.
Why have their advertising revenues substantially increased? One factor is AI. Instead of waiting for people to use standalone AI tools, these three firms are racing to integrate AI into their popular apps and services. Meta’s “Meta AI” assistant, for example, is now embedded across Instagram, Facebook, and WhatsApp, reaching over 700 million users and projected to surpass 1 billion by the end of the year. Meta’s AI tools led users to stay longer on Facebook (an average 8% increase in time spent) and Instagram (a 6% increase), and produced a 7% increase in conversions for advertisers using these tools.
As AI models improve, the scale and precision of profiling and manipulation are likely to increase. Behavioral advertising will no longer be about ads featuring the running shoes you looked at last week. Nor will it be limited to targeting 13- to 17-year-olds across Meta’s platforms when these teenagers feel “worthless,” “insecure,” “stressed,” “defeated,” “anxious,” “stupid,” “useless,” and “like a failure.” Instead, AI will spur new forms of emotional advertising, such as detecting from your facial features and voice when you’re sad, anxious, or angry, and changing the highly personalized ad or content in real time to elicit the desired action.
So, AI will not just passively watch us. In an ecosystem built on profiling and behavioral advertising, it will learn to manipulate us.
How can Meta’s AI chatbots increase engagement when its platforms are already addictive for many children? By engaging in graphic, sexually explicit conversations with children. As The Wall Street Journal reported, even when the users revealed their young age, Meta’s AI personas, such as “Hottie Boy” and “Submissive Schoolgirl,” would steer conversations toward sexting (such as “a child who desires to be sexually dominated by an authority figure” or bondage fantasies) and “planning trysts to avoid parental detection.”
On April 25, 2025, OpenAI updated its GPT‑4o model, making the foundation model noticeably more sycophantic. For example, it encouraged one user “to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a ‘temporary pattern liberator.’” The user “also cut ties with friends and family, as the bot told him to have ‘minimal interaction’ with people.” When the user believed he could bend reality, like the character Neo in The Matrix, the model encouraged him to jump from a 19-story building and fly: if the user “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.” The model’s sycophantic interactions with users, OpenAI noted, were a “blind spot” for the company. Nor did OpenAI fully appreciate how many people “have started to use ChatGPT for deeply personal advice—something we didn’t see as much even a year ago.” So, as these AI models are integrated into more products and services, expect more blind spots that can harm, if not kill, people.
AI and Democracy: A Potentially Toxic Mix
The implications go far beyond privacy and autonomy. AI isn’t just a neutral tool. When optimized for engagement and monetization, it rewards outrage, reinforces bias, and polarizes society.
For example, many traditional news outlets have suffered from the rise of the data-opolies. Like many other websites, news outlets rely on Google for advertising revenues and for traffic. Newspaper ad revenues declined by 80%, from $49 billion in 2006 to $9.7 billion in 2022, a decline not offset by the modest rise in circulation revenue (from $10.5 billion to $11.6 billion). Google and Meta, while using the newspapers’ content to attract individuals, have simultaneously siphoned off the newspapers’ revenues.
AI is further reducing the revenue of traditional news outlets. To compete with ChatGPT and other AI models, Google introduced its AI Overviews tool in 2024 and then, in 2025, AI Mode, which “responds to user queries in a chatbot-style conversation, with far fewer links.” As a result, news websites in 2025 were getting far less traffic from Google (and overall). The irony is that Google’s AI relies on data from these third-party websites but does not direct readers to them. With even less traffic, traditional news organizations have even less revenue, prompting more layoffs of journalists. So, news organizations, which have already suffered under Google and Meta, will suffer even more as the walled gardens’ AI paradoxically keeps users engaged with the news media’s original content. Without a way to finance their journalism, more news outlets will likely pare back investigative reporting or shut down. More US counties will likely join the 200 counties that, as of 2025, were news deserts (i.e., communities “with limited access to the sort of credible and comprehensive news and information that feeds democracy at the grassroots level”).
As traditional journalism struggles, what will fill the void? For over a billion people (by Meta’s estimate), Meta’s highly personalized AI, tailored to each user’s context, interests, personality, culture, and “how they think about the world.” In tailoring the news to how a particular person thinks about the world, Meta’s AI assistant will likely reinforce, rather than challenge, that person’s biases, politics, and worldview. So, expect more echo chambers and more political division.
As these AI tools shape how we consume news, make purchases, and form opinions, they pose serious risks to democracy. Foreign actors can also use AI-powered disinformation to sow even more discord in elections. Left unchecked, AI could become the ultimate engine of disinformation and manipulation.
Why Current Privacy Laws Aren’t Enough
Policymakers cannot rely on more competition or their jurisdiction’s antitrust tools to rectify this market failure. After all, another TikTok will mean adding another surveillance-based business model seeking to capture more of your attention, data, and money.
Instead, policymakers must rely on legal guardrails (here, privacy measures) to ensure that competition is a race to the top rather than to the bottom. Once these guardrails are in place, competition and privacy will often, but not always, be complementary, with firms competing to promote individuals’ privacy.
Missing today is a comprehensive federal privacy statute. However, some might take comfort in the recent passage of privacy laws in 20 U.S. states. The bad news, as my working paper examines, is that while these states afford their residents greater privacy protections regarding behavioral advertising and profiling, their laws all share several significant shortcomings that paradoxically will empower the data-opolies and hinder our privacy, autonomy, democracy, and well-being.
So, what can be done?
Align Business Incentives with Our Interests
As the International Competition Network (ICN) notes, competition can help privacy if incentives are aligned. That requires strong privacy and consumer protection laws to set the rules of the game.
People shouldn’t have to navigate hundreds of websites and apps to protect their data. Privacy should be the default, not the exception. Apple’s App Tracking Transparency framework revealed that when users are asked clearly and directly, most opt for privacy.
As my paper outlines, the guardrails, at a minimum, must:
- Ban manipulative design (“dark patterns”),
- Set privacy as the default, requiring individuals to opt into behavioral advertising and enabling individuals to opt out of automated profiling, and
- Expand the scope of the state privacy laws to eliminate the distinctions between first-party and third-party data and between private and publicly available information. The current statutes are no match for AI profiling, which can infer and reveal sensitive personal information from seemingly innocuous public information (such as what you “like” using Facebook’s Like button).
Conclusion: A Choice Point for Society
It is easier to change a business model than to regulate it. That is especially true of a behavioral advertising-driven business model that primarily benefits a few companies and is built on exploiting individuals, their privacy, and their autonomy. Most people perceive behavioral advertising as more harmful than beneficial. And they are right. So, why continue to support a business model that has been weaponized to target vulnerable populations, helps engineer elections through micro-targeted political ads, sows discord, creates fight clubs, and is predicted to undermine democracies when other less harmful business models exist and have worked for years? Surveillance capitalism simply perpetuates an undemocratic class system, where a few profit at the expense of many.
AI can deliver many benefits. But we don’t have to accept AI-powered profiling and manipulation as the price of this innovation. The deployment of technology depends, among other factors, on the underlying ecosystem’s incentives and value chain. When the ecosystem derives its profits from surveilling, profiling, and manipulating behavior (whether for advertising or for influencing votes), one should expect firms within that ecosystem to use AI to better profile individuals, sustain their attention, and manipulate their behavior. If we continue to reward surveillance and manipulation, AI will supercharge those harms, accelerating inequality, corroding democracy, and reducing us to targets in an endless marketing funnel.
Importantly, the technology is not inherently prone to such outcomes. Instead, if we realign incentives, enforce guardrails, and rethink what competition and innovation should serve, AI can improve our lives. The question isn’t whether AI will shape our future; it’s who gets to shape AI—and whose interests it will serve.