Regulators propose democratizing data and encouraging competition to rein in Big Tech. But such moves won’t go far enough in protecting user privacy. New: A reply to critics
With the bustle of policy proposals and antitrust enforcement, it looks like the tech giants Google, Apple, Meta, and Amazon will finally be reined in. The New York Times, for example, recently heralded Europe’s Digital Markets Act (DMA) as “the most sweeping legislation to regulate tech since a European privacy law was passed in 2018.” As Thierry Breton, one of the top digital officials in the European Commission, said in the article, “We are putting an end to the so-called Wild West dominating our information space. A new framework that can become a reference for democracies worldwide.”
So, will the DMA, along with all the other policies proposed in the United States, Europe, Australia, and Asia, make the digital economy more contestable? Perhaps. But will they promote our privacy, autonomy, and well-being? Not necessarily, as my latest book, Breaking Away: How to Regain Control Over Our Data, Privacy, and Autonomy, explores.
Today a handful of powerful tech firms – or data-opolies – hoard our personal data. We lose out in several significant ways. For example, our privacy and autonomy are threatened when the data-opolies steer the path of innovation toward their interests, not ours (such as research on artificial neural networks that can better predict and manipulate our behavior). Deep learning algorithms currently require lots of data, which only a few firms possess. A data divide can lead to an AI divide where access to large datasets and computing power is needed to train algorithms. This can lead to an innovation divide. As one 2020 research paper found: “AI is increasingly being shaped by a few actors, and these actors are mostly affiliated with either large technology firms or elite universities.” The “haves” are the data-opolies, with their large datasets, and the top-ranked universities with whom they collaborate; the “have nots” are the remaining universities and everyone else. This divide is not due to industriousness. Instead, it is attributable, in part, to whether the university has access to the large tech firms’ voluminous datasets and computing power. Without “democratizing” these datasets by providing a “national research cloud,” the authors warn that our innovations and research will be shaped by a handful of powerful tech firms and the elite universities they happen to support.
When data is non-rivalrous – that is, when use by one party does not reduce its supply – many more firms can glean insights from the data without affecting its value. As the European Commission has noted, most data are either unused or concentrated in the hands of a few relatively large companies.
Consequently, recent policies, such as Europe’s DMA and Data Act and the U.S.’s American Choice and Innovation Online Act, seek to improve interoperability and data portability and reduce the data-opolies’ ability to hoard data. In democratizing the data, many more firms and non-profit organizations can glean insights and derive value from the data.
Let us assume that data sharing can increase the value for the recipients. Critical here is asking how we define value and value for whom. Suppose one’s geolocation data is non-rivalrous. Its value does not diminish if used for multiple, non-competing purposes:
- Apple could use geolocation data to track the user’s lost iPhone.
- The navigation app could use the iPhone’s location for traffic conditions.
- The health department could use the geolocation data for contact tracing (to assess whether the user came into contact with someone with COVID-19).
- The police could use the data for surveillance.
- The behavioral advertiser could use the geolocation data to profile the individual, influence her consumption, and assess the advertisement’s success.
- The stalker could use the geolocation data to terrorize the user.
Although each could derive value from the geolocation data, the individual and society would not necessarily benefit from all of these uses. Take surveillance. In a 2019 survey, over 70% of Americans were not convinced that they benefited from this level of tracking and data collection.
Over 80% of Americans in the 2019 survey and over half of Europeans in a 2016 survey were concerned about the amount of data collected for behavioral advertising. Even if the government, behavioral advertisers, and stalkers derive value from our geolocation data, the welfare-optimizing solution is not necessarily to share the data with them and anyone else who derives value from the data.
Nor is the welfare-optimizing solution, as Breaking Away explores, to encourage competition for one’s data. The fact that personal data is non-rivalrous does not necessarily point to the optimal policy outcome. It does not suggest that data should be priced at zero. Indeed, “free” granular personal datasets can make us worse off.
In looking at the proposals to date, policymakers and scholars have not fully addressed three fundamental issues:
- First, will more competition necessarily promote our privacy and well-being?
- Second, who owns the personal data, and is that even the right question?
- Third, what are the policy implications if personal data is non-rivalrous?
As for the first question, the belief is that we just need more competition. Although Google’s and Meta’s business models differ from Amazon’s, which differs from Apple’s, all four companies have been accused of abusing their dominant positions using similar tactics, and all four derive substantial revenues from behavioral advertising, whether directly or (in Apple’s case) indirectly.
So, the cure is more competition. But as Breaking Away explores, more competition will not help when the competition itself is toxic. Here rivals compete to exploit us by discovering better ways to addict us, degrade our privacy, manipulate our behavior, and capture the surplus.
As for the second question, there has been a long debate about whether to frame privacy as a fundamental, inalienable right or in terms of market-based solutions (relying on property, contract, or licensing principles). Some argue for laws that provide us with an ownership interest in our data. Others argue for ramping up California’s privacy law, which the realtor Alastair Mactaggart spearheaded; or adopting regulations similar to Europe’s General Data Protection Regulation. But as my book explains, we should reorient the debate from “Who owns the data” to “How can we better control our data, privacy, and autonomy.” Easy labels do not provide ready answers. Providing individuals with an ownership interest in their data doesn’t address the privacy and antitrust risks posed by the data-opolies; nor will it give individuals greater control over their data and autonomy. Even if we view privacy as a fundamental human right and rely on well-recognized data minimization principles, data-opolies will still game the system. To illustrate, the book explores the significant shortcomings of the California Consumer Privacy Act of 2018 and Europe’s GDPR in curbing the data-opolies’ privacy and competition violations.
For the third question, policymakers currently propose a win-win situation—promote both privacy and competition. The current thinking is that with more competition, privacy and well-being will be restored. But that is true only when firms compete to protect privacy. In crucial digital markets, where the prevailing business model relies on behavioral advertising, privacy and competition often conflict. As a result, policymakers can fall into several traps, such as, when in doubt, opting for greater competition.
Thus, we are left with a market failure where the traditional policy responses—define ownership interests, lower transaction costs, and rely on competition—will not necessarily work. Wresting the data out of the data-opolies’ hands won’t work either, as other firms will simply use the data to find better ways to sustain our attention and manipulate our behavior (consider TikTok). Instead, we need new policy tools to tackle the myriad risks posed by these data-opolies and the toxic competition caused by behavioral advertising.
The good news is that we can fix these problems. But it requires more than what the DMA and other policies currently offer. It requires policymakers to properly align the privacy, consumer protection, and competition policies, so that the ensuing competition is not about us (where we are the product), but actually for us (in improving our privacy, autonomy, and well-being).
A Reply to My Critics in the Naked Capitalism Comments Section
Many thanks to Yves Smith for reposting my INET blogpost on Naked Capitalism, and to the many commentators for their thoughtful questions and debate. As my book, Breaking Away, explores, policy proposals such as the Digital Markets Act, along with the current antitrust enforcement against these data-opolies, will help promote competition. Many would also agree that these proposals are better suited for targeting anti-competitive practices in the digital economy than the current antitrust legal framework. No one seriously disputes that fighting monopolies and preserving privacy can be compatible. But one cannot simply assume that competition and privacy policies are inherently compatible. They will, at times, conflict, especially when the prevailing digital ecosystems primarily rely on behavioral advertising.
Here is one example. Google and Facebook have made the digital advertising market, which is already complex, even more opaque. As a result, advertisers can’t effectively determine whether the services they purchase offer “value for money.” The opacity harms the publishers, who cannot determine whether the ad tech platforms with whom they contract are the most efficient and whether the ad tech tax is fair or too high. The lack of transparency leads to worse outcomes for advertisers and publishers while increasing Google’s and Facebook’s profits and power as unavoidable trading partners.
Adding to the problem are the multiple conflicts of interest. Because Google represents most sellers and buyers, controls the leading exchange, and competes against the sellers with its own inventory, Google, in this intentionally opaque advertising ecosystem, can influence which ads are served on its exchange and at which price; and which inventory is bought on behalf of advertisers. Since the data-opolies profit from the status quo, they have less incentive to reform it.
But the biggest fundamental problem is that competition will not fix it. Suppose Google had to divest YouTube, its online ad exchange, and either its buy- or sell-side ad tools. Suppose Instagram and WhatsApp were spun off as separate companies. Also, suppose the ensuing competition reduced the ad tech tax for display ads on third-party websites and apps from 35% to 10%. Would we be better off?
In some ways, we would. Newspapers, for example, might recover more ad revenue that could be invested in investigative reporting.
Nevertheless, the underlying competition would remain toxic. With or without these data-opolies, it remains a classic race to the bottom. That is why so many apps collect far more data about us, including our movements, than is necessary for the apps to work. Our geolocation data, like us, are for sale—whether to advertisers, Wall Street banks, or governmental agencies seeking to spy on us. The toxic competition has already advanced from predicting to manipulating behavior.
So even if we went beyond the DMA and broke up these data-opolies, that alone will not end the toxic competition. We would just have more companies competing to exploit us. Thus, the policymaker’s handy tool—increasing competition—will not work when the market participants’ incentives are misaligned with our privacy interests.
Contrary to one commentator, the Digital Markets Act will not prohibit the commercialization of user data. If it did, behavioral advertising would be dead in a few months, and the market capitalization of Google, Facebook, and to a lesser extent Amazon would have already plummeted. (The point about Amazon not appropriating data from its business users to produce a copycat product relates to a different, albeit important, issue.)
To correct this market failure, we need to realign incentives, where data is collected about us and for us. One way is to give us greater control over our personal data, which leads us to the next fundamental issue. What privacy framework should we employ, and will it be compatible with our competition policies? One cannot assume that the privacy policies (especially when they promote data minimization) will automatically synchronize with competition policies that seek to democratize the data by circulating it more widely through the digital economy.
Thus, we cannot assume that the proposed antitrust policies are good for privacy. We may get more competition for our data and attention, but it will not necessarily be a race to the top. Nor has Europe’s GDPR halted this race to the bottom. To fix the problems, we need to fix incentives. And none of the current proposals do that. This will require carefully synchronizing our competition, consumer protection, and privacy policies so that firms compete to promote our interests, not theirs.