Antitrust and Privacy


ABSTRACT

This Chapter discusses the theories behind the call to incorporate privacy into antitrust and identifies some potential legal and economic hurdles to their application. Chief among them are (1) the extent to which privacy is an important dimension of competition; (2) identifying the underlying anticompetitive conduct that gives rise to a reduction in privacy; and (3) understanding that the benefits and costs of data collection are inextricably intertwined, with the net impact of a reduction in privacy on consumer welfare depending on heterogeneous tastes for privacy and customization that are likely to be correlated in complex ways. Further, this Chapter addresses potential First Amendment issues raised by using antitrust to condemn certain types of data collection and use, as well as problems that may arise to the extent that incorporating privacy into antitrust renders liability standards more uncertain.

Author’s Note*

Introduction

It should come as no surprise that the circles of the Venn diagram defining the scopes of antitrust and privacy have begun to overlap. Firms that charge consumers nothing to use their services, but rely on consumer data to generate revenue, are ubiquitous; the cliché, "if you're not the customer, you're the product," appears to fit a large portion of the digital economy. Of course, the collection and use of consumer data has long been the domain of consumer protection. For example, in the U.S., the Federal Trade Commission (FTC) has used its broad authority to police unfair and deceptive acts and practices to go after firms that lie about their data practices or otherwise collect or use consumer data in harmful ways. In the EU, the General Data Protection Regulation (GDPR) lays out specific rules for the processing of consumer data, including robust notice and consent requirements.[1] But given that some of the largest firms in today's economy are Internet platforms that rely on consumer data, there are increasing calls from both policy makers and the academy to incorporate the collection and use of consumer data into antitrust analysis.

This Chapter discusses the theories behind the call to incorporate privacy into antitrust and identifies some potential legal and economic hurdles to their application. Chief among them are (1) the extent to which privacy is an important dimension of competition; (2) identifying the underlying anticompetitive conduct that gives rise to a reduction in privacy; and (3) understanding that the benefits and costs of data collection are inextricably intertwined, with the net impact of a reduction in privacy on consumer welfare depending on heterogeneous tastes for privacy and customization that are likely to be correlated in complex ways. Further, this Chapter addresses potential First Amendment issues raised by using antitrust to condemn certain types of data collection and use, as well as problems that may arise to the extent that incorporating privacy into antitrust renders liability standards more uncertain.

I. Background

The idea of incorporating privacy into antitrust analysis is not new—nearly 15 years ago, this union was first proposed in the context of the Google/DoubleClick merger.[2] Generally, scholars and policy makers have attempted to incorporate privacy into antitrust through two broad channels. First, and most directly, some have argued that antitrust agencies and courts should consider negative impacts on privacy as harms remediable by the antitrust laws.[3] The second channel focuses on privacy as a dimension of competition, a framing that fits more comfortably within existing antitrust law.

At the outset, it is important to define some terms. First, “privacy” is a capacious concept, broadly relating to the ability to control information about oneself and encompassing such notions as isolation from outside stimuli, freedom from observation, autonomy, and anonymity.[4] For the purposes of this Chapter, “privacy” used as a metric measuring quality or price generally will refer to a firm’s collection and use of information about a consumer. Second, although this Chapter may touch on some of the same concerns raised in the discussion around “big data and antitrust,”[5] given the focus on privacy, the analysis here necessarily is centered on data about consumers, as opposed to “data” more broadly. Finally, while privacy regulations can impact competition by limiting the flow of consumer data, that discussion is beyond the scope of this Chapter, which focuses on the role of privacy in antitrust analysis.[6]

A.  Privacy as a Direct Goal of Competition Law

Some have argued that because privacy is a fundamental value, antitrust should also consider how unilateral or joint conduct directly impacts privacy. For example, in reaction to the Google/DoubleClick merger, a consortium of consumer advocacy groups petitioned the FTC to take direct account of privacy considerations in its review of the transaction.[7] They asserted that privacy was a “personal and fundamental right in the United States,” which is affected adversely by the “collection, use, and dissemination of personal information.”[8] After alleging that the transaction “will give one company access to more information about the Internet activities of consumers than any other company in the world,” the groups asked the FTC to prevent the merging of Google’s and DoubleClick’s data, and to impose additional restrictions on data use and collection on the merged companies.[9]

More recent acquisitions by tech platforms have drawn similar petitions from privacy advocates. For example, in a complaint filed with the FTC, two leading privacy advocacy organizations argued that the combination of WhatsApp's and Facebook's consumer data would be both an unfair and a deceptive trade practice.[10] Although these groups pressed the FTC to investigate the combination under its normal consumer protection authority, they also asked the agency to use its "authority to review mergers" to withhold approval of the transaction "[u]ntil the issues identified in this Complaint are adequately resolved."[11] Likewise, the same groups took the FTC to task for clearing the Google-Nest merger without addressing "the significant privacy concerns" it raised.[12]

There are serious legal hurdles to this approach. The Supreme Court has been clear that antitrust is about fostering competition on “[t]he assumption that competition is the best method of allocating resources in a free market [because it] recognizes that all elements of a bargain—quality, service, safety, and durability—and not just the immediate cost, are favorably affected by the free opportunity to select among alternative offers.”[13] In National Society of Professional Engineers v. United States (“NSPE”),[14] for example, a trade group of engineers had adopted an ethics policy prohibiting competitive bidding on the grounds that price competition would lower quality to unacceptable levels. The Supreme Court roundly rejected this as a justification in a rule of reason inquiry, explaining “the inquiry is confined to a consideration of impact on competitive conditions.”[15] If NSPE stands for the proposition that the presence of a positive impact in a non-competition dimension does not count on the plus side of the antitrust calculus, it also stands for the dual: negative impacts to a non-competition value will not count against a party in an antitrust inquiry. Thus, absent amendment of the antitrust laws or serious departure from stare decisis—which some have urged[16]—a plea to use antitrust analysis to condemn otherwise procompetitive or benign conduct that results in lower levels of consumer privacy is unlikely to succeed.[17]

B.  Privacy as a Dimension of Competition

The second general approach to bringing privacy concerns into the ambit of antitrust law is to treat privacy as a core dimension of competition. This approach rests on the related notions that consumers "pay" for services with their personal data, or that privacy is a dimension of quality over which firms compete. These premises are used interchangeably, and for antitrust purposes they are essentially equivalent: under either approach, more intrusive levels of personal data collection and use result in consumers paying a higher privacy-adjusted price (PAP). For example, one can define the PAP of a product as the nominal price divided by some metric of privacy protection. Holding the nominal price constant, lower levels of privacy protection increase the PAP (by reducing the denominator). Equivalently, privacy could be incorporated directly into the numerator price, with more intrusive data collection representing a higher "price" paid by consumers and non-privacy elements of quality held constant.
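
For concreteness, these two framings can be written out; the notation below (p for the nominal price, π for an index of privacy protection, and h(π) for the disutility consumers attach to the associated data collection) is purely illustrative and is not drawn from the sources cited in this Chapter.

```latex
% Two equivalent formalizations of the privacy-adjusted price (illustrative notation):
% p = nominal price, \pi = index of privacy protection, h(\pi) = disutility of data collection.
\[
  \text{(1)}\quad \mathrm{PAP} = \frac{p}{\pi},
  \qquad \frac{\partial\,\mathrm{PAP}}{\partial \pi} < 0
\]
\[
  \text{(2)}\quad \mathrm{PAP} = p + h(\pi),
  \qquad h'(\pi) < 0
\]
% Holding p and non-privacy quality fixed, more intrusive data collection (a lower \pi)
% means a higher privacy-adjusted price under either formulation.
```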

Identifying privacy as a key dimension of competition appears to be born out of the search for a metric by which to judge the consumer welfare effects of potential market power enjoyed by large online platforms.[18] On this view, dominant digital platforms exercise market power not by charging a higher price—most offer their services for free—but by offering consumers lower levels of privacy than they would receive in a competitive market.[19] For instance, a recent report by the UK's Competition and Markets Authority (CMA) concludes:

In a more competitive market, we would expect that it would be clear to consumers what data is collected about them and how it is used, and crucially, the consumer would have more control. We would then expect platforms to compete with one another to persuade consumers of the benefits of sharing their data or adopt different business models for more privacy conscious consumers. Platforms may reward consumers for their data through their products and services, perhaps serving fewer ads or offering rewards or additional services.[20]

Although they have yet to block a merger or challenge conduct based on privacy considerations, antitrust agencies in both the U.S. and the EU appear to have recognized the role that privacy might play in a competition analysis. For example, the FTC cleared the Google/DoubleClick merger based on traditional antitrust considerations of price impacts on advertisers and publishers, but nonetheless considered how the merger might impact consumer privacy.[21] Likewise, the EU explicitly took privacy into account when evaluating the Microsoft/LinkedIn merger, noting for example that privacy in personal social networks "is an important parameter of competition."[22] It also examined how Facebook's acquisition of WhatsApp might impact privacy in the market for mobile consumer communications apps.[23]

Some scholars have suggested that a lack of competition on privacy could facilitate strategic foreclosure arising from indirect network effects.[24] Under this theory of harm, a lack of competition over privacy imposes a short-run harm on consumers for the reasons discussed above, but it also has a longer-term impact because lower levels of privacy allow the dominant firm to entrench its position by collecting increasing amounts of consumer data. Greater access to consumer data, so the argument goes, allows the dominant firm to gain an advantage over potential rivals through its ability to, for example, hone machine-learning algorithms.[25] What is more, according to this argument, there are "feedback effects": increasing amounts of consumer data allow the platform to strengthen its dominant position on both the consumer and advertising sides of the platform, permitting it to further impose weak privacy protections on consumers, thus increasing its access to consumer data, and so on.

The Bundeskartellamt's (BKA) recent case against Facebook is perhaps the best real-world example of this approach.[26] The BKA found that Facebook enjoys market dominance due to strong direct network effects (Facebook has the largest number of registered and active users among social network platforms) and the difficulties associated with switching to another social network (a "lock-in" effect).[27] This dominance allegedly allows Facebook to condition access to its social network on the user's agreement to Facebook's terms of service, which stipulate that Facebook collects and processes user data not only through Facebook's own companies, but also through third-party websites with embedded Facebook Business Tools.[28] According to the BKA, such conditioning violates the GDPR because (1) there is no effective consent on the user's side (due to Facebook's dominance),[29] and (2) it is not necessary for Facebook to process data from third-party sources to the current extent.[30] The GDPR violation, in turn, "is a manifestation of Facebook's market power."[31] In particular, Facebook's data practices "impede[] competitors because Facebook gains access to a large number of further sources," which provide it a "competitive edge . . . and increased market entry barriers, which in turn secures Facebook's market power toward end customers."[32] Thus, according to the BKA, market dominance allowed Facebook to impose onerous privacy terms on consumers, allowing it to collect more data from its users. What is more, the BKA contends that the increased access to consumer data improves Facebook's ability to customize content and target advertisements, further entrenching its dominance.[33]

* * *

Although competition agencies or judges are unlikely to treat consumer privacy as a direct goal of antitrust absent substantial changes to existing law, the notion that lack of competition can become manifest in suboptimal levels of privacy has intuitive appeal and deserves more serious consideration. In the next section, we consider some of the legal and economic complexities attendant to such an approach.

II. Considerations

Viewing privacy as a dimension of non-price competition has the advantage of nesting it squarely within the bounds of antitrust law. Nonetheless, the analogy between privacy and quality is not perfect, and problems arise when the comparison is stretched too far. Below we identify some complications that arise when privacy harms are viewed through a competition law lens.

A.  Competitive Significance

Treating privacy as a metric of the competitiveness of a market rests on the assumption that consumers provide information about themselves in exchange for access to services provided by digital platforms. Although valuations for privacy vary across the population, all else equal, it is probably reasonable to assume that consumers will prefer more privacy to less.[34] Consumer willingness to switch products in response to higher levels of privacy, however, is a necessary predicate to any attempt to treat privacy as a dimension of non-price competition addressable by competition law.[35] If, however, consumer demand is generally unresponsive to changes in privacy, the upper bound on consumer harm that could arise from less competition over privacy is likely to be small—even if joint or unilateral conduct results in a platform (or combination of platforms) offering a lower level of privacy, it is unlikely to result in much welfare loss as output effects are likely to be negligible.[36] In this case, privacy is probably not a dimension of competition that demands a large degree of antitrust attention by courts or agencies.

Empirically, the significance of privacy to consumers' marketplace choices is, at best, uncertain. In what has come to be known as the "privacy paradox," consumers profess to care deeply about privacy in surveys, but revealed preference—data from actual choices—suggests otherwise.[37] For example, a recent Pew poll finds that 79 percent of consumers are "very" or "somewhat" concerned about how companies use their data, and 81 percent say that the privacy risks associated with companies' data collection outweigh the benefits.[38] At the same time, empirical research finds that only a tiny percentage of consumers actually choose to opt out from online tracking,[39] and in a survey of the economics of privacy, Acquisti, Taylor, and Wagman conclude, "If anything, the adoption of privacy-enhancing technologies (for instance, Tor, an application for browsing the Internet anonymously) lags vastly behind the adoption of sharing technologies (for instance, online social networks such as Facebook)."[40] Experimental research, moreover, generally finds that consumers are only willing to pay a small amount to avoid surveillance.[41] Additionally, there is little evidence to suggest that marketing campaigns promoting privacy-protective search and email platforms have engendered much of a consumer response.[42]

It is unclear what drives the privacy paradox. It simply may be that, when faced with an actual opportunity cost for privacy, consumers choose to consume less privacy than they say they will in surveys. At the same time, many have observed that, due to its complexity, markets for privacy are unlikely to function well, casting doubt on whether observed choices actually reflect consumers' underlying preferences given the difficulty of making informed tradeoffs.[43] Whether the gap between stated and revealed preference is driven by rational choice, asymmetric information, or behavioral biases, however, matters little for whether privacy should be treated as a relevant metric of non-price competition.[44] If fully informed and rational consumers choose not to alter their behavior in response to changes in privacy, then there appears to be little role for antitrust in examining firms' privacy choices. If, on the other hand, the lack of consumer response to privacy is due to firms' inability to credibly convey information about their privacy policies—perhaps because it is simply too costly for consumers to evaluate privacy promises—then no amount of competition will improve market outcomes.

Joe Farrell illustrates this point in a simple model of a firm with market power choosing the profit-maximizing level of privacy.[45] He shows that when consumers perfectly understand the level of privacy chosen by the firm, consumer and firm interests are aligned because the firm internalizes any privacy harm its data policies visit on consumers. For example, if a firm reduces privacy protections to increase revenue streams (for example, by allowing third-party ad networks to place cookies on visitors' browsers in order to target ads), consumer demand will fall by an amount equal to the privacy harm this new revenue stream causes. As a result, the firm will reduce its privacy protections only if the revenue stream from increased consumer information is greater than the privacy harm, which is efficient: firm profits are increased while consumers enjoy lower prices (or richer features) due to the increased revenue, both of which lead to an increase in total welfare.[46]

When consumers are unable to evaluate firms' privacy choices, however, firms rationally may adopt net-harmful privacy practices because they can hide the costs from consumers—that is, firms will enjoy the additional revenue streams from consumer data without paying the concomitant price of lower demand from privacy-sensitive consumers. What is more, unless firms are able to provide observable privacy policies—in the sense that consumers readily can comprehend the promises that are being made with regard to the collection and use of their data—and credibly commit to following them, consumers rationally will come to expect that firms always will adopt harmful data practices regardless of what firms profess. In this textbook "lemons" equilibrium, firms that would adopt more stringent privacy practices if they could convince consumers to believe them will not do so, because consumers will not reward the effort.[47]
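
The intuition can be summarized in a stylized decision rule. The sketch below is an illustration of the argument rather than a formal restatement of Farrell's model, and the per-user revenue and harm figures are hypothetical.

```python
# Stylized sketch of the argument above. r = per-user revenue from a privacy-reducing
# practice; h = per-user privacy harm. Both figures are hypothetical.

def adopts_practice(r: float, h: float, consumers_informed: bool) -> bool:
    """Does the firm adopt a privacy-reducing practice?

    Informed consumers: willingness to pay falls by h, so keeping users requires
    an offsetting price cut (or feature improvement) of h. The firm internalizes
    the harm and adopts the practice only if r > h, which is also efficient.

    Uninformed consumers: the harm never shows up in demand, so any positive data
    revenue makes the practice privately profitable, even when h > r.
    """
    if consumers_informed:
        return r > h
    return r > 0

print(adopts_practice(r=2.0, h=5.0, consumers_informed=True))   # False: harm outweighs revenue
print(adopts_practice(r=2.0, h=5.0, consumers_informed=False))  # True: hidden harm, net-harmful practice adopted
```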

The above suggests that consumer protection regulation that holds firms to their privacy promises, or that imposes some baseline level of privacy, may improve market outcomes by helping foster an environment in which firms fully internalize their privacy decisions. But it is important to note that steep informational costs, not a lack of competition, are what make it the dominant strategy to obscure potentially unpopular data practices in dense privacy policies or behind "dark patterns," and to renege on privacy promises. Thus, if the root of any failure of markets to meet consumers' privacy preferences is informational, increased antitrust scrutiny is likely to be an ineffective remedy.

B.  Identifying the Underlying Anticompetitive Conduct

If privacy is to serve as a metric of competition, then there must be some causal link between diminished privacy and anticompetitive conduct. Stated differently, the lack of privacy must be the (or a) mechanism through which the illegal creation or maintenance of monopoly power becomes manifest to consumers. Absent some underlying anticompetitive conduct driving the reduction in privacy, there would be no distinction between antitrust and direct privacy regulation. For example, although Google's acquisition of Nest presented no competition concerns and was quickly cleared,[48] some contended that the transaction should be blocked solely because it raised privacy concerns by allowing Google to merge its troves of consumer data with the data generated by Nest's smart thermostat in ways contrary to consumer expectations.[49] But any privacy concerns arising from this transaction were completely divorced from the competitive effects of the merger. That is, the fact that a reduction in privacy arises in the context of a merger (or other horizontal or vertical arrangement) does not convert a pure consumer protection issue into an antitrust problem.

Below, we examine how joint and unilateral conduct might give rise to an antitrust claim based on reduced privacy competition.

1.     Horizontal Conduct

Mergers may present the easiest case in which to conceptualize conduct that reduces competition over privacy.[50] Indeed, to date, the agencies have publicly considered privacy as a dimension of competition only in the context of mergers; as discussed above, the FTC examined privacy competition in the context of the Google/DoubleClick merger, and DG Comp explicitly considered how the Facebook/WhatsApp and Microsoft/LinkedIn transactions would impact competition over privacy.[51]

As a threshold matter, the parties must be competitors in a product space for a transaction to raise any competition-based privacy concerns. This is because, for the privacy harm to trigger antitrust scrutiny, it must flow from a reduction in competition between the firms over privacy. If consumers do not view the merging parties' products as close substitutes, the parties simply do not compete along any dimension, privacy included.[52]

Although the competition agencies have yet to block a transaction based on its potential impact on privacy, nothing in their decisions suggests that, given the appropriate facts—perhaps documents and data showing that consumers view privacy as one of the most important dimensions of competition between the parties and that the parties are close substitutes along this dimension—they would be unwilling to do so.[53]

Horizontal agreements to restrict competition over privacy could also run afoul of the antitrust laws under the same theory that would condemn a merger that reduces competition over privacy. It is black-letter law that a naked agreement among competitors to limit any dimension of competition is condemned per se.[54] Thus, evidence of an agreement among competitors to limit—or increase—privacy could be condemned as a per se violation of Section 1 of the Sherman Act.[55] Similarly, an agreement over privacy that was reasonably ancillary to some legitimate horizontal collaboration would be analyzed under the rule of reason.[56] For example, a joint venture between two streaming services to create a bundled offer that also includes an agreement between the firms on the collection and use of consumer data could be condemned if the detrimental consumer impacts flowing from any reduction in competition over privacy were not outweighed by countervailing benefits.[57]

2.     Unilateral Conduct

Identifying a plausible unilateral antitrust theory of privacy harm is more challenging. It is a bedrock principle of U.S. antitrust law that merely charging a high price, or otherwise exercising lawfully acquired monopoly power, is not actionable.[58] Courts have a longstanding aversion to becoming price regulators under the auspices of the Sherman Act, and that aversion springs from two sources. The first is the fear of discouraging firms from the type of innovative behavior that benefits consumers. As the Court explained in Trinko:

The opportunity to charge monopoly prices—at least for a short period—is what attracts "business acumen" in the first place; it induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.[59]

The second reason courts do not involve themselves in micromanaging monopolists’ pricing and quality decisions is administrability. Such an endeavor threatens to turn federal courts into utility regulators, a job for which generalist judges are ill-equipped.[60]

The upshot of the judicial reluctance to pass judgment on monopolists' unilaterally determined price and quality levels is that, for a reduction in privacy to be actionable under the antitrust laws, one must be able to identify some type of anticompetitive conduct that caused it. That is, one must be able to point to conduct by a dominant firm that amounts to "the willful acquisition or maintenance of monopoly power as distinguished from growth or development as a consequence of a superior product, business acumen, or historical accident."[61] To qualify as exclusionary under Section 2, conduct must "harm the competitive process and thereby harm consumers," and also have "the requisite anticompetitive effect."[62] These longstanding limitations on monopolization actions would appear to foreclose theories, advanced by some commentators,[63] that rest only on a dominant platform illegally exercising monopoly power by collecting "too much data." The data practices of a dominant platform simply are of no moment to the antitrust laws unless they bespeak some type of conduct that harmed the competitive process.

If a monopolization claim cannot be predicated on data practices alone, one must be able to point to some type of unilateral conduct that limited competition over privacy. While exclusionary theories are myriad,[64] the one most germane to the current discussion is that a firm could acquire or maintain monopoly power by lying about its policies regarding the collection and use of consumer data. For instance, in a recent paper, Dina Srinivasan argues that competition from social media platforms such as MySpace, Friendster, and Orkut initially tempered Facebook's ability to degrade user privacy.[65] Once these threats were gone, according to Srinivasan, Facebook was able to increase its collection and use of consumer data without fear of losing market share. Of course, this exercise of market power alone would be an insufficient basis for an antitrust action, but Srinivasan contends that Facebook's promises regarding consumer privacy—on which it ultimately reneged—facilitated its rise to dominance by attracting privacy-conscious consumers, and thus form the predicate act required to make out a monopolization claim.[66]

As a threshold matter, it is important to distinguish between the direct harm that deception inflicts on competitors and consumers, and the indirect impact on consumers from any market power effects of the deception. For example, when a dominant company lies about the beneficial attributes of its product, or disparages its competitors' products, consumers who shift their purchases from rival firms due to the deception clearly are harmed.[67] Further, rival firms lose revenue to the dominant firm from fooled consumers.[68] But neither of these outcomes is an anticompetitive effect of deception; both are welfare losses due to informational problems. To trigger the antitrust laws, the shift in consumer purchasing patterns caused by the deception would have to durably limit the competitive pressure faced by the dominant firm, either creating or maintaining its ability to raise price, reduce output, or, in the present context, reduce privacy.[69]

Courts generally have been hesitant to allow antitrust claims to proceed on theories of deception.[70] As the leading antitrust treatise explains, “The key problem here is the difficulty of assessing the connection between any improper representations and the speaker’s monopoly power.”[71] Further, there are other laws that address the direct welfare consequences of deception: business torts and consumer protection laws.[72] Consumers can sue firms for deception under the common law and state consumer protection acts,[73] and businesses can sue deceiving firms from whom they have lost business under the Lanham Act and state business tort laws.[74] Further, both the state attorneys general and the FTC have consumer protection jurisdiction to police against unfair and deceptive acts and practices.[75] As the Supreme Court explained in Trinko, when there is another regulatory regime aimed directly at the ill alleged to violate the antitrust laws, the “additional benefit to competition provided by antitrust enforcement will tend to be small, and it will be less plausible that the antitrust laws contemplate such additional scrutiny.”[76]

Of the few instances in which courts have allowed deception to form the basis for a Section 2 claim, many concern lies to bodies—either governmental or private—that enjoyed the power to exclude competitors. For example, Broadcom Corp. v. Qualcomm Inc.[77] and Allied Tube & Conduit Corp. v. Indian Head, Inc.[78] concerned deception of private standard-setting bodies, and Walker Process Equip., Inc. v. Food Mach. & Chem. Corp.[79] involved fraud in a patent application. Further, the FTC's case against Intel involved, among many other allegations, Intel's failure to disclose that its compiler software would slow down programs run on competitors' chips.[80] Thus, the lie was not about the difference in performance between Intel's and competitors' chips—the performance differences were real. Instead, Intel concealed the fact that its compiler, not competitors' underlying chip technology, had caused the performance differences. Similarly, the deception-related conduct in Microsoft involved Microsoft fooling software developers into writing programs that would work only on the Windows operating system.[81] Again, Microsoft had not lied to consumers about the interoperability of software—the lack of interoperability was a fact—but rather it tricked developers into creating the lack of interoperability. Because the deception involved concealing the cause of an actual degradation of a competing product, the scenarios in Intel and Microsoft are more likely to have direct and durable impacts on competition than ones in which a firm merely lies to consumers about the relative attributes of its products.[82]

All told, a Sherman Act § 2 claim that deception involving privacy policies helped to create or maintain a firm's dominance would appear to be limited to quite narrow circumstances. The core problem lies with causation: a plaintiff pursuing such a theory would have to show that the lies about privacy were "reasonably capable of making a significant contribution" to the firm's monopoly power.[83] That is, there must be evidence linking the deception to the accretion or maintenance of monopoly power that is exercised through a reduction in privacy, not merely the direct impact of deception on a firm's market share or price. For example, a plaintiff would have to show that the lie was material—that is, but for the deception about privacy, a significant mass of consumers would not have used the product in question; without moving a significant number of consumers away from rivals and to the dominant firm, there can be no distortion of competition.[84] Such a showing is likely to be difficult in light of the empirical evidence, discussed above in Section II.A, suggesting that privacy does not appear to be a particularly important dimension of competition for most consumers. This showing is likely to be all the more difficult if substantial product improvements were occurring at the same time as the alleged privacy-related deception.[85] Further, the shift in demand due to the deception would have to be durable enough to create a non-transitory limit on competitors' ability to constrain the dominant firm's privacy decisions.[86]

C.  Intrinsic Benefits to Collection of Data

Anticompetitive theories that analogize a firm's privacy practices to quality implicitly assume that all other attributes of price and quality are held constant when a firm alters its privacy choices. Thus, when privacy goes up, consumers are unambiguously better off, and vice versa when it goes down. Although this analogy has facial appeal, it breaks down under close inspection because of the role that consumer data play in firms' production functions. Although skimping on quality can directly increase a firm's profits at the expense of all consumers, consumer data merely take up space on a server unless they are used in a way that is likely to benefit at least some consumers.

To see why, consider how a hypothetical car maker benefits by reducing the quality of the tires on its cars while holding price and all other dimensions of quality constant. To keep the math simple, suppose that the car sells for $100, that the marginal cost of production with good tires is $50, and that it is $40 with bad tires. If the car maker holds price constant and substitutes bad tires for good ones, it enjoys an immediate per-car profit increase of $10. All consumers who stay in the market are unambiguously worse off, because they are still paying $100 for a car that has less value.[87]

Next, consider the chain of events that occur when a platform reduces its privacy—for example, by placing tracking cookies on visitors’ browsers. First, unlike the case of the car market, this reduction in quality has not lowered the platform’s costs. In fact, to the extent that a platform expends resources on coding or server space, collection and use of additional consumer data may actually increase costs. Second, unlike the car maker that now has an extra $10 in profit from each customer, the platform only has additional stores of consumer information on its server. This is where the privacy-quality analogy breaks down: the platform must take some action to convert reduced privacy into a revenue stream, and these actions typically benefit at least some consumers.[88] This nuance distinguishes reductions in privacy from the more generic case of reducing quality while holding price constant—increasing access to consumer information may reduce quality along the privacy dimension, but the necessary monetization of these data increases quality in other dimensions.[89] Thus, a reduction in privacy could increase or decrease consumer welfare depending on how these quality changes net out.

How does the conversion of consumer data into revenue take place, and how does it benefit consumers? Most directly, a platform can use consumer data to improve its offerings. For example, it can create more seamless logons, more closely tailor content to personal interests by customizing reading and viewing lists or shopping suggestions, or use consumer data to help prevent fraud.[90]

Another common use of consumer data by a platform is to sell display advertisements based on user-specific information, or so-called interest-based advertisements (IBA). IBA delivers two potential benefits to consumers. First, IBA provides consumers with more relevant information than contextual advertising—advertising shown based on the content of the website, not the interests of the consumer. For example, in a world without IBA, a tennis player is visible as such only when visiting a website correlated with that interest (e.g., Tennis.com), which may be rare relative to her overall Internet browsing. IBA allows consumers to broadcast this interest to relevant providers of tennis-related goods and services regardless of where they are online, providing them with greater access to relevant information. The marginal value of this information is evidenced by the fact that IBA sells for a substantial premium over contextual advertisements due to its higher conversion rates.[91] Second, an indirect benefit of IBA over contextual ads is that IBA generates more revenue for content providers to subsidize content, which digital platforms often provide to consumers for free.[92]

In addition to using the consumer data it collects, a firm may sell those data to third parties. Selling data to third parties creates an additional revenue stream for the platform, again subsidizing the production of content. Moreover, third parties typically purchase consumer data for the same reasons the first party collects them: to customize offers or advertisements for their own consumers. Finally, consumer data also may be used by a competitor to facilitate entry into a market, again benefiting consumers.

None of the above is meant to say that a firm could never exercise market power by reducing privacy in a way that reduces consumer welfare. Rather, the point is only that the net impact on consumers from a reduction in privacy is much more complicated than the simple example of a manufacturer replacing high-quality parts with low-quality parts; it depends on the distribution of preferences for privacy and data-driven quality improvements, and how these preference distributions are correlated.[93] Some consumers will find that the utility loss from privacy intrusions swamps any gains in customized content and advertisements, while others will find that the collection and use of their data is on net beneficial. The important takeaway is that the benefits and costs of data collection are inextricably intertwined, and consumer tastes for privacy, data-driven customization, and targeted ads are heterogeneous and correlated in potentially complex ways. Thus, unlike the case in which a firm profitably exercises market power by reducing quality while holding price constant, which unambiguously harms all consumers (albeit to different degrees), an increase in the collection and use of consumer data may be net harmful or net beneficial. Put another way, when a firm reduces privacy, the privacy-adjusted price for some consumers will rise, while it will fall for others. The net impact on consumer welfare depends on the relative size of these two effects.
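
To make the netting concrete, consider a minimal numerical sketch; the consumer shares and per-user figures below are invented solely for illustration and do not come from any source cited in this Chapter.

```python
# Hypothetical illustration of how a single reduction in privacy can cut both ways.
# Each group bears a per-user privacy cost and receives some data-funded benefit
# (customization, subsidized content, more relevant ads). All figures are invented.

consumer_groups = [
    # (share of users, privacy cost per user, data-funded benefit per user)
    (0.30, 8.0, 2.0),   # privacy-sensitive users: the harm swamps the benefit
    (0.70, 1.0, 3.0),   # privacy-tolerant users: the benefit exceeds the harm
]

net_effect = sum(share * (benefit - cost) for share, cost, benefit in consumer_groups)
print(f"Average change in consumer surplus per user: {net_effect:+.2f}")
# 0.30*(2-8) + 0.70*(3-1) = -1.80 + 1.40 = -0.40: net harmful with these numbers,
# but changing the shares or the per-user magnitudes flips the sign -- the welfare
# effect turns entirely on how preferences are distributed and correlated.
```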

D.  First Amendment

The First Amendment and the Sherman Act are no strangers. Courts have been called on repeatedly to determine how the Sherman Act’s prohibitions on certain conduct interact with the Constitution’s protection of the right to petition, assemble, and speak.

In what has come to be known as the Noerr-Pennington doctrine, the Supreme Court has interpreted the Sherman Act in light of the First Amendment right to petition in a way that sketches out a general rule that legitimate attempts to secure government action—legislative, regulatory, and judicial—are exempt from antitrust scrutiny.[94] In NAACP v. Claiborne Hardware Co., the Court also had occasion to consider the application of the antitrust laws in light of the First Amendment's protection of speech and association.[95] The plaintiffs alleged that a boycott of white businesses violated the Mississippi antitrust laws by diverting business from white-owned to black-owned stores.[96] The Supreme Court rejected this claim, holding that "[t]he right of the States to regulate economic activity could not justify a complete prohibition against a non-violent, politically motivated boycott . . . ."[97]

The First Amendment, however, will not protect speech when it involves an agreement among competitors to restrain competition. In FTC v. Superior Court Trial Lawyers Association,[98] the Supreme Court had no trouble finding that a concerted refusal by attorneys to take cases unless higher compensation was offered was not protected by the First Amendment.[99] The Court explained that the objective of the joint activity was not to urge a government-imposed restraint of trade or to vindicate a fundamental right, but rather “to increase the price that they would be paid for their services.”[100] Thus, the First Amendment will not subtract from antitrust’s power to prevent conduct that has a direct anticompetitive effect, even if that conduct happens to be speech. As the Court explained in Giboney v. Empire Storage & Ice Co.:

[I]t has never been deemed an abridgement of freedom of speech or press to make a course of conduct illegal merely because the conduct was in part initiated, evidenced, or carried out by means of language, either spoken, written, or printed. Such an expansive interpretation of the constitutional guaranties of speech and press would make it practically impossible ever to enforce laws against agreements in restraint of trade as well as many other agreements and conspiracies deemed injurious to society.[101]

Courts are “empowered to fashion appropriate restraints on [the defendant’s] future activities both to avoid a recurrence of the violation and to eliminate its consequences, but must be mindful of how a remedy ‘may impinge upon rights that would otherwise be constitutionally protected.’”[102] For example, in NSPE, the Supreme Court had no difficulty finding no First Amendment problems with an order against the professional association, which enjoined it from publishing ethical opinions that called into question competitive bidding.[103] At the same time, the D.C. Circuit in NSPE had modified the district court’s order in light of First Amendment concerns to strike a provision requiring the Society to affirmatively state that it did not find price competition to be unethical.[104]

If privacy were incorporated into antitrust, liability determinations and remedies that center on the collection and use of consumer data potentially could raise First Amendment issues. For example, an antitrust theory similar to that in the BKA's case against Facebook, which premises liability directly on the collection of data and crafts a remedy that limits such collection,[105] and an order preventing two merging firms from combining their data to target advertising, both appear to implicate First Amendment values.[106]

First, applying antitrust to the collection and use of consumer data may unduly burden a firm's commercial speech rights. Beginning with Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Inc.,[107] the Supreme Court has developed the "commercial speech doctrine," which has come to hold that restrictions on commercial speech will be upheld only if the law directly advances a substantial interest and is drawn to achieve that interest.[108] Merely finding an impact on commercial speech, however, will not automatically doom government action. For instance, privacy-based restrictions on commercial uses of consumer data have survived commercial speech inquiries. In Mainstream Marketing Services v. FTC,[109] the Tenth Circuit found that although the FTC's "Do Not Call" list clearly impinged on telemarketers' commercial speech rights, the asserted government interest in protecting consumers' privacy was substantial, and the regulatory program was sufficiently tailored toward its end.[110]

Second, using antitrust to modify a firm’s collection and use of consumer data could directly intrude on its First Amendment rights. That is, irrespective of its effect on commercial speech, courts may find a direct First Amendment interest in the collection and use of consumer data. Although some scholars have expressed skepticism that laws restricting the collection and use of consumer data should raise First Amendment concerns,[111] others have made persuasive arguments to the contrary.[112] For example, Jane Bambauer contends that if we accord Constitutional protection to the right to receive information, it should make little difference whether we receive it from a “speaker” or directly from our observations of the world.[113]

Sorrell v. IMS Health Inc. provides some support for this idea.[114] Sorrell involved a challenge to a Vermont statute that prohibited pharmacies, hospitals, and other health care entities from selling or disclosing prescriber-identifying information for marketing purposes, and prevented pharmaceutical companies from using this data for marketing purposes.[115] Although the Court held that heightened scrutiny was appropriate because the law imposed speaker- and content-based restrictions on pharmaceutical companies’ speech, it ultimately disposed of the case under a less stringent commercial speech inquiry.[116] Importantly for the application of antitrust to privacy, the Court held that the sale, transfer, or use of prescriber-identifying information was protected speech.[117]

Whether data collection and use are protected directly or enjoy protection due to their impact on commercial speech is germane to the level of protection they are afforded.[118] Of course, such a distinction may be meaningless if the Supreme Court continues to interpret the Sherman Act in light of the First Amendment, rather than directly impose First Amendment strictures on the Sherman Act. But the stronger the First Amendment value at stake, the more likely the Court would be willing to interpret the Sherman Act in a way that limits its application to privacy practices.[119]

E.  Subjectivity

In addition to raising serious conceptual issues, incorporating privacy as a dimension of competition would inject a large degree of additional subjectivity into antitrust analysis. When the law is fairly well established, one is left primarily to argue that the facts place the conduct under scrutiny on one side or the other of the line between legality and illegality. For example, consider the case of a naked horizontal agreement to fix prices or to allocate markets. If the facts are ambiguous, parties will try to convince a court that there was no agreement, or that if there were one, the agreement was reasonably ancillary to efficiency-enhancing conduct.[120] No reasonable legal argument, however, could be advanced that the alleged conduct, if shown, is not per se illegal.[121] Similarly, a party could not argue with a straight face that above-cost pricing by a small firm should be condemned. Save for some nuances around the margins, the discretion afforded courts and enforcers in these circumstances largely is confined to interpretation of the facts; the law is clear.

This circumstance changes, however, when one injects a subjective metric like privacy into the inquiry. For example, consider the BKA case against Facebook, in which liability turned on whether Facebook’s data collection was done without consent due to unequal bargaining power and whether Facebook took more data than it needed for “efficiency and advantages of personalized service.”[122] How does a firm predict how a court or an enforcer will answer questions like, was this data necessary to provide a service? Or, did users actually consent to these terms? Objective answers to these queries are elusive at best. Increased subjectivity means enhanced regulatory discretion, and hence less certainty over legal standards.[123]

The key cost associated with subjectivity is over-deterrence. It is a standard result in the economics of accidents literature that when parties can only estimate the legal standard with error, potential violators take too much precaution.[124] This is because the potential costs from taking too much care to avoid liability are generally far lower than the costs of being liable. What does this mean in the context of antitrust? To take too much care in antitrust means to avoid business practices where the line between legal and illegal behavior is blurred,[125] and the magnitude of these error costs depends on exactly which business practices firms are choosing to forego.[126] If privacy were to enter into antitrust considerations, there is a risk that firms would limit beneficial data collection and analysis to avoid the possibility of an antitrust suit.
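
A stylized sketch of this over-deterrence logic appears below. The standard, noise, liability, and cost-of-care parameters are hypothetical and serve only to illustrate why an uncertain legal line pushes firms to stop well short of it.

```python
# Stylized over-deterrence sketch: when a firm can only estimate where the legal
# line sits (a noisy standard), minimizing expected cost leads it to take more
# "care" (forgo more conduct) than the true standard requires.
# All parameter values are hypothetical and chosen only for illustration.
import math

def prob_liable(care, standard=1.0, noise_sd=0.3):
    """Probability of being held liable, decreasing in the level of care taken;
    noise_sd captures uncertainty about where the legal line actually lies."""
    z = (standard - care) / noise_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_cost(care, cost_of_care=1.0, liability=20.0):
    """Cost of the conduct forgone to take care, plus expected liability exposure."""
    return cost_of_care * care + liability * prob_liable(care)

# Search a grid of care levels for the privately optimal (expected-cost-minimizing) choice.
grid = [i / 100 for i in range(0, 301)]
best = min(grid, key=expected_cost)
print(f"True standard: 1.00; privately optimal care: {best:.2f}")
# Because liability dwarfs the cost of extra care, the firm "complies" well past the
# true standard -- here by forgoing conduct (e.g., data collection and analysis)
# that a clear rule would have permitted.
```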

A second cost of subjective liability standards is dissipative expenditure to obtain favorable government action. When government actors have the power to make decisions that affect the distribution of resources, private parties rationally spend money in an attempt to effect a favorable distribution.[127] Accordingly, as long as antitrust regulators and courts can prohibit certain business practices, companies rationally will spend money in an attempt to persuade them to redistribute wealth in their favor.[128] The inclusion of a subjective metric like privacy into antitrust analysis will further exacerbate this tendency by blurring the line between legal and illegal conduct.

Imagine a merger with no competitive overlaps (e.g., firms in two unrelated markets). Under standard merger analysis, the law is clear, and this transaction likely would be cleared without a second request. Once privacy enters the discussion, however, regulators have an additional hook with which to potentially scuttle the deal; with an enlarged regulatory field of play, rivals will find it worth their while to expend resources to convince regulators that privacy concerns should doom the transaction. At the same time, the merging parties will feel compelled to defend their transaction on privacy grounds. This is not to denigrate lobbying expenditures—they are an important and properly protected avenue for expressing views to a government that can take actions affecting property. Further, lobbying can provide government with improved information to make more efficient, welfare-increasing decisions. But to the extent that resources are spent merely to trigger a government action—here, either approval or denial of a merger—that transfers wealth from one party to another, they are dissipative.

Conclusion

Large online platforms that rely on consumer data play a central role in our economy and our lives, so it should come as no surprise that there is an increasing call to use antitrust to address perceived privacy issues. Although some have urged policy makers to pursue privacy as a direct goal of antitrust, absent major legislative changes or judicial willingness to cast aside decades of precedent, this is unlikely. Another approach is to incorporate privacy into antitrust by recognizing it as a dimension of non-price competition. On its face, there is nothing to foreclose such an approach. Given the lack of empirical support for privacy being a meaningful dimension of competition and the complexities involved in assessing the consumer welfare effects of increased data collection and use, however, its practical application would appear to be limited.

Footnotes

* Antonin Scalia Law School, Program on Economics & Privacy.

[1] European Union General Data Protection Regulation (GDPR): Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1.

[2] See Complaint, In re Google & DoubleClick, Inc., No. 71-0170 (F.T.C. April 20, 2007), http://epic.org/privacy/ftc/google/epic_complaint.pdf.

[3] See infra notes 7–12 and accompanying text.

[4] Daniel J. Solove, A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477 (2006); Alessandro Acquisti et al., The Economics of Privacy, 54 J. Econ. Lit. 442, 443 (2016) (common to most definitions of privacy is that they "pertain to the boundaries between the self and others"); Richard A. Posner, The Economics of Privacy, 71 Am. Econ. Rev. 405, 405 (1981) (privacy concerns "peace and quiet," "freedom and autonomy," and "concealment of information"). See also James C. Cooper, Separation Anxiety, 21 Va. J.L. & Tech. 1 (2017) (dividing the benefits of privacy into two components: strategic concealment of information to obtain better terms in a commercial relationship, and intrinsic benefits due to preferences for elements of privacy); Tesary Lin, Valuing Intrinsic and Instrumental Preferences for Privacy (Aug. 24, 2020) (empirically distinguishing between the intrinsic value of privacy and the instrumental value of privacy, which derives from concealing one's type to obtain better terms), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3406412.

[5] Allen P. Grunes & Maurice E. Stucke, No Mistake About It: The Important Role of Antitrust in Big Data, 14 Antitrust Source (2015); Anja Lambrecht & Catherine E. Tucker, Can Big Data Protect a Firm From Competition?, Competition Pol'y Int'l (2017); John M. Yun, The Role of Big Data in Antitrust, in The GAI Report on the Digital Economy (2020).

[6] See, e.g., Jian Jia et al., The Short-Run Effects of GDPR on Technology Venture Investment (Nat'l Bureau of Econ. Research Working Paper No. 25248, 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3278912; Noah Phillips, Comm'r, Fed. Trade Comm'n, Should We Block This Merger? Some Thoughts on Converging Antitrust and Privacy, Prepared Remarks at Stanford Law School (Jan. 30, 2020); Alex Marthews & Catherine Tucker, Privacy Policy and Competition, Brookings (2019), https://www.brookings.edu/wp-content/uploads/2019/12/ES-12.04.19-Marthews-Tucker.pdf.

[7] See Complaint, In re Google & DoubleClick, Inc., No. 71-0170 (F.T.C. April 20, 2007) [hereinafter "Google & DoubleClick Complaint"], http://epic.org/privacy/ftc/google/epic_complaint.pdf. These arguments are akin to those made by some that antitrust investigations involving media companies should consider not only the price that advertisers pay, but also how conduct affects such non-economic goals as "diversity of opinion." See, e.g., Maurice E. Stucke, Reconsidering Antitrust's Goals, 53 B.C. L. Rev. 551, 617 (2012); Maurice E. Stucke & Allen P. Grunes, Antitrust and the Marketplace of Ideas, 69 Antitrust L.J. 249 (2001).

[8] Google & DoubleClick Complaint, supra note 7, at ¶ 7.

[9] See id. at ¶¶ 54, 56-59.

[10] See EPIC/CDD Complaint, In re WhatsApp, Inc. (F.T.C. Mar. 6, 2014) at ¶ 1 [hereinafter “EPIC/CDD Compl.”], https://epic.org/privacy/ftc/whatsapp/WhatsApp-Complaint.pdf.

[11] EPIC/CDD Compl., supra note 10, at 14.

[12] See EPIC/CDD Supp. Compl., In re WhatsApp, Inc. (F.T.C. Mar. 21, 2014) at ¶ 37, https://epic.org/privacy/internet/ftc/whatsapp/WhatsApp-Nest-Supp.pdf.

[13] Nat’l Soc’y of Prof’l Eng’rs v. United States, 435 U.S. 679, 695 (1978).

[14] 435 U.S. 679 (1978).

[15] Id. at 690.

[16] See Maurice E. Stucke, Should We Be Concerned About Data-Opolies?, 2 Geo. L. Tech. Rev. 275, 283–85 (2018) (arguing that the Sherman Act was designed to consider broader issues than consumer welfare and should incorporate the harms posed by large tech platforms that use data, including privacy). See also Associated Press v. United States, 326 U.S. 1, 19-20 (1945) (suggesting that antitrust should consider how competition in media markets affects diversity of viewpoints).

[17] The EU appears similarly to cabin consideration of privacy impacts in competition law. See Samson Y. Esayas, Privacy-as-a-Quality Parameter: Some Reflections on the Skepticism, Stockholm Univ. Research Paper No. 43, at 3-4 (2017) (citing Case C-238/05, Asnef-Equifax v. Asociación de Usuarios de Servicios Bancarios (Ausbanc), [2006] ECR I-11125, para. 63; Case M.7217 Facebook/WhatsApp (2014)), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3075239.

[18] See Stucke, supra note 16, at 284 ("The currency for online platforms is in many cases, data."); John M. Newman, Antitrust in Zero-Price Markets: Foundations, 164 U. Pa. L. Rev. 149, 166 (consumer information is "surrendered (i.e., paid) by customers in exchange for the object sought"); id. at 167 ("customers frequently surrender information as payment in exchange" for online platform services).

[19] See, e.g., Stucke, supra note 16, at 287 (“[T]he collection of too much data can be seen as the equivalent of charging an excessive price.”); Howard A. Shelanski, Information, Innovation, and Competition Policy for the Internet, 161 U. Pa. L. Rev. 1663, 1689 (“One measure of a platform’s market power is the extent to which it can [use information in ways that benefit the firm but that consumers do not like] without some benefit to consumers that offsets their reduced privacy and still retain users.”).

[20] U.K. Competition & Markets Authority, Online Platforms and Digital Advertising ¶ 6.26 (2020) (hereinafter “CMA Report”); see also id. at ¶¶ 6.31-6.32 (“We would expect that platforms, notably social networks, that faced a competitive constraint would not be able to rely on ‘take-it-or-leave-it’ terms that mean consumers have to share their data to use the service, and have no real option to leave the service because their family and friends use it. Moreover, in a more competitive market, we would expect platforms to innovate and develop new ways to deliver advertising that meets the targeting needs of advertisers using less consumer data, thus protecting consumer privacy to a greater extent.”).

[21] See Statement of the Federal Trade Commission Concerning Google/DoubleClick, Fed. Trade Comm’n 2-3 (Dec. 20, 2007) (“we investigated the possibility that this transaction could adversely affect non-price attributes of competition, such as consumer privacy. We have concluded that the evidence does not support a conclusion that it would do so. We have therefore concluded that privacy considerations, as such, do not provide a basis to challenge this transaction.”). The Horizontal Merger Guidelines explicitly allow for the consideration of non-price elements of competition such as quality:

Enhanced market power can also be manifested in non-price terms and conditions that adversely affect customers, including reduced product quality, reduced product variety, reduced service, or diminished innovation. Dep’t of Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines at 2 (2010).

[22] See Commission decision of 6 December 2016, Case M.8124 – Microsoft/LinkedIn at ¶ 350 & n.330.

[23] See Commission decision of 3 October 2014, Case M.7212 – Facebook/WhatsApp at ¶ 87 (noting that privacy was an important area of functionality over which consumers choose messaging apps); see also id. at ¶102 (evaluating how closely the parties compete over privacy).

[24] See Stucke, supra note 16, at 282; Grunes & Stucke, supra note 5, at 3; Newman, supra note 18, at 166.

[25] See, e.g., Shelanski, supra note 19, at 1681 (discussing consumer information as a strategic asset).

[26] See Bundeskartellamt Initiates Proceeding Against Facebook on Suspicion of Having Abused its Market Power by Infringing Data Protection Rules, Bundeskartellamt (Mar. 2, 2016), https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2016/02_03_2016_Facebook.html.

[27] Bundeskartellamt 6th Decision Division, decision of 6 February 2019, ref. B6-22/16 – Facebook, para. 374-387.

[28] Bundeskartellamt 6th Decision Division, decision of 6 February 2019, ref. B6-22/16 – Facebook, para. 88.

[29] “[I]n view of Facebook’s dominant position in the market, users consent to Facebook’s terms and conditions for the sole purpose of concluding the contract, which cannot be assessed as their free consent within the meaning of the GDPR.” Facebook, Exploitative Business Terms Pursuant to Section 19(1) GWB for Inadequate Data Processing, Bundeskartellamt, at 10 (Feb. 15, 2019) (summarizing the BKA’s decision in its data processing case against Facebook, B6-22/16 – Facebook), https://www.bundeskartellamt.de/SharedDocs/Entscheidung/EN/Fallberichte/Missbrauchsaufsicht/2019/B6-22-16.pdf?__blob=publicationFile&v=3.

[30] Id. (“It cannot be substantiated that the service has to process data to the extent that has been determined [in] the course of the examination for reasons of efficiency and advantages of a personalized service.”).

[31] Id. at 11. The BKA notes that “it is sufficient to determine that [the GDPR violation and market dominance] are linked by causality which is either based on normative aspects or outcome.” Id. “Normative causality” is satisfied because Facebook’s dominant position “is clearly linked” to consumers’ restricted right of “self-determination.” Id. “Outcome causality” is satisfied through indirect network effects, through which increased access to data provides Facebook a “competitive advantage” and erects entry barriers. Id.

[32] Id. at 11.

[33] Id.

[34] Ginger Zhe Jin & Andrew Stivers, Protecting Consumers in Privacy and Data Security: A Perspective of Information Economics (draft at 2) (2017). As discussed in Section II.C, infra, if other dimensions are not held constant, competitive effects are not as clear cut.

[35] See Shelanski, supra note 19, at 1691 (“[if] competition promotes improved services and privacy policies, anticompetitive conduct diminishes both of these consumer benefits.”).

[36] See Terrell McSweeny, Roundtable: Discussing the Big Picture on Big Data, Antitrust Source (Dec. 2018) (“It just can’t be assumed that competition on privacy is actually occurring. There must be some evidence of it. . . You can’t create competition and privacy features and services where none exist, even if you think it would be good to have it . . . .”).

[37] See CMA Report, supra note 20, at ¶ 4.47 (“in surveys, consumers will report that they are very concerned about their privacy but they then behave in a way that contradicts this clearly stated preference by, e.g., not taking advantage of privacy controls that are available to them.”); Acquisti et al., supra note 4, at 476. Experiments have attempted to see if education will reduce the gap between revealed preference and stated preference and have found no impact. See, e.g., Lior Strahilevitz & Matthew B. Kugler, Is Privacy Policy Language Irrelevant to Consumers?, 45 J. Legal Stud. 69 (2016).

[38] Pew Research Ctr., Americans and Privacy: Concerned, Confused, and Feeling a Lack of Control over Personal Information (Nov. 15, 2019).

[39] Garrett A. Johnson et al., Consumer Privacy Choice in Online Advertising: Who Opts Out and at What Cost to Industry?, 39 Mktg. Sci. 33, 40 (2020) (finding that .23% of display advertising impressions are served to consumers who have opted out of online tracking through the AdChoices program).

[40] Acquisti et al., supra note 4, at 476.

[41] See id. at 479; Athey et al., The Digital Privacy Paradox: Small Money, Small Costs, Small Talk (Nat’l Bureau of Econ. Research Working Paper 2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2916489.

[42] See Marthews & Tucker, supra note 6, at 5–6. See also Is Microsoft’s Scroogled Campaign Working? Not if Gaining Consumers is the Goal, Marketing Land (Oct. 16, 2013), https://marketingland.com/microsoft-scroogled-campaign-61887.

[43] See Acquisti et al., supra note 4, at 477-78; CMA Report, supra note 20, at ¶¶ 4.49-4.56.

[44] See Marthews & Tucker, supra note 6, at 6 (“If consumers do not make choices which accord with their stated privacy preferences and instead choose small convenience benefits or monetary benefits over privacy, then a firm that offers superior privacy protections is unlikely to attract many consumers by virtue of its superior privacy protections.”).

[45] See Joseph Farrell, Can Privacy Be Just Another Good?, 10 J. Telecom & High Tech L. 251 (2012).

[46] See id. at 255. Farrell models the revenue stream from increased access to consumer information as an equivalent reduction in marginal cost. If the marginal cost falls by more than the reduction in the value that consumers place on the product (due to reduced privacy), prices fall and output increases on net, increasing welfare.
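A minimal way to formalize this tradeoff, under the added assumption of full pass-through of cost changes to price (the notation here is illustrative and is not Farrell’s): let δ denote the per-unit revenue the firm earns from the data, modeled as a reduction in marginal cost, and let v denote the per-unit disutility consumers attach to the associated loss of privacy. Then

\[ p' = p - \delta, \qquad p' + v = p - (\delta - v), \]

so the privacy-adjusted price falls, and output and welfare rise, whenever \( \delta > v \); when \( \delta < v \), the reduction in privacy lowers welfare on net.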

[47] Id. at 257.

[48] Fed. Trade Comm’n, Early Termination Notices, 20140457: Nest Labs, Inc., and Google, Inc. (Feb. 4, 2014), http://www.ftc.gov/enforcement/premerger-notification-program/early-termination-notices/20140457.

[49] Electronic Privacy Info. Ctr., Letter to the Fed. Trade Comm’n (Feb. 20, 2019), https://epic.org/privacy/ftc/google/EPIC-FTC-Nest-Google.pdf.

[50] See Shannon Liao, Ireland is Questioning Facebook’s Plan to Merge Messenger, Instagram, and WhatsApp, The Verge (quoting Rep. Ro Khanna (D-CA)) (“Imagine how different the world would be if Facebook had to compete with Instagram and WhatsApp. That would have encouraged real competition that would have promoted privacy and benefited consumers.”); Maurice E. Stucke & Allen P. Grunes, Debunking the Myths Over Big Data and Antitrust, CPI Antitrust Chronicle 5 (2015) (“Data-driven mergers, like Facebook’s acquisition of WhatsApp, for example, can potentially lessen non-price competition in terms of the array of privacy protections offered to consumers.”).

[51] See Statement of the Federal Trade Commission Concerning Google/DoubleClick, FTC File No. 071-0170 (Dec. 20, 2007); Commission decision of 3 October 2014, Case M.7212 – Facebook/WhatsApp; Commission decision of 6 December 2016, Case M.8124 – Microsoft/LinkedIn.

[52] It is also possible that in a vertical merger the parties would not be direct competitors, but upstream or downstream foreclosure effects from the merger could limit competition over privacy between one of the merging parties and non-merging firms that compete in the relevant market. Importantly, any impact on privacy must flow from an impact on competition to trigger antitrust concerns.

[53] See Barry Nigro, Roundtable: Discussing the Big Picture on Big Data, Antitrust Source 18 (Dec. 2018) (“If there isn’t relatively strong evidence in the documents and the testimony that it’s an issue, you’re likely not going to see a case. The question is whether there’s evidence that it’s a meaningful dimension of competition between the firms, whether, as a result, they’re closer competitors than the other firms in the market and the competitive significance of it. I don’t think there are likely to be many cases like that, but who knows?”).

[54] Palmer v. BRG of Ga., Inc., 498 U.S. 46 (1990).

[55] See Nat’l Soc’y of Prof’l Eng’rs v. United States, 435 U.S. 679, 695-96 (1978) (the reasonableness of the object of an illegal agreement among competitors is not a defense in an antitrust case). See also Thomas Krattenmaker, Per Se Violations in Antitrust: Confusing Offense with Defense, 77 Geo. L.J. 165 (1988).

[56] See Broadcast Music, Inc. v. Columbia Broadcasting System, Inc., 441 U.S. 1 (1979); see also California Dental Assn. v. FTC, 526 U.S. 756 (1999).

[57] Note that this was also the case for mergers.

[58] See Verizon Commc’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398, 407 (2004) (“The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system.”).

[59] Trinko, 540 U.S. at 407.

[60] See id. at 415 (explaining that one reason antitrust generally eschews forced sharing is “because an antitrust court is unlikely to be an effective day-to-day enforcer of these detailed sharing obligations.”). See generally Michael R. Baye & Joshua D. Wright, Is Antitrust too Complicated for Generalist Judges?, 54 J. L. & Econ. 1 (2011).

[61] United States v. Grinnell Corp., 384 U.S. 563, 571 (1966).

[62] United States v. Microsoft Corp., 253 F.3d 34, 58-59 (D.C. Cir. 2001).

[63] For example, although Stucke does not point to conduct by digital platforms, he suggests that harms related to data collection—“looking beyond the ‘free’ price”—lead to the identification of several “significant potential antitrust harms.” Stucke, supra note 16, at 284–85. See also id. at 287 (arguing that “the collection of too much data can be the equivalent of charging an excessive price”); id. at 286 (“A data-opolist . . . has the incentive to reduce its privacy protections below competitive levels and collect personal data above competitive levels.”); id. at 294 (“the personal data collected may be worth far more than the cost of providing the ‘free’ service”).

[64] See, e.g., C. Scott Hemphill & Tim Wu, Nascent Competitors, __ U. Pa. L. Rev. __ (forthcoming 2020) (discussing the use of Section 2 of the Sherman Act against dominant platforms’ acquisitions of potential competitors).

[65] Dina Srinivasan, The Antitrust Case Against Facebook: A Monopolist’s Journey Towards Pervasive Surveillance in Spite of Consumers’ Preference for Privacy, 16 Berkeley Bus. L.J. 39, 44–45 (2019) (the competitive market “enjoined Facebook’s ability to initiate commercial surveillance,” while “the exit of competitors [allowed Facebook] to add the condition of surveillance to its mandatory terms.”).

[66] Id. at 90 (“Facebook’s course of misleading conduct resulted in precisely the type of harm that antitrust law concerns itself with—the exit of rivals and the subsequent extraction of monopoly rents in contravention to consumer welfare.”).

[67] The consumer harm is the difference between the price and the marginal value the fooled consumers would place on the dominant firm’s product absent the deception. See James C. Cooper & Bruce Kobayashi, Equitable Monetary Relief Under the FTC Act: Room for a Marginal Improvement, __ Antitrust L.J. __ (forthcoming 2020).

[68] It is important to note that consumer and firm harm are not distinct. They are merely different ways of measuring the same harm. Consumer protection actions would recover revenue consumers spent on the fraudulent product, while a Lanham Act action would recover the portion of the revenue spent on the defendant’s fraudulent product that was diverted from the plaintiff’s firm.

[69] More formally, deception benefits the defrauding firm by shifting its demand curve out, increasing revenue through increased output and, if the firm enjoys market power, higher prices. Much of this increased revenue will be due to consumers diverted from their preferred firm, although the deception may have caused new consumers to enter the market. Anticompetitive effects would occur only if the deception ultimately limited the rival firms’ ability to constrain the dominant firm’s pricing. This effect would manifest in a lower price elasticity of demand, allowing the dominant firm to charge a higher price (or offer a lower level of quality) for any given level of demand, resulting in lower market-wide output, not merely lower output for rival firms.
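A compact way to see the elasticity point (the Lerner-index framing and the numbers are illustrative additions, not drawn from the sources cited in this Chapter): a profit-maximizing firm’s markup satisfies

\[ \frac{p - c}{p} = \frac{1}{|\varepsilon|}, \]

so if deception that weakens rivals makes the dominant firm’s residual demand less elastic (say, from \( |\varepsilon| = 3 \) to \( |\varepsilon| = 2 \)), the sustainable margin rises from one-third to one-half of price. That market-wide price and output effect, rather than harm only to the deceived consumers or the diverted rivals, is what would distinguish an antitrust injury.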

[70] See Areeda & Hovenkamp, Antitrust Law ¶782b (4th ed. 2020).

[71] Id. Areeda & Hovenkamp suggest that courts treat deception as presumptively de minimis, a presumption that plaintiffs could overcome with evidence that the representations were (1) clearly false, (2) clearly material, (3) clearly likely to induce reasonable reliance, (4) made to buyers without knowledge of the subject matter, (5) continued for prolonged periods, and (6) not readily susceptible of neutralization or other offset by rivals. Id.

[72] See Hillary Greene & Dennis A. Yao, Antitrust as Speech Control, 60 Wm. & Mary L. Rev. 1215, 1244 (2019) (“The existence of [the Lanham Act, the FTC Act, and related state consumer protection statutes] may partially explain the reluctance of the courts to recognize disparagement as an independent antitrust claim.”). See Areeda & Hovenkamp, supra note 70.

[73] See Areeda & Hovenkamp, supra note 70.

[74] See id.

[75] 15 U.S.C. § 45(a)(1) (2018).

[76] Trinko, 540 U.S. at 412.

[77] 501 F.3d 303 (3d Cir. 2007).

[78] 486 U.S. 492, 499 (1988).

[79] 382 U.S. 172 (1965). The FTC also brought antitrust actions against pharmaceutical companies for lying to the FDA about patents that covered their drugs, which triggered a regulatory barrier to generic entry. See, e.g., Decision and Order, In re Bristol-Myers Squibb Co., F.T.C. No. C-4076 (Apr. 14, 2003), https://www.ftc.gov/sites/default/files/documents/cases/2003/04/bristolmyerssquibbdo.pdf.

[80] Complaint, In re Intel, FTC Docket No. 9341, at ¶¶ 58-67. Notably, the FTC complaint also charged that this conduct was an unfair and deceptive act or practice under the FTC’s consumer protection jurisdiction and constituted an unfair method of competition under the FTC’s antitrust jurisdiction, which is broader than Section 2.

[81] United States v. Microsoft, Corp., 253 F.3d 34, 76-77 (D.C. Cir. 2001).

[82] In addition, deception played a relatively small role in the overall exclusionary conduct at issue in Intel and Microsoft.

[83] Microsoft, 253 F.3d at 79.

[84] A similar showing would be necessary to make out an attempted monopolization claim, which requires, in addition to exclusionary conduct and specific intent to monopolize, a “dangerous probability of achieving monopoly power.” Spectrum Sports, Inc. v. McQuillan, 506 U.S. 447, 456 (1993).

[85] See Section II.C., infra, for a discussion of how increased collection and use of data are intertwined with consumer benefits.

[86] Some have suggested that markets subject to network effects may be more vulnerable to deception as an anticompetitive strategy. See, e.g., Srinivasan, supra note 65, at 91–92. A plaintiff would bear the burden of providing evidence that the market is characterized by sufficiently strong network effects that the measured impact from the deception was likely to tip the market in the dominant firm’s favor and make a “durable contribution to the defendant’s market power.” Areeda & Hovenkamp, Antitrust Law, ¶782.

[87] We will assume that, even though the lower quality will result in lower demand, the car maker’s market power makes this reduction in quality a profit-maximizing decision.

[88] See, e.g., James C. Cooper, Privacy and Antitrust: Underpants Gnomes, The First Amendment, and Subjectivity, 20 Geo. Mason L. Rev. 1129 (2013).

[89] Shelanski identifies three possible ways that consumer data provides value to a firm:

First, customer information can be an input of production that enables a business to improve its service offerings and increase its returns. Second, customer data can be a strategic asset that allows a platform to maintain a lead over rivals and to limit entry into its market. Third, customer information can be a valuable commodity, which the firm could sell to other businesses that cannot collect the data themselves.

Shelanski, supra note 19, at 1679. As discussed in the text, it is easy to see how the first and third uses of data benefit consumers. Using data as a strategic asset, however, is just a restatement of the first benefit to consumers, cast as a competitive advantage to the firm collecting it. As Shelanski puts it, “[t]his larger information set might enable the leading firm to make information-dependent product improvements that smaller rivals will be unable to replicate.” Id. at 1681. Note that the competitive advantage to the collecting firm comes from the ability to create a better product through better access to data—a clear benefit to consumers.

[90] See, e.g., CMA Report, supra note 20, at ¶¶4.28-4.29.

[91] See Garrett A. Johnson et al., Consumer Privacy Choice in Online Advertising: Who Opts Out and at What Cost to Industry?, 39 Mktg. Sci. 33 (2020); Avi Goldfarb & Catherine E. Tucker, Privacy Regulation and Online Advertising, 57 Mgmt. Sci. 57, 68 (2011); Howard Beales & Jeffrey A. Eisenach, Putting Consumers First: A Functionality Based Approach to Online Advertising (Navigant Econ. Working Paper 2013), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2211540; Shelanski, supra note 19, at 1680. The difference in value between contextual ads and IBA is likely to be larger for general audience websites (e.g., CNN.com) than niche websites (e.g., Tennis.com), as visiting a general audience website provides far less information about a consumer’s interests. For example, an online seller of tennis equipment can infer that a visitor to Tennis.com is likely to be interested in its products, whereas absent IBA, it has no ability to determine whether a visitor to CNN.com is a tennis player. In addition to identifying consumers who are likely to be interested in a product, IBA also allows advertisers to measure the effectiveness of their ad campaigns by, for example, tracking conversions or other actions in the online or offline world. See CMA Report, supra note 20, at ¶ 5.61 (noting that Google’s first-party data gives it an advantage in conversion attributions); see also id. at ¶ 6.18 (noting that Google’s and Facebook’s higher revenue per user can be attributed in part to better targeting of advertisements toward relevant consumers and the ability to monitor consumers’ subsequent actions).

[92] The recent FTC case against YouTube for violations of the Children’s Online Privacy Protection Act (COPPA) highlights this tradeoff. To comply with the consent decree, YouTube has prohibited the use of IBA for any content directed at children. Consent Decree at 10, FTC v. Google, Inc., Case No. 1:19-cv-02642 (D.D.C. 2019). This move has created an uproar on the part of content providers, who are concerned about the inability to generate sufficient revenues to continue creating content. See, e.g., Julia Alexander, YouTube Officially Rolls Out Changes to Children’s Content Following FTC Settlement, The Verge (Jan. 6, 2020) (“YouTube has said kid-focused channels will see ‘a significant business impact’ due to reduced ad revenue”), at https://www.theverge.com/2020/1/6/21051465/youtube-coppa-children-content-gaming-toys-monetization-ads; Julia Alexander, YouTubers Say Kids’ Content Changes Could Ruin Careers, The Verge (Sept. 5, 2019), at https://www.theverge.com/2019/9/5/20849752/youtube-creators-ftc-fine-settlement-family-friendly-content-gaming-minecraft-roblox. There is also some evidence that the reduction in revenue to creators from ad blocking technology has had a negative impact on online content. See Benjamin Schiller et al., The Effect of Ad Blocking on Website Traffic and Quality, 49 RAND J. Econ. 43 (2018) (finding evidence that the use of ad blockers reduces website quality by reducing revenue available for content creation).

[93] As O’Brien & Smith illustrate, if some consumers find reductions in privacy accompanied by concomitant product quality increases on net beneficial, changes in privacy lead to shifts and rotations in demand. The direction and size of the rotation (clockwise or counterclockwise), and hence the net impact on welfare, depend on the correlation of the distributions of preferences for privacy and quality improvements. See Daniel P. O’Brien & Douglas Smith, Privacy in Online Markets: A Welfare Analysis of Demand Rotations (Fed. Trade Comm’n Bureau of Economics Working Paper 2014), https://www.ftc.gov/system/files/documents/reports/privacy-online-markets-welfare-analysis-demand-rotations/wp323.pdf.

[94] See United Mine Workers of Am. v. Pennington, 381 U.S. 657, 670 (1965); E. R.R. Presidents’ Conference v. Noerr Motor Freight, Inc., 365 U.S. 127, 135 (1961). See also Allied Tube & Conduit Corp. v. Indian Head, Inc., 486 U.S. 492, 499 (1988).

[95] 458 U.S. 886 (1982).

[96] Id. at 892.

[97] Id. at 914.

[98] 493 U.S. 411 (1990).

[99] Id. at 426–28.

[100] Id. at 427.

[101] See Giboney v. Empire Storage & Ice Co., 336 U.S. 490, 502 (1949) (citation omitted).

[102] Id. at 697–98. See also Hillary Greene & Dennis A. Yao, Antitrust as Speech Control, 60 Wm. & Mary L. Rev. 1215, 1223 (2019) (“Assuming a finding of antitrust liability, if the remedy to the anticompetitive conduct involves a restriction on speech, this restriction must be sufficiently tailored to meet the appropriate level of scrutiny.”).

[103] See Nat’l Soc’y of Prof’l Eng’rs v. United States, 435 U.S. 679, 696–97 (1978). See also FTC v. Superior Court Trial Lawyers Association, 493 U.S. 411 (1990).

[104] Nat’l Soc’y of Prof’l Eng’rs v. United States, 555 F.2d 978, 984 (D.C. Cir. 1977) (“To force an association of individuals to express as its own opinion judicially dictated ideas is to encroach on that sphere of free thought and expression protected by the First Amendment.”). See also ES Dev., Inc. v. RWM Enter., Inc., 939 F.2d 547 (8th Cir. 1991) (holding that an order in a Sherman §1 conspiracy case that prevented defendants from making certain communications for “the indefinite future” was an “inappropriate . . . restriction upon appellants’ individual exercise of their constitutionally protected rights of commercial speech”).

[105] See Bundeskartellamt 6th Decision Division, decision of 6 February 2019, ref. B6-22/16 – Facebook.

[106] See Commission decision of 3 October 2014, Case M.7212 – Facebook/WhatsApp.

[107] 425 U.S. 748 (1976).

[108] See Bd. of Trs. of the State Univ. of N.Y. v. Fox, 492 U.S. 469, 480 (1989); Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n of N.Y., 447 U.S. 557, 566 (1980). The trend, however, has been for greater scrutiny to be applied under the commercial speech inquiry. See Neil Richards, Reconciling Data Privacy and the First Amendment, 52 UCLA L. Rev. 1149, 1207 (2005).

[109] 358 F.3d 1228 (10th Cir. 2004).

[110] Id. at 1250–51 (“Do not call” regulation survives intermediate scrutiny); see also Trans Union LLC v. F.T.C., 295 F.3d 42, 46, 53 (D.C. Cir. 2002) (holding that an FTC regulation under the Gramm-Leach-Bliley Act restricting the ability of financial institutions to disclose private information to third parties survives intermediate scrutiny); Trans Union Corp. v. F.T.C., 245 F.3d 809, 818–19 (D.C. Cir. 2001) (holding that FTC rules restricting the use of credit reports under the Fair Credit Reporting Act survive intermediate scrutiny).

[111] See Richards, supra note 108, at 1182–90 (detailing myriad rules that affect the use and collection of data that are treated as laws restraining conduct, not speech); see also Ashutosh Bhagwat, Sorrell v. IMS Health: Details, Detailing, and the Death of Privacy, 36 Vt. L. Rev. 855 (2012); Shubha Ghosh, Informing and Reforming the Marketplace of Ideas: The Public-Private Model for Data Production and the First Amendment, 2012 Utah L. Rev. 653, 705–06 (2012).

[112] See Jane Yakowitz Bambauer, Is Data Speech?, 66 Stan. L. Rev. 58, 73 (2013); Fred H. Cate & Robert Litan, Constitutional Issues in Information Privacy, 9 Mich. Telecomm. & Tech. L. Rev. 35, 49, 57 (2002); Eugene Volokh, Freedom of Speech and Information Privacy: The Troubling Implications of a Right to Stop People from Speaking About You, 52 Stan. L. Rev. 1049, 1051–52 (1999).

[113] Bambauer, supra note 112, at 23.

[114] 131 S. Ct. 2653 (2011).

[115] Id. at 2662–63.

[116] Id. at 2667.

[117] Id. at 2666. The Court spoke of the “rule that information is speech,” and explained that “[t]his Court has held that the creation and dissemination of information are speech within the meaning of the First Amendment . . . Facts, after all, are the beginning point for much of the speech that is most essential to advance human knowledge and to conduct human affairs.” Id. at 2667 (emphasis added) (citation omitted). Professors Bambauer and Bhagwat reach similar conclusions. See Bambauer, supra note 112, at 79; Bhagwat, supra note 111, at 862.

[118] Bambauer contends that although collecting consumer data is done by a business and often linked to advertising, because the right to collect data is so intertwined with the right to speak it should not necessarily be subject to lower levels of scrutiny associated with commercial speech or speech involving a private, rather than public, concern. See Bambauer, supra note 112, at 101–05.

[119] See, e.g., E. R.R. Presidents Conference v. Noerr Motor Freight, Inc., 365 U.S. 127, 137 (1961); Sosa v. DIRECTV, Inc., 437 F.3d 923, 931 (9th Cir. 2006) (analyzing the Supreme Court’s approach to the First Amendment and the Sherman Act in Noerr). See F.T.C. v. Superior Court Trial Lawyers Ass’n, 493 U.S. 411, 424 (stating that the Court in Noerr was “[i]nterpreting the Sherman Act in the light of the First Amendment’s Petition Clause”); see also Prof’l Real Estate Investors, Inc. v. Columbia Pictures Indus., Inc., 508 U.S. 49, 56 (arguing that the Court in Noerr interpreted the Sherman Act, in part, to avoid imputing “‘to Congress an intent to invade ‘the First Amendment right to petition.’”). The recent application of Noerr principles to the National Labor Relations Act (“NLRA”) provides additional insight into the role that the First Amendment plays in defining the scope of Noerr protection. See BE & K Constr. Co. v. NLRB, 536 U.S. 516, 525 (2002). As in Noerr, the Court in BE & K turned to statutory construction to avoid the constitutional question, holding that the NLRB’s standard was invalid because there was nothing in the relevant statutory text to suggest that it “must be read to reach all reasonably based but unsuccessful suits filed with a retaliatory purpose.” Id. at 536. In light of the BE & K decision, the Ninth Circuit recently concluded that the Noerr doctrine “stands for a generic rule of statutory construction, applicable to any statutory interpretation that could implicate the rights protected by the Petition Clause . . . Under the Noerr-Pennington rule of statutory construction, we must construe federal statutes so as to avoid burdening conduct that implicates the protections afforded by the Petition Clause unless the statute clearly provides otherwise.” Sosa, 437 F.3d at 931 (citations omitted).

[120] See, e.g., Polygram Holding, Inc. v. FTC, 416 F.3d 29, 32–33 (D.C. Cir. 2005).

[121] See Palmer v. BRG of Ga., Inc., 498 U.S. 46, 49-50 (1990) (per curiam).

[122] Bundeskartellamt 6th Decision Division, decision of 6 February 2019, ref. B6-22/16 – Facebook.

[123] See Maureen K. Ohlhausen & Alexander P. Okuliar, Competition, Consumer Protection, and the Right [Approach] to Privacy, 80 Antitrust L.J. 121, 151–52 (2015).

[124] See Steven Shavell, Foundations of Economic Analysis of Law 224-27 (2004). See also Richard Craswell & John E. Calfee, Deterrence and Uncertain Legal Standards, 2 J. L. Econ. & Org. 279 (1986).

[125] David S. Evans & A. Jorge Padilla, Designing Antitrust Rules for Assessing Unilateral Practices: A Neo-Chicago Approach, 72 U. Chi. L. Rev. 73, 73-74 (2005).

[126] See id. at 84–85.

[127] Resource distribution can be accomplished through both rent extraction and rent creation. See Fred S. McChesney, Money for Nothing: Politicians, Rent Extraction, and Political Extortion 2 (1997).

[128] See, e.g., Gordon Crovitz, Google’s $25 Million Bargain, Wall St. J. (Jan. 14, 2013); Gordon Crovitz, Silicon Valley’s ‘Suicide Impulse,’ Wall St. J. (Jan. 28, 2013); Tony Romm, How Google Beat the Feds, Politico.com (Jan. 3, 2013, 5:20 PM). This is why the “rectangle” costs associated with government-created market distortions are often thought to be larger than the “triangle,” or deadweight loss, costs. See McChesney, supra note 127, at 12–13.
