Error Costs in Digital Markets


ABSTRACT

Legal decision-making and enforcement under uncertainty are always difficult and always potentially costly. The risk of error is always present given the limits of knowledge, but it is magnified by the precedential nature of judicial decisions: an erroneous outcome affects not only the parties to a particular case, but also all subsequent economic actors operating in “the shadow of the law.” The inherent uncertainty in judicial decision-making is further exacerbated in the antitrust context where liability turns on the difficult-to-discern economic effects of challenged conduct. And this difficulty is still further magnified when antitrust decisions are made in innovative, fast-moving, poorly-understood, or novel market settings—attributes that aptly describe today’s digital economy.

Introduction[1]

Legal decision-making and enforcement under uncertainty are always difficult and always potentially costly. The risk of error is always present given the limits of knowledge, but it is magnified by the precedential nature of judicial decisions: an erroneous outcome affects not only the parties to a particular case, but also all subsequent economic actors operating in “the shadow of the law.”[2] The inherent uncertainty in judicial decision-making is further exacerbated in the antitrust context where liability turns on the difficult-to-discern economic effects of challenged conduct. And this difficulty is still further magnified when antitrust decisions are made in innovative, fast-moving, poorly-understood, or novel market settings—attributes that aptly describe today’s digital economy.

Rational decision-makers will undertake enforcement and adjudication decisions with an eye toward maximizing social welfare (or, at the very least, ensuring that nominal benefits outweigh costs).[3] But “[i]n many contexts, we simply do not know what the consequences of our choices will be. Smart people can make guesses based on the best science, data, and models, but they cannot eliminate the uncertainty.”[4] Because uncertainty is pervasive, we have developed certain heuristics to help mitigate both the direct and indirect costs of decision-making under uncertainty, in order to increase the likelihood of reaching enforcement and judicial decisions that are on net beneficial for society. One of these is the error-cost framework.

In simple terms, the objective of the error-cost framework is to ensure that regulatory rules, enforcement decisions, and judicial outcomes minimize the expected cost of (1) erroneous condemnation and deterrence of beneficial conduct (“false positives,” or “Type I errors”); (2) erroneous allowance and under-deterrence of harmful conduct (“false negatives,” or “Type II errors”); and (3) the costs of administering the system (including the cost of making and enforcing rules and judicial decisions, the costs of obtaining and evaluating information and evidence relevant to decision-making, and the costs of compliance).

In the antitrust context, a further premise of the error-cost approach is commonly (although not uncontroversially[5]) identified: the assumption that, all else equal, Type I errors are relatively more costly than Type II errors. “Mistaken inferences and the resulting false condemnations ‘are especially costly, because they chill the very conduct the antitrust laws are designed to protect.’”[6] Thus the error-cost approach in antitrust typically takes on a more normative objective: a heightened concern with avoiding the over-deterrence of procompetitive activity through the erroneous condemnation of beneficial conduct in precedent-setting judicial decisions. Various aspects of antitrust doctrine—ranging from antitrust pleading standards to the market definition exercise to the assignment of evidentiary burdens—have evolved in significant part to constrain the discretion of judges (and thus to limit the incentives of antitrust enforcers) to condemn uncertain, unfamiliar, or nonstandard conduct, lest “uncertain” be erroneously identified as “anticompetitive.”

The concern with avoiding Type I errors is even more significant in the enforcement of antitrust in the digital economy because the “twin problems of likelihood and costs of erroneous antitrust enforcement are magnified in the face of innovation.”[7] Because erroneous interventions against innovation and the business models used to deploy it threaten to deter subsequent innovation and the deployment of innovation in novel settings, both the likelihood and social cost of false positives are increased in digital and other innovative markets. Thus the avoidance of error costs in these markets also raises the related question of the proper implementation of dynamic analysis in antitrust.[8]

I. The Error-Cost Framework

A.  Uncertainty, Ignorance, and Evolution

Uncertainty in the context of statistical decision theory[9] (from which error-cost analysis is derived) implies more than merely risk.[10] Risk implies that the potential outcomes are known and that each occurs with a known probability. Maximizing benefits (minimizing error) under these conditions is fairly straightforward, and readily reducible to a mathematical formula.
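In that simple case, the formula is nothing more than expected-loss minimization. A minimal sketch (the notation is assumed for illustration, not drawn from the cited sources): if an action a can produce outcomes s, each occurring with known probability p_s and imposing known loss L(a, s), the decision-maker chooses the action that solves

\min_{a} E[L(a)] = \min_{a} \sum_{s} p_{s} \, L(a, s).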

Under uncertainty, the possible consequences (costs) of a decision are known, but not the likelihood of any given outcome. This presents a much more difficult maximization problem for which judgment (flawed as it is) is required. It is also, unfortunately, far more common, as probabilities are rarely known with any degree of precision.

More troublingly, however, a disturbingly large share of the time in judicial decision-making we know neither the probabilities nor the consequences of decisions. “In such cases the uncertainty . . . is even more daunting than uncertainty in decision theory’s technical sense; it is in fact a deep form of ignorance.”[11]

Antitrust decision-making is most commonly undertaken in this state of ignorance. The stark reality for most antitrust adjudication is that the same conduct that could be beneficial in one context could be harmful in another.[12]

But it is virtually never known what the likelihood of either outcome is in the case of novel business conduct (i.e., the sort that makes its way to litigation).[13] To make matters worse, the magnitudes of the potential harm (if anticompetitive) and benefits (if procompetitive) are also essentially never knowable: in the best-case scenario the estimation of effects must be cabined to render evaluation remotely tractable, and inevitably static estimates will miss broader (and potentially more significant) dynamic effects.[14]

A further complication is the precedential nature of judicial decisions:

[I]n contrast to private decision makers, courts also have concerns about optimal deterrence. That is because a decision by a court will not only bind the litigation parties, but will also serve as precedent by which future conduct will be judged. In antitrust, for example, over-deterrence might involve deterring welfare enhancing cooperation or innovations by firms that fear a finding of liability even when their conduct does not reduce consumer welfare.[15]

The consequences of erroneous decision-making are thus considerably more significant than even the already curtailed estimates in any given case. As the Microsoft court put it:

Whether any particular act of a monopolist is exclusionary, rather than merely a form of vigorous competition, can be difficult to discern: the means of illicit exclusion, like the means of legitimate competition, are myriad. The challenge for an antitrust court lies in stating a general rule for distinguishing between exclusionary acts, which reduce social welfare, and competitive acts, which increase it.[16]

The case-by-case, common-law approach to antitrust is itself a form of error-cost avoidance. It is well known that specification of detailed, ex ante rules will ensure costly, erroneous outcomes where conduct is not clearly harmful, our understanding of its effects is indeterminate, or technological change alters either the effects of certain conduct or our understanding of it: “An important cost of legal regulation by means of rules is thus the cost of altering rules to keep pace with economic and technological change.”[17]

By contrast,

[o]bsolescence is not so serious a problem with regulation by standard. Standards are relatively unaffected by changes over time in the circumstances in which they are applied, since a standard does not specify the circumstances relevant to decision or the weight of each circumstance but merely indicates the kinds of circumstance that are relevant.[18]

Despite occasional assertions to the contrary, it is clear that the antitrust laws were drafted as imprecise standards, necessarily leaving to the courts the job of more detailed rulemaking. In this, they reflect a common and well-understood legislative choice:

The legislature’s choice whether to enact a standard or a set of precise rules is implicitly also a choice between legislative and judicial rulemaking. A general legislative standard creates a demand for specification. This demand is brought to bear on the courts through the litigation process and they respond by creating rules particularizing the legislative standard.[19]

The cost of this approach, however, is that deterrence by standard is less effective, and administration more expensive. At the same time, the common-law approach is readily amenable to Bayesian updating: as more information is gleaned (both through experience and through the development of economic science), it is incorporated into the analysis—first through the basic operation of stare decisis, but also through concrete doctrinal changes that can amplify particular circumstances to more general cases. In this way, the process of antitrust adjudication develops along with economic learning, reducing the risk of error as more information becomes available.[20]

B.  The Basics of Error-Cost Analysis

The application of decision theory to judicial decision-making seems to have originated with Ehrlich and Posner’s 1974 article, An Economic Analysis of Legal Rulemaking:

The model is based on a social loss function having, as its principal components, the social loss from activities that society wants to prevent, the social loss from the (undesired) deterrence of socially desirable activities, and the costs of producing and enforcing statutory and judge-made rules, including litigation costs. Efficiency is maximized by minimizing the social loss function with respect to two choice variables, the number of statutory rules and the number of judge-made rules.[21]
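Their framework can be stated compactly (a sketch; the notation is illustrative and not Ehrlich and Posner's own). Writing r for the number of statutory rules and j for the number of judge-made rules, the social loss to be minimized is

L(r, j) = H(r, j) + D(r, j) + A(r, j),

where H is the loss from the undesirable activity that escapes deterrence, D is the loss from the desirable activity that is mistakenly deterred, and A is the cost of producing and enforcing the rules, including litigation costs. Efficiency is maximized at the combination of statutory and judge-made rules that minimizes L.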

There the particular focus was on the specificity of legal proscriptions and the choice between standards and rules: “a theory of the legal process according to which the desire to minimize costs is a dominant consideration in the choice between precision and generality in the formulation of legal rules and standards.”[22]

Professors Joskow & Klevorick introduced decision-theoretic analysis to antitrust in their development of a framework for assessing predatory pricing.[23] As they note, uncertainty is inherent in the assessment of predatory pricing (although the same observation applies to a great deal of antitrust analysis, much of which is similarly forward-looking, and all of which is tasked with inferring anticompetitive effect from limited information):

Such an enterprise, no matter how carefully it is done, is inherently uncertain and involves the possibility of error both because the actual effects of any kind of observable short-run behavior on long-run outcomes are themselves uncertain and because our methods of predicting those effects are imperfect.[24]

The decision-theoretic framework employed by Joskow & Klevorick to assess the propriety of a general rule applicable to predatory pricing cases

directs that we choose the policy that would minimize the sum of the expected costs of error and the costs of implementation that would result if the policy were applied to the market we are considering. . . . [O]ur decision-theoretic evaluative mechanism reveals that no single rule will be best for all market situations; if a predatory pricing rule is formulated with one particular market in mind, we cannot be sure that it should be applied to other market situations.[25]

It was Judge Frank Easterbrook who generalized the approach for antitrust and offered its clearest exposition:[26]

The legal system should be designed to minimize the total costs of (1) anticompetitive practices that escape condemnation; (2) competitive practices that are condemned or deterred; and (3) the system itself.[27]

The role of presumptions and other doctrinal elements of the process of antitrust review—“filters” in Easterbrook’s terminology—is central to the effectuation of the error-cost framework:

The third is easiest to understand. Some practices, although anticompetitive, are not worth deterring. We do not hold three-week trials about parking tickets. And when we do seek to deter, we want to do so at the least cost. A shift to the use of presumptions addresses (3) directly, and a change in the content of the legal rules influences all three points. . . .

. . . The task, then, is to create simple rules that will filter the category of probably-beneficial practices out of the legal system, leaving to assessment under the Rule of Reason only those with significant risks of competitive injury.[28]

Error-cost analysis applies a Bayesian decision-theoretic framework designed to address problems of decision-making under uncertainty. In antitrust, decision-makers are tasked with maximizing consumer welfare.[29] The problem, of course, is that it is never clear in any given case—particularly those that make their way to actual litigation—what decision will accomplish this objective.[30]

Given this uncertainty, we can recharacterize the effort to maximize consumer welfare in antitrust decision-making as an effort to minimize the loss of consumer welfare from (inevitably) erroneous decisions.[31] The likelihood of error decreases with additional information, but there is a cost to obtaining new information. So, the error-cost framework seeks to minimize error for a given amount of information as well as to determine what amount (and type) of information is optimal.

In evaluating investment in information, the benefit of additional information is that it may reduce the likelihood of making a costly erroneous decision. In this sense, the decision to consider additional information can be seen as a tradeoff between two types of costs—error costs on the one hand and information costs on the other. A rational decision maker will try to minimize the sum of the two types of costs. This is the second key insight of the decision theoretic approach.[32]
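Stated compactly (a sketch, with notation assumed for illustration): if I denotes the amount of information gathered before deciding, the rational decision-maker described above chooses I to minimize

E[\text{error cost} \mid I] + c(I),

where the expected cost of error falls as more information is acquired, while c(I), the cost of acquiring and evaluating that information, rises.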

Crucial to the application of the error-cost framework in the judicial or regulatory context is that the costs (benefits) of an erroneous (correct) decision are not limited to the immediate consequences of the conduct at hand. Because judicial determinations establish precedent, and because regulatory rules are applied broadly, antitrust decision-makers must also consider the risk and cost of over- and under-deterrence resulting from erroneous decisions.[33] These dynamic, long-term consequences of antitrust decision-making are likely the most significant source of cost from erroneous decisions.

Applying this approach, the decision-maker (e.g., regulator, court, or legislator) holds a relatively uninformed prior belief about the likelihood that a particular business practice is anticompetitive. These prior beliefs are updated either with new knowledge as the theoretical and empirical understanding of the practice evolves over time, or with new evidence specific to the case at hand. Knowledge regarding the likely competitive effects of business conduct is never perfect, but each additional piece of information may improve the likelihood of accurately predicting whether the conduct is harmful or not (although obtaining it may be costly, and it may also increase the cost of accurate decision-making). The optimal decision rule then minimizes a loss function measuring the costs of Type I and Type II errors, given the updated likelihood that the practice is anticompetitive.
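A minimal numerical sketch of this updating-and-decision process may help fix ideas (the prior, likelihoods, and error costs below are hypothetical placeholders, not estimates drawn from any actual case or from the sources cited here):

# Illustrative sketch of a Bayesian error-cost decision rule.
# All numbers below are hypothetical placeholders, not estimates from any case.

def posterior_anticompetitive(prior, p_evidence_if_anti, p_evidence_if_pro):
    """Bayes' rule: updated probability that the practice is anticompetitive,
    given that the observed evidence was produced."""
    numerator = prior * p_evidence_if_anti
    return numerator / (numerator + (1 - prior) * p_evidence_if_pro)

def expected_losses(p_anti, cost_type_one, cost_type_two):
    """Expected loss of each decision: condemning procompetitive conduct is a
    Type I error (cost_type_one); permitting anticompetitive conduct is a
    Type II error (cost_type_two)."""
    loss_if_condemn = (1 - p_anti) * cost_type_one
    loss_if_permit = p_anti * cost_type_two
    return loss_if_condemn, loss_if_permit

# A relatively uninformed prior, evidence moderately more consistent with
# anticompetitive conduct, and Type I errors assumed (per Easterbrook) to be
# costlier than Type II errors.
p_anti = posterior_anticompetitive(prior=0.3,
                                   p_evidence_if_anti=0.7,
                                   p_evidence_if_pro=0.4)
condemn_loss, permit_loss = expected_losses(p_anti, cost_type_one=100, cost_type_two=60)
decision = "condemn" if condemn_loss < permit_loss else "permit"
print(f"posterior = {p_anti:.2f}; condemn = {condemn_loss:.1f}; "
      f"permit = {permit_loss:.1f}; decision: {decision}")

With these assumed numbers, the posterior of roughly 0.43 and the asymmetric error costs yield a decision to permit; the asymmetry pushes the effective threshold for condemnation above one-half, a point taken up in the discussion of standards of proof below.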

The key policy tradeoff is between Type I (“false positive”) and Type II (“false negative”) errors. Table 1 presents a two-by-two matrix laying out the types of errors that occur in antitrust litigation.[34]

Easterbrook’s operationalization of the framework entails three key, underlying assumptions:[35]

  1. Both Type I and Type II errors are inevitable in antitrust cases because of the difficulty in distinguishing efficient, procompetitive business conduct from anticompetitive behavior;
  2. The social costs associated with Type I errors are generally greater than the social costs of Type II errors because market forces offer at least some corrective with respect to Type II errors and none with regard to Type I errors: “the economic system corrects monopoly more readily than it corrects judicial [Type II] errors;”[36] and
  3. Optimal antitrust rules will minimize the expected sum of error costs subject to the constraint that the rules be relatively simple and reasonably administrable.[37]

The inevitability of errors in antitrust cases is a function of two related, but distinct, knowledge problems. The first is rooted in the limits of the underlying economic science, which provides the guidance by which decision-makers attempt to identify anticompetitive conduct and specify the rules relating to that conduct. Because economic science is constantly evolving (to say nothing of being inherently imperfect) and imperfectly translated into judicial decision-making, rules will always be imperfectly specified.[38] Economists, who supply the crucial input of economic science, tend not to advance their analyses in realistic institutional settings (in part a function of the need for simplification in economic models to ensure their tractability) and thus regularly “avoid incorporating the social costs of erroneous enforcement decisions into their analyses and recommendations for legal rules.”[39] They also have divergent incentives and ulterior motives that may make them less likely to do a good job.[40] Meanwhile, lawyers, judges, and enforcers, for their part, are often limited in their ability to apply the relevant economic science to complicated and imperfect facts, and to adduce the optimal legal rules.[41] The net result is that it is a fundamentally difficult task to identify illegal, anticompetitive conduct and distinguish it from legal, procompetitive conduct in any specific case:

The key point is that the task of distinguishing anticompetitive behavior from procompetitive behavior is a herculean one imposed on enforcers and judges, and that even when economists get it right before the practice is litigated, some error is inevitable. The power of the error-cost framework is that it allows regulators, judges, and policymakers to harness the power of economics, and the state-of-the-art theory and evidence, into the formulation of simple and sensible filters and safe-harbors rather than to convert themselves into amateur econometricians, game-theorists, or behaviorists.[42]

The second knowledge problem leading to the inevitability of errors stems from the lack of precision in legal rules generally. As Easterbrook notes, “one cannot have the savings of decision by rule without accepting the costs of mistakes.”[43] Because the application of economic science to any given situation is imperfect, comprehensive proscriptions cannot often be specified in advance. At the same time (and for much the same reason), the case-by-case, ex post determination of antitrust liability through the rule of reason process is costly and difficult to administer accurately. The result is that there are relatively few simple rules (e.g., safe harbors and per se rules) in place: where such rules are absent, adjudication is costly and imperfect; where they exist, errors are inevitable.

C.  The Implementation of the Error-Cost Framework in Antitrust Adjudication

The knowledge problem confronting antitrust decision-makers is somewhat ameliorated by intermediate, simplifying procedures that impose categorization and filters at various stages of the process to improve the efficiency of decision-making given the cost of litigation (including, e.g., time costs, burdens on judicial resources, and discovery costs). Standing rules, for example, are classic error-cost minimization rules. The availability of standing turns on certain indicia that correlate with the expected likelihood that a plaintiff in a given position will have a justiciable case. Where that likelihood is identifiably low, it is more efficient to curtail adjudication before it even begins by denying standing, even though occasionally this will erroneously prevent the adjudication of meritorious cases.

At the overarching, substantive level, the choice between, on the one hand, engaging in a full-blown rule of reason analysis and, on the other, truncating review under the per se standard is a manifestation of the error cost framework.[44] In simple terms, truncated review costs less. When it is apparent to a court that challenged conduct is almost certainly anticompetitive, the risk of erroneously condemning that conduct under a truncated analysis is low, and the administrative cost savings comparatively high.

The dividing line between per se and rule of reason turns on information and probabilities: the extent to which the court has knowledge that the type of case presented is always or almost always harmful. Thus the Court has noted that the per se rule should be applied (1) “only after courts have had considerable experience with the type of restraint at issue” and (2) “only if courts can predict with confidence that [the restraint] would be invalidated in all or almost all instances under the rule of reason” because it “‘lack[s] . . . any redeeming virtue.’”[45]

Of course, precisely because certain knowledge about the competitive effects of most conduct is not available, condemnation under the per se standard is rarely appropriate. As a result, the error cost framework leads naturally to a preference for rule of reason analysis for most types of conduct.[46]

Although less often discussed,[47] and of no less importance, the error-cost framework also informs procedural rules as much as substantive liability rules. Many procedural rules serve as filters to eliminate the costly consideration of conduct that is unlikely to lead to consumer harm (costly both in terms of direct, administrative costs, as well as the risk of erroneous condemnation). Thus, antitrust procedure has a number of hurdles a plaintiff must overcome before a case is “proven.” Failure to overcome any of these hurdles could lead to a dismissal of the case, as early as a motion to dismiss before discovery.[48] Courts also dismiss cases at the summary judgment stage when there is no economic basis for the claims.[49] Similarly, antitrust assigns burdens of proof and adopts certain evidentiary presumptions within a burden-shifting framework, aimed at putting a thumb on the scale where economic knowledge warrants it.[50] The combination of procedural rules and burdens of proof helps to assure—in an environment of substantial uncertainty—that conduct harmful to consumers, and only such conduct, is condemned under a rule of reason analysis.

Antitrust injury and standing are among the first procedural hurdles a plaintiff faces.[51] Much like the per se standard, the doctrines of antitrust injury and standing serve as a filter meant to minimize the cost of adjudicating likely meritless claims. Importantly, in order to perform this function effectively, they must also reflect the underlying substantive knowledge of the conduct in question.[52]

Plaintiffs must also define the relevant market in which to assess the challenged conduct, including both product and geographic markets.[53] Particularly where novel conduct or novel markets are involved and thus the relevant economic relationships are poorly understood, market definition is crucial to determine “what the nature of [the relevant] products is, how they are priced and on what terms they are sold, what levers [a firm] can use to increase its profits, and what competitive constraints affect its ability to do so.”[54] In this way market definition not only helps to economize on administrative costs (by cabining the scope of inquiry), it also helps to improve the understanding of the conduct in question and its consequences.

Evidentiary burdens and standards of proof are particularly important implementations of the error cost framework. As noted, presumptions and burdens place an evidentiary “thumb on the scale” of antitrust adjudication, ideally in a manner reflecting underlying economic knowledge and its application to the specific facts at hand.[55] A plaintiff need not prove anticompetitive harm with certainty, or “beyond a shadow of doubt”: such a standard would, in most circumstances, not reflect the inherent uncertainty of conduct challenged under the antitrust laws. Under a “preponderance of the evidence” standard, by contrast, a plaintiff need adduce evidence sufficient only to demonstrate that challenged conduct is “more likely than not” to have anticompetitive effect. Plaintiffs in most civil litigation in the US, including antitrust litigation, are held to this standard.[56] Rebuttable presumptions are sometimes employed as a cost-saving substitute for direct evidence when economic theory predicts a relatively high probability of competitive harm.[57]

The choice of evidentiary standard—that is, the amount and kind of information supportive of the plaintiff’s claims she must produce, and the degree of certainty that evidence must engender in the court for it to decide in her favor—is crucial to the error-cost analysis which is, after all, a decision-theoretic device.

[A]ntitrust policy [is] a problem of drawing inferences from evidence and making enforcement decisions based on these inferences. . . . Using Bayes’ rule, we can write the policy maker’s belief about the relative odds that a given practice is anticompetitive as a function of his prior beliefs about the practice, and the relative likelihood that the evidence observed would be produced by anticompetitive conduct.[58]

But in an error-cost framework, it is by no means certain that a preponderance of the evidence—“more likely than not”—standard will generally minimize error costs. “[T]he decision theoretic approach. . . would not apply [the preponderance of the evidence] standard across the board. Instead, it would base decisions on expected error cost, not just the likelihood of prevailing.”[59] A preponderance of the evidence standard “would treat prospective errors in the direction of excessive enforcement as equally costly as prospective errors in the direction of lenient enforcement.”[60] Thus such a standard will optimize error costs only when the costs of Type I and Type II errors are the same.[61] But this will not always be the case.
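The point can be made precise with a simple calculation (a sketch; the notation is assumed for illustration and does not appear in the cited sources). Let p denote the post-evidence probability that the challenged conduct is anticompetitive, C_1 the social cost of erroneously condemning procompetitive conduct (a Type I error), and C_2 the social cost of erroneously permitting anticompetitive conduct (a Type II error). Condemnation minimizes expected error cost only when

p \cdot C_{2} > (1 - p) \cdot C_{1}, \quad \text{i.e., } p > \frac{C_{1}}{C_{1} + C_{2}}.

When C_1 = C_2, the threshold collapses to one-half, which is just the “more likely than not” standard; when false condemnations are costlier (C_1 > C_2), the error-cost-minimizing threshold rises above a bare preponderance. This is the intuition behind the Mungan and Wright result discussed next.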

While some have advocated reducing evidentiary burdens through presumptions of harm in certain situations,[62] Professors Mungan and Wright have persuasively argued that the preponderance of the evidence standard tends towards too many Type I errors, and should, in fact, be strengthened:[63]

The intuition behind this result is that while the optimal standard in other contexts is that which maximizes the deterrence of a single, bad conduct, the optimal standard of proof in antitrust must be set to both deter bad conduct and incentivize innovative and procompetitive conduct.[64]

In other words, in addition to uncertainty about what act is committed, there is uncertainty about the social desirability of each act which may have been committed. . . . [T]hese peculiar concerns in the field of antitrust law push the optimal standard of proof towards being stronger than in other contexts when Easterbrook’s priors hold, i.e. the beneficial impact of procompetitive behavior exceeds the impact of anticompetitive behavior. This finding suggests that courts which take Easterbrook’s priors as given can achieve the goals of antitrust not only by crafting substantive legal rules to impact behavior, but also by using standards of proof which are stronger than preponderance of the evidence.[65]

The Supreme Court has, in fact, adopted heightened evidentiary standards in some antitrust contexts. For instance, in Matsushita, after enunciating the summary judgment standard,[66] the Court went on to apply the error cost framework,[67] and came to the conclusion that coordinated predatory pricing was extremely unlikely under the facts presented.[68] In such situations, the Court required greater evidence to survive a motion for summary judgment.[69]

D.  The Normative Error-Cost Framework: Why False Positives are More Concerning Than False Negatives

Crucial to Easterbrook’s conception of the error-cost framework are two normative premises. First, both Type I and Type II errors are inevitable in antitrust because distinguishing conduct with procompetitive effect from that with anticompetitive effect is an inherently uncertain and difficult task. Second, Type I errors are more costly than Type II errors, because self-correction mechanisms mitigate the latter far more readily than the former.[70] As a result, writes Easterbrook, “errors on the side of excusing questionable practices are preferable.”[71]

This version of the error-cost framework is not supported by all antitrust scholars, however, and there is a concerted effort today to condemn as unsupported, too permissive, and overly ideological the bias against enforcement that Easterbrook’s error-cost approach counsels.[72] As one recent account has it:

Given the Chicago assumption that markets tend to be self-correcting, type two errors—where the court fails to see anticompetitive conduct that actually exists—are not really problematic because the market itself will correct the situation. By contrast, false identification of harmful monopoly tends not to be self-correcting because a court blocks the efficient conduct for a long time. . . .

. . . If we reverse the premise and assume that markets tend more naturally to situations of market power, then the opposite presumption is warranted. Economic theory and evidence developed over the last forty years strongly support the reversed premise.[73]

There are several problems with this assessment, however.

1.     The Weakness of the Evidence on Market Power and its Alleged Harms

First, it is surely correct that evidence to support Easterbrook’s presumption is not easy to come by—if it were, there would be no need for the decision-theoretic approach in the first place. But the absence of evidence to support the claim is insufficient to condemn it: evidence to the contrary is just as unavailable. Indeed, as I discuss below, the unavailability of that knowledge is precisely one of the factors that supports the presumption.

According to Hovenkamp and Scott Morton, the “evidence developed over the last forty years [that] strongly support[s] the reversed premise” consists of the following:

The United States well overshot the mark in reducing antitrust enforcement after the late 1970s. Markups have risen steadily since the 1980s. The profit share of the economy has risen from 2% to 14% over the last three decades. The economic literature has come down solidly against the key early assumption of the Chicago thinkers that markets will self-correct. To the contrary, the evidence demonstrates that eliminating antitrust enforcement likely results in monopoly prices and monopoly levels of innovation in many markets.[74]

Beyond the studies cited by Hovenkamp and Scott Morton, there is a widely reported literature that has documented increasing national product market concentration.[75] That same literature has also promoted the arguments that increased concentration has had harmful effects, including increased markups and increased market power,[76] declining labor share,[77] and declining entry and dynamism.[78]

But there are good reasons to be skeptical of the national concentration and market power data. A number of papers simply do not find that the accepted story regarding the vast size of markups and market power (built in significant part around the famous De Loecker and Eeckhout study[79]) is accurate. Among other things, the claimed markups due to increased concentration are likely not nearly as substantial as commonly assumed.[80] Another study finds that profits have increased, but are still within their historical range.[81] And still another shows decreased wages in concentrated markets, but also that local concentration has been decreasing over the relevant time period.[82]

But even more important, the narrative that purports to find a causal relationship between these data and the various depredations mentioned above is almost certainly incorrect.

To begin with, the claim that “too much” concentration is harmful assumes both that the structure of a market is what determines economic outcomes, and that anyone knows what the “right” amount of concentration is. But, as economists have understood since at least the 1970s (and despite an extremely vigorous, but ultimately futile, effort to show otherwise), market structure is not outcome determinative.[83]

Once perfect knowledge of technology and price is abandoned, [competitive intensity] may increase, decrease, or remain unchanged as the number of firms in the market is increased. . . . [I]t is presumptuous to conclude . . . that markets populated by fewer firms perform less well or offer competition that is less intense.[84]

This view is not an aberration, and it is held by scholars across the political spectrum. Indeed, Professor Scott Morton herself is coauthor of a recent paper surveying the industrial organization literature and finding that presumptions based on measures of concentration are unlikely to provide sound guidance for public policy:

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. . . .

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates.[85]

Furthermore, the national concentration statistics that are used to support these claims are generally derived from available data based on industry classifications and market definitions that have limited relevance to antitrust. As Froeb and Werden note:

[T]he data are apt to mask any actual changes in the concentration of markets, which can remain the same or decline despite increasing concentration for broad aggregations of economic activity. Reliable data on trends in market concentration are available for only a few sectors of the economy, and for several, market concentration has not increased despite substantial merger activity.[86]

Most importantly, however, this assumed relationship between concentration and economic outcomes is refuted by a host of recent empirical research studies.

That increased concentration correlates with neither anticompetitive causes nor deleterious economic effects is demonstrated by a recent, influential empirical paper by Sharat Ganapati. Ganapati finds that the increase in industry concentration in non-manufacturing sectors in the US between 1972 and 2012 is “related to an offsetting and positive force—these oligopolies are likely due to technical innovation or scale economies. [The] data suggests that national oligopolies are strongly correlated with innovations in productivity.”[87] On this account, increased concentration results from a beneficial growth in firm size in productive industries that “expand[s] real output and hold[s] down prices, raising consumer welfare, while maintaining or reducing [these firms’] workforces.”[88]

A number of other recent papers that examine the concentration data in detail and attempt to identify the likely cause of the observed trends demonstrate clearly that measures of increased national concentration cannot justify a presumption that increased market power has caused economic harm. In fact, as these papers show, the reason for increased concentration in the US in recent years appears to be technological, not anticompetitive, and its effects seem to be beneficial.

In one recent paper,[89] the authors look at both the national and local concentration trends between 1990 and 2014 and find that (1) overall and for all major sectors, concentration is increasing nationally but decreasing locally; (2) industries with diverging national/local trends are pervasive and account for a large share of employment and sales; (3) among diverging industries, the top firms have increased concentration nationally, but decreased it locally; and (4) among diverging industries, the opening of a plant by a top firm is associated with a long-lasting decrease in local concentration.[90] The result, as the authors note, is that

the increase in market concentration observed at the national level over the last 25 years is being shaped by enterprises expanding into new local markets. This expansion into local markets is accompanied by a fall in local concentration as firms open establishments in new locations. These observations are suggestive of more, rather than less, competitive markets.[91]

A related paper shows that new technology has enabled large firms to scale production over a larger number of establishments across a wider geographic space.[92] As a result, these large, national firms have grown by increasing the number of local markets they serve, and in which they are actually relatively smaller players.[93] The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries.[94]

Economists have been studying the relationship between concentration and various potential indicia of anticompetitive effects—price, markup, profits, rate of return, etc.—for decades. There are, in fact, hundreds of empirical studies addressing this topic. Contrary to the claims of Hovenkamp and Scott Morton, however, taken as a whole this literature is singularly unhelpful in resolving our fundamental ignorance about the functional relationship between structure and performance: “Inter-industry research has taught us much about how markets look. . . even if it has not shown us exactly how markets work.”[95]

Nor do other suggested measures of supracompetitive returns—such as accounting measures of returns on invested capital—seem likely to offer any resolution. As one paper that advocates for the importance of such measures nevertheless makes clear, “[t]he welfare consequences of increasing sunk and fixed costs in an industry are complex, are probably industry specific, and may vary across antitrust and regulatory regimes. . . . It is difficult to see how cross-industry studies can capture the industry-level complexity that results from high fixed and sunk costs.”[96] Though some studies have plausibly shown that an increase in concentration in a particular case led to higher prices (although this is true in only a minority share of the relevant literature), assuming the same result from an increase in concentration in other industries or other contexts is simply not justified: “The most plausible competitive or efficiency theory of any particular industry’s structure and business practices is as likely to be idiosyncratic to that industry as the most plausible strategic theory with market power.”[97]

2.     The Weakness of the Evidence of Under-Enforcement (Type II Errors)

But even assuming the trends showing increased concentration and/or markups are properly identified, it does not appear that the evidence connecting them to lax antitrust enforcement is very strong. Indeed, even proponents of this view express reservations about the state of the evidence.[98]

In their 2004 review of the state of antitrust law, Robert Crandall and Clifford Winston found “little empirical evidence that past interventions have provided much direct benefit to consumers or significantly deterred anticompetitive behavior.”[99] Theirs is not a condemnation of the overall level of enforcement, but a studied conclusion that the enforcement actions that were undertaken did not obviously further the goals of the antitrust laws.

As the FTC’s Michael Vita and David Osinski demonstrate in a thorough review of the critical literature, the claim of lax enforcement is fairly unconvincing on its own terms.[100] Although their study considered only merger enforcement, it is merger enforcement, of course, that is most relevant to claims of increasing concentration. Furthermore, the study’s results offer an important cautionary tale regarding the validity of claims of lax enforcement generally. Thus, Vita & Osinski’s assessment of the evidence offered for the claim that “recent merger control has not been sufficiently aggressive”[101] finds, to the contrary, that:

[O]f the seven mergers in the 2000s [offered as evidence for the claim], four exhibited no increase in post-merger (or post-remedy) prices []; one had disputed results []; one represented a successful challenge to a consummated merger []; leaving only one (Whirlpool/ Maytag) indicative of potentially lax enforcement.[102]

Similarly, another recent study looking at FTC and DOJ merger enforcement data between 1979 and 2017 finds that:

[C]ontrary to the popular narrative, regulators have become more likely to challenge proposed mergers. . . . Indeed, controlling for the number of merger proposals submitted under HSR, the likelihood of a merger challenge has more than doubled over this period.[103]

The number of Sherman Act cases brought by the federal antitrust agencies, meanwhile, has been relatively stable in recent years, but several blockbuster cases have recently been brought by the agencies[104] and private litigants,[105] and there has been no shortage of federal and state investigations. But all of this is beside the point: for reasons discussed below, it is highly misleading to count the number of antitrust cases and, using that number alone, draw conclusions about how effective antitrust law is.

The primary evidence adduced to support the claim that under-enforcement (and thus the risk of Type II errors) is more significant than over-enforcement (and thus the risk of Type I errors) is that there are not enough cases brought and won. But, even if superficially true, this is, on its own, just as consistent with a belief that the regime is functioning well as it is with a belief that it is functioning poorly. The antitrust laws have evolved over the course of a century, and in that time have developed a coherent body of doctrine to guide firms, courts, and enforcers.[106] It is entirely predictable that firms would, for the most part, be accurately guided in their affairs by the law and would largely avoid offending well-established competition principles:

For a given level of enforcement effort, the number of enforcement actions (and litigation generally) will be related to the extent of uncertainties and ambiguities about legal outcomes perceived by defendants. . . . If the number [of enforcement actions] is low, the reason could be lax enforcement or it could be clear legal standards and a reputation for vigorous enforcement. . . . Accordingly, in the absence of more information, counts of legal actions by themselves ought not to carry much weight.[107]

Further, in such a mature regime, one would expect relatively fewer marginal cases that present truly novel problems. Thus, the casual empiricism noting that 97 percent of Section 2 cases between February 1999 and May 2009 were dismissed based on the plaintiff’s failure to show anticompetitive effect[108] is neither surprising nor very telling. The vast majority of these cases—of which the study identifies 215 in all[109]—were brought by private plaintiffs pursuing treble damages. Such an outcome is as consistent with an antitrust litigation regime that decisively deters harmful conduct while overly encouraging plaintiffs to attempt to extract payouts as it is with one that under-deters anticompetitive conduct.[110] A lack of cases and plaintiffs’ victories cannot, on its own, justify an assertion that the antitrust regime is “lax.”

Moreover, assessing the economic consequences of our antitrust laws by considering the effects of only those enforcement actions actually undertaken is woefully misleading. As Douglas Melamed puts it:

Antitrust law [] has a widespread effect on business conduct throughout the economy. Its principal value is found, not in the big litigated cases, but in the multitude of anticompetitive actions that do not occur because they are deterred by the antitrust laws, and in the multitude of efficiency-enhancing actions that are not deterred by an overbroad or ambiguous antitrust law.[111]

For much the same reason, the purported evidence of under-enforcement inferred from the price effects of mergers found in merger retrospective studies[112] is unconvincing. Merger retrospectives are not a random sample of mergers from which the overall effect on conduct—including, crucially, conduct by parties deterred from merging as a result of enforcement actions against others—can be determined. Such evaluations are capable only of demonstrating the effects of potential Type II errors, and neither collect nor evaluate any evidence bearing on the incidence and cost of Type I errors.

3.     The Strength of the Argument for Greater Concern with Type I Errors

As noted, some critics contend that the normative error-cost framework’s heightened concern for Type I errors stems from a faulty belief that “type two errors—where the court fails to see anticompetitive conduct that actually exists—are not really problematic because the market itself will correct the situation.”[113] But Easterbrook’s argument for enforcement restraint is not based on the assertion that markets are perfectly self-correcting. Rather, his claim is rooted in the notion that the incentives of new entrants to compete for supracompetitive profits in monopolized markets limit the social costs of Type II errors more effectively than the legal system is able to correct or ameliorate the costs of Type I errors:

If the court errs by condemning a beneficial practice, the benefits may be lost for good. Any other firm that uses the condemned practice faces sanctions in the name of stare decisis, no matter the benefits. If the court errs by permitting a deleterious practice, though, the welfare loss decreases over time. Monopoly is self-destructive. Monopoly prices eventually attract entry. True, this long run may be a long time coming, with loss to society in the interim. The central purpose of antitrust is to speed up the arrival of the long run. But this should not obscure the point: judicial errors that tolerate baleful practices are self-correcting while erroneous condemnations are not.[114]

It is worth quoting him at length on this issue, as it has become central to the debate over the propriety of the error-cost framework:

One cannot have the savings of decision by rule without accepting the costs of mistakes. We accept these mistakes because almost all of the practices covered by per se rules are anticompetitive, and an approach favoring case-by-case adjudication (to prevent condemnation of beneficial practices subsumed by the categories) would permit too many deleterious practices to escape condemnation. The same arguments lead to the conclusion that the Rule of Reason should be replaced by more substantial guides for decision.

In which direction should these rules err? For a number of reasons, errors on the side of excusing questionable practices are preferable. First, because most forms of cooperation are beneficial, excusing a particular practice about which we are ill-informed is unlikely to be harmful. True, the world of economic theory is full of ‘existence theorems’—proofs that under certain conditions ordinarily-beneficial practices could have undesirable consequences. But we cannot live by existence theorems. The costs of searching for these undesirable examples are high. The costs of deterring beneficial conduct (a byproduct of any search for the undesirable examples) are high. When most examples of a category of conduct are competitive, the rules of litigation should be ‘stacked’ so that they do not ensnare many of these practices just to make sure that the few anticompetitive ones are caught. When most examples of a practice are procompetitive or neutral, the rules should have the same structure (although the opposite slant) as those that apply when almost all examples are anticompetitive.

Second, the economic system corrects monopoly more readily than it corrects judicial errors. There is no automatic way to expunge mistaken decisions of the Supreme Court. A practice once condemned is likely to stay condemned, no matter its benefits. A monopolistic practice wrongly excused will eventually yield to competition, though, as the monopolist’s higher prices attract rivalry.

Third, in many cases the costs of monopoly wrongly permitted are small, while the costs of competition wrongly condemned are large. A beneficial practice may reduce the costs of production for every unit of output; a monopolistic practice imposes loss only to the extent it leads to a reduction of output. Under common assumptions about the elasticities of supply and demand, even a small gain in productive efficiency may offset a substantial increase in price and the associated reduction in output. Other things equal, we should prefer the error of tolerating questionable conduct, which imposes losses over a part of the range of output, to the error of condemning beneficial conduct, which imposes losses over the whole range of output.[115]
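Easterbrook’s third point can be illustrated with a stylized welfare comparison in the spirit of Williamson’s familiar trade-off analysis (the numbers here are hypothetical and offered only for illustration). Suppose a challenged practice lowers production cost by c per unit on every unit sold, while also permitting a price increase \Delta p that reduces output by \Delta q. The efficiency gain is roughly a “rectangle” over the entire remaining output q, while the deadweight loss is roughly a “triangle” over only the lost output:

\Delta W \approx c \cdot q - \tfrac{1}{2} \, \Delta p \, \Delta q.

If, say, the cost saving equals 1 percent of the product’s price on every unit, while the price rises by 5 percent and output falls by 5 percent, the gain term is about 0.01pq and the loss term about 0.00125pq, so the small efficiency gain swamps the deadweight loss. This is the sense in which losses from wrongly tolerated conduct accrue over only part of the range of output, while losses from wrongly condemned conduct accrue over the whole of it.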

While the Hovenkamp and Scott Morton criticism of the Easterbrook presumption rests on questioning just one of the underlying reasons Easterbrook gives for adopting it, Baker has undertaken a more thorough attempt at refutation.

Baker first claims that

[t]he unstated premise is that entry will generally prove capable of policing market power in the oligopoly settings of greatest concern in antitrust—or at least prove capable of policing market power with a sufficient frequency, to a sufficient extent, and with sufficient speed to make false positives systematically less costly than false negatives.

Yet there is little reason to believe that entry addresses the problem of market power so frequently, effectively, and quickly as to warrant dismissal of concerns regarding false negatives.[116]

These statements are largely unobjectionable. It has long been understood that the relevant comparison is between the costs of a monopoly erroneously allowed to persist for the time it takes to be mitigated by the market and the costs of erroneously deterring procompetitive behavior for as long as such a legal rule stands. “Markets do not purge themselves of all unfortunate conduct, and purgation (when it comes) is not quick or painless. . . . The point is not that business losses perfectly penalize business mistakes, but that they do so better than the next best alternative.”[117]

No scholars, including Easterbrook, actually “dismiss[] . . . concerns regarding false negatives”; rather, Easterbrook incorporates these concerns in his assessment by noting the relative time frames of market correction versus judicial correction and the relatively narrow consequences of allowing anticompetitive conduct versus the broad effects of deterring procompetitive conduct. These descriptive elements cannot be separated, and the assumption has never rested on a claim that Type II errors never happen, or that they are always virtually costless. Rather, as Easterbrook writes, “the economic system corrects monopoly more readily than it corrects judicial errors.”[118] He does not say that the economic system always and swiftly corrects monopoly.

The contrary assumption (in the pervasive absence of empirical evidence to support it[119]) is difficult to maintain. Even if only imperfectly or after a lengthy amount of time, it is a virtual certainty that anticompetitive conduct will be rectified or eventually rendered insignificant or irrelevant. But correction of legal error is far from certain and similarly (at best) distant in time. And there is little reason to be sanguine about the speed with which legal antitrust errors are rectified. It took nearly a century for the Leegin Court to correct the error of its per se rule against vertical resale price maintenance in Dr. Miles, for example[120]—even though the economics underlying Dr. Miles was called into question shortly after it was decided and firmly discredited by the economics profession 50 years later.[121] Yet it took another almost 50 years before the Court finally overturned its per se rule against RPM.[122]

Ironically, in fact, the extent to which an improperly stringent rule may subsequently be overturned is a function of its clarity. Within a plausible range,[123] the more certain and therefore more effective (and, therefore, more stringent) the rule, the less likely firms are, whether intentionally or accidentally, to run afoul of it. A rule that clearly prohibits all mergers over a certain size, for example, would likely be extremely effective, and few if any such mergers would be attempted. But this also means that there would be few opportunities to revisit the rule and potentially overturn it. Thus, an improperly harsh rule is more likely subsequently to be overturned the closer it is to the optimal rule—the less wrong it is, in other words. But for the same reason, overturning it would also be exactly that much less socially beneficial. Over the plausible range of overly-strict erroneous rules, the worst are less likely to be overturned, and the (relatively) best are the most likely to be reversed.

Moreover, anticompetitive conduct that is erroneously excused may be subsequently corrected, either by another enforcer, a private litigant, or another jurisdiction. An anticompetitive merger that is not stopped, for example, may be later unwound, or the eventual anticompetitive conduct that is enabled by the merger may be enjoined. Ongoing anticompetitive behavior (and, unfortunately, a fair amount of procompetitive behavior) will tend to arouse someone’s ire: competitors, potential competitors, customers, input suppliers. That means such behavior will be noticed and potentially brought to the attention of enforcers. For the same reason—identifiable harm (whether actually anticompetitive or not)—it may also be actionable. By contrast, procompetitive conduct that does not occur because it is prohibited or deterred by legal action has no constituency and no visible evidence on which to base a case for revision.

And, even if it did, there is no ready mechanism for revision anyway. A firm improperly deterred from procompetitive conduct has no standing to sue the government for erroneous antitrust enforcement, or the courts for adopting an improper standard. The existence of a judicial correction presupposes, at the very least, some firm engaging in conduct despite its illegality in the hope that its conduct will go unnoticed or the prior rule may be misapplied or overturned if it is sued. But the primary effect of a Type I error is the nonexistence of such conduct in the first place.

A related critique suggests that “Chicago School antitrust” (often used as shorthand for adherence to the error-cost framework) is insensitive to an incumbent monopolist’s ability to deter entry, and thus to forestall market correction. This critique asserts that the Chicago School approach rests on an indefensible “perfect competition” assumption:

Built into Chicago School doctrine was a strong presumption that markets work themselves pure without any assistance from government. By contrast, imperfect competition models gave more equal weight to competitive and noncompetitive explanations for economic behavior. . . .

. . . Because a firm has a financial incentive to use the profit from market power in order to maintain it, economic theory predicts that this would occur often. The Chicagoans thus needed an additional critical assumption: markets are inherently self-correcting and if left alone, they will work themselves pure.[124]

In other words, the reality that an incumbent monopolist may have the incentive and ability to act strategically to impede entry that could dilute its market power is claimed to be at odds with the Chicago School approach.[125]

Based on this, Hovenkamp and Scott Morton, for example, draw the tendentious conclusion that Chicago/error-cost antitrust scholars are disingenuous ideologues, actively suppressing economic science that contradicts their ideology:

When economic policy takes the model of perfect competition as its starting point, it has nowhere to go but downhill. If we did have a perfectly competitive economy, then of course antitrust intervention would be unnecessary. Faced with the choice of moving to models that provided greater verisimilitude and predictability, but that required more intervention, or clinging to the past, the Chicago School chose the latter.[126]

But this is, at best, a willfully misleading caricature of the Chicago School. Indeed, it is arguably more accurate to say that the pervasive misallocation of property rights and the presence of transaction costs in the market are not only appreciated by the Chicago School, but form a core part of its adherence to Easterbrook's claim that Type I errors are more problematic than Type II errors.[127]

To begin, the assumption of perfect competition is not, in fact, a part of the Chicago School enterprise. Indeed, it was Chicago School scholars[128] who introduced the analyses that undermined the assumptions of perfect competition that prevailed during the inhospitality era. Thus, for example, scholars like Ronald Coase and Oliver Williamson introduced the fundamental notion that unfettered market allocation was frequently inefficient and that private ordering—ranging from nonstandard contracts to firms themselves—was primarily aimed at ameliorating the inefficiencies of atomistic markets.[129] Scholars like Lester Telser, Ward Bowman, and Howard Marvel explained why assumptions of perfect information were inappropriate.[130] Chicago scholars like Ben Klein and Armen Alchian developed the notion that the risk of appropriation of assets over time could undermine efficient investment, in contrast to the perfect competition model, which assumed away such time inconsistency.[131] Meanwhile, Chicago scholars, who first introduced the "single monopoly profit" theory explaining why much conduct, like tying, should not be per se illegal, also anticipated and understood the limitations of the theory.[132] Similarly, Chicago scholars anticipated the raising rivals' cost ("RRC") literature[133] and were the first to note its theoretical possibility as an explanation for deviation from the model of perfect competition.[134] They also offered the most comprehensive empirical evidence of its existence.[135]

As Professor Meese summarizes, it was Chicago School (and “fellow traveler”) scholars who stepped in to correct inappropriate reliance on perfect competition models; they did not advocate it:

[Pre-Chicago School] scholars considering questions of market failure did so on the assumption that markets were perfectly competitive. This assumption was not a statement about the actual state of the world, but instead a component of a theoretical model designed to guide scientific research. This methodological habit prevented these scholars from recognizing that various non-standard contracts could overcome market failure. In the absence of a beneficial explanation for these agreements, scholars naturally treated these departures from perfect competition as manifestations of market power.[136]

There is a long and unfortunate history of antitrust institutions (including courts and enforcers) erroneously condemning nonstandard business practices as problematic deviations from a theoretical model of perfect competition.[137] The urge to condemn practices not fully understood arises from an implicit (or sometimes explicit) assumption that deviations from perfect model assumptions are more likely than not expressions of market power, rather than corrections of underlying market failures. As Ronald Coase described this phenomenon decades ago:

If an economist finds something . . . that he does not understand, he looks for a monopoly explanation. And as in this field we are rather ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on monopoly explanations frequent.[138]

Modern economics and antitrust further persist in this inhospitality tradition by, for example, dismissing business strategy and other "soft" literatures[139] that identify and explain reasons for market-correcting structures that much of modern economics assumes to be anticompetitive deviations.[140] The continued adherence to perfect competition assumptions by critics of the Chicago School is what induces them to assume that Type I errors are less problematic. Combined with an unsupported (and often implicit) assumption of heightened government ability, this also leads to the unsupported assumption that Type II errors are the more problematic of the two.[141] As Meese puts it:

Reliance on the perfect competition model, I submit, accounts for the failure of modern scholars to offer any account of the formation and enforcement of non-standard contracts that does not depend on the possession or exercise of market power. By focusing solely on the propensity of non-standard contracts to reduce ‘transaction costs,’ these scholars ignore the fact that such agreements also reverse market failures by internalizing externalities and thus altering the costs faced by parties to such agreements. Thus, such restraints naturally produce prices or output different from what would obtain in an unbridled market.[142]

The modern approach makes these assumptions without even recognizing it, for instance by relegating consideration of merger efficiencies to an analysis separate from the analysis of competitive effects, on the assumption that efficiencies can manifest only in the form of relative increases in output—not recognizing that the efficiency gained may consist precisely in the elimination of some competition and, conceivably, in a reduction of output. "Within this framework, efficiencies necessarily manifest themselves as lower production costs and thus increased output of the product than existed before the restraint. This merger paradigm is ill-suited for evaluation of restraints that purportedly overcome market failure."[143]

In this conception, any reduction in the number of competitors or any constraint on the freedom of market participants is a threat to competition—essentially a movement away from the perfect competition ideal. It does not readily admit that the reallocation of resources according to better knowledge and coordination is an inherent benefit, unless that reallocation manifests in the form of reduced production costs and increased output.[144] In this sense both Chicago and non-Chicago scholars rest substantially on partial equilibrium analysis and a perfect competition baseline, in contrast to evolutionary,[145] dynamic capabilities,[146] resource-advantage,[147] and similar[148] approaches that do actually eschew the baseline of perfect competition. None of these approaches has had significant influence on the development of antitrust policy and law, however. "For over thirty years, the economics profession has produced numerous models of rational predation. Despite these models and some case evidence consistent with episodes of predation, little of this Post-Chicago School learning has been incorporated into antitrust law."[149]

By stark contrast, the practical, legal status of Easterbrook’s claim is today well-enshrined in antitrust law.

[Thirty-six] years after Judge Easterbrook's seminal article, the Supreme Court has effectively written Easterbrook's principal conclusion about error costs into antitrust jurisprudence. Less ideological campaign, more convergent evolution, this process has spanned decades, over a series of opinions, and includes the votes of at least 14 different Justices. Time and again, when confronted with deep questions in antitrust law, those Justices have reached the same conclusion: False positives are more harmful than false negatives in antitrust.[150]

A number of cases establish this, including several seminal Supreme Court and appellate antitrust decisions.[151]

Nor is it likely that the courts are making an erroneous calculation in the abstract. Evidence of Type I errors is hard to come by, but, for a wide swath of conduct called into question by “Post-Chicago School” and other theories, the evidence of systematic problems is virtually nonexistent.[152] This state of affairs may make it appropriate to adjust the implementation of the error-cost framework in any specific case as the relevant evidence suggests, but it does not counsel its abandonment. “Given the state of empirical knowledge, broad policy questions necessarily rely upon imprecisely estimated factors. As a result, a wide range of policy approaches based on the same error cost methodology is possible.”[153]

Thus, for example, for the conduct most relevant to digital markets—vertical restraints—the theoretical literature suggests that firms can engage in anticompetitive vertical conduct, but the empirical evidence suggests that, even though firms do impose vertical restraints, it is exceedingly rare that they have net anticompetitive effects. Nor is the relative absence of such evidence for lack of looking: countless empirical papers have investigated the competitive effects of vertical integration and vertical contractual arrangements and found predominantly procompetitive benefits or, at worst, neutral effects.[154]

To be sure, there are empirical studies showing that vertically integrated firms follow their unilateral pricing incentives, which means that they do increase the prices charged to firms that compete downstream, resulting in increased consumer prices. But it also means that they eliminate double marginalization, resulting in lower consumer prices. Several recent papers have found both effects—and found that the effects are both small and almost exactly offsetting. As one of these papers concludes:

Overall, we find that both double-marginalization and a supplier’s incentive to raise rival’s costs have real impacts on consumer prices. However, these effects in the gasoline markets we study are small. Both the double marginalization effect and raising rival’s cost effect are roughly 1 to 2 [cents per gallon], or roughly 0.76%-1.5% of the price of gasoline. The net effect of vertical separation on retail gasoline prices was essentially zero. . . .[155]

The same is true for other forms of conduct relevant to digital markets. The primary, mainstream theoretical challenge to the normative error-cost framework (and to Chicago School antitrust more generally) is found in the RRC literature.[156] RRC offers a theoretically rigorous, alternative, anticompetitive theory for much ambiguous conduct, including conduct identified by early Chicago School scholars as having plausible procompetitive bases (and often recognized by the courts through the removal of per se illegality).

But, while the identification of a compelling theory of harm for such conduct may alter the specific contours of a decision-theoretic assessment under the Rule of Reason, it does not fundamentally alter the recognition that per se illegality is inappropriate, nor does it even establish that any specific doctrinal element of the Rule of Reason process is improperly imposed.[157] Because all of these elements are implemented in fundamentally discretionary fashion, a court need not, say, reverse the burden of production in order to implement the status quo burden-shifting framework in a way that demands relatively more of one side or the other, based on the court's understanding of the relative applicability of anticompetitive RRC theories and procompetitive Chicago School theories.

Thus it is crucial to note that, despite claims by Chicago School critics that RRC and other developments in economic theory (most notably game theory[158]) should undermine the normative error-cost approach and lead courts to different outcomes, there is not, in fact, a sound evidentiary basis on which to rest this assertion. Judged on the very criteria by which Chicago School critics maintain the superiority of Post-Chicago theories, in fact, these models distinctly fail to "provide[] greater verisimilitude and predictability."[159] Indeed, they may even reduce our ability to make reliable predictions on which to base policy: "While additional theoretical sophistication and complexity is useful, reliance on untested and in some cases untestable models can create indeterminacy, which can retard rather than advance knowledge."[160] As Kobayashi and Muris emphasize, the introduction of new possibility theorems, particularly when uncorroborated by rigorous empirical support, does not necessarily alter the implementation of the error-cost analysis:

While the Post-Chicago School literature on predatory pricing may suggest that rational predatory pricing is theoretically possible, such theories do not show that predatory pricing is a more compelling explanation than the alternative hypothesis of competition on the merits. Because of this literature’s focus on theoretical possibility theorems, little evidence exists regarding the empirical relevance of these theories. Absent specific evidence regarding the plausibility of these theories, the courts . . . properly ignore such theories.[161]

RRC is no more amenable to concrete implementation by courts: "As with almost all monopolization strategies, one cannot distinguish an anticompetitive use of RRC from competition on the merits, absent a detailed factual inquiry. . . . [T]here is very little empirical evidence based on in-depth industry studies that RRC is a significant antitrust problem."[162]

II. Error Costs in Digital Markets: The Problem of Innovation

The arguments in favor of the normative error-cost framework are even stronger in the context of the digital economy. The concern with error costs is especially high in dynamic markets in which it is difficult to discern the real competitive effects of a firm’s conduct from observation alone. And for several reasons, antitrust decision-making in the context of innovation tends much more readily toward distrust of novel behavior, thus exacerbating the risk and cost of over-enforcement.

As noted, there is an “uneven history of courts and enforcement officials in enhancing welfare through antitrust,” suggesting reason to be skeptical.[163] In the face of innovative business conduct, the concern is compounded by the problematic incentives of antitrust economists. As Manne and Wright note:

Innovation creates a special opportunity for antitrust error in two important ways. The first is that innovation by definition generally involves new business practices or products. Novel business practices or innovative products have historically not been treated kindly by antitrust authorities. From an error-cost perspective, the fundamental problem is that economists have had a longstanding tendency to ascribe anticompetitive explanations to new forms of conduct that are not well understood.[164]

The two problems are related. Novel practices generally elicit monopoly explanations from the economics profession, followed by hostility from the courts. Often a subsequent, more nuanced economic understanding of the business practice emerges, recognizing its procompetitive virtues, but this understanding may come too late to influence courts and enforcers, and it may never tip the balance sufficiently to appreciably alter established case law. Where economists' career incentives skew in favor of generating models that demonstrate inefficiencies and debunk the Chicago School status quo, this dynamic is not unexpected.

At the same time, however, defendants engaged in innovative business practices that have evolved over time through trial and error regularly have a difficult time articulating a justification that fits either an economist’s limited model or a court’s expectations. Easterbrook ably described the problem:

[E]ntrepreneurs often flounder from one practice to another trying to find one that works. When they do, they may not know why it works, whether because of efficiency or exclusion. They know only that it works. If they know why it works, they may be unable to articulate the reason to their lawyers, because they are not skilled in the legal and economic jargon in which such "business justifications" must be presented in court. . . .

. . . It takes economists years, sometimes decades, to understand why certain business practices work, to determine whether they work because of increased efficiency or exclusion. To award victory to the plaintiff because the defendant has failed to justify the conduct properly is to turn ignorance, of which we have regrettably much, into prohibition. That is a hard transmutation to justify.[165]

Imposing a burden of proof on entrepreneurs—often to prove a negative in the face of enforcers’ pessimistic assumptions—when that burden can’t plausibly be met can serve only to impede innovation.[166]

Even economists know very little about the optimal conditions for innovation. As Herbert Simon noted in 1959,

Innovation, technological change, and economic development are examples of areas to which a good empirically tested theory of the processes of human adaptation and problem solving could make a major contribution. For instance, we know very little at present about how the rate of innovation depends on the amounts of resources allocated to various kinds of research and development activity. Nor do we understand very well the nature of "know how," the costs of transferring technology from one firm or economy to another, or the effects of various kinds and amounts of education upon national product. These are difficult questions to answer from aggregative data and gross observation, with the result that our views have been formed more by arm-chair theorizing than by testing hypotheses with solid facts.[167]

Our understanding has not progressed very far since 1959, at least not insofar as it is applied to antitrust.[168] Simon astutely infers that innovation would be a function of "human adaptation and problem solving"; "the amounts of resources allocated to various kinds of research and development activity"; "the nature of 'know how'"; "the costs of transferring technology"; and "the effects of various kinds and amounts of education." But economists today tend to focus primarily on how market structure affects innovation. As Teece notes, however:

A less important context for innovation, although one which has received an inordinate amount of attention by economists over the years, is market structure, particularly the degree of market concentration. Indeed, it is not uncommon to find debate about innovation policy among economists collapsing into a rather narrow discussion of the relative virtues of competition and monopoly. . . .

. . . [Yet] reviews of the extensive literature on innovation and market structure generally find that the relationship is weak or holds only when controlling for particular circumstances. The emerging consensus is that market concentration and innovation activity most probably either coevolve or are simultaneously determined.[169]

Even to the extent that economic science has developed some better theories of innovation and its relationship with market structure and antitrust, the literature has still failed to develop clear and concrete theories or empirics that are readily implementable by courts or enforcers in the face of complex economic conditions.[170] Particularly to the extent that contemporary monopolization theorems purport to address novel, often-innovative business practices, they are problematic for antitrust law and policy aiming to maximize welfare (minimize errors), for several reasons.

First, they engender circumstances that increase the likelihood of antitrust complaints, investigations, and enforcement actions.[171] In the face of limited evidence, untestable implications, and possibility theorems regarding the consequences of novel, innovative conduct, a proper application of error-cost principles would be expected to counsel against intervention. Yet it is precisely in these situations that intervention may be more likely.

On the one hand, this may be because in the absence of information disproving a presumption of anticompetitive effect, there is an easier case to be made against the conduct—this despite putative burden-shifting rules that would place the onus on the complainant. On the other hand, successful innovations are also more likely to arouse the ire of competitors and/or customers, and thus both their existence and their negative characterization are more likely brought to the attention of courts or enforcers—abetted in private litigation by the lure of treble damages.

Antitrust is skeptical of, and triggered by, various changes in status quo conduct and relationships. This applies not only to economists (as discussed above),[172] but also to competitors (who are likely to raise challenges to innovative, even if perfectly procompetitive, conduct that makes competition harder), enforcers (who are inherently on the lookout for cutting-edge cases, because clearly infringing conduct is rare and opportunities to expand their authority are attractive), and judges (who may be particularly swayed by economists' possibility theorems to believe that they can make upholdable new law).

Business process and organizational innovations are also more relevant to the sorts of conduct with which antitrust concerns itself. New technological advance is rarely an inherent problem for antitrust; rather, its presence increases the potential cost of over-deterrence, but not necessarily its likelihood.[173] But novel technologies are frequently accompanied by novel business arrangements—and these are of particular concern to antitrust.

The problem stemming from both of these is that, to a first approximation (and especially in the digital economy), change (including by incumbents) is the hallmark of competition itself. In these markets competition means innovation and innovation means change. Since Jorde and Teece began writing about antitrust, and especially market definition, in high-tech industries in the late 1980s, we’ve been on notice that traditional, static, price-based antitrust analysis doesn’t work well for understanding these markets. For these industries, performance, not price, is paramount and competition generally unfolds sequentially rather than contemporaneously—which means innovation is key.[174]

Second, over-deterring business model and contractual innovations may be even more damaging to dynamic welfare and economic growth than is reducing incentives to engage in technological innovation.[175] “Although technology change is emphasized in the Schumpeterian tradition, organizational architectures sometimes are the primary force shaping logics of competition. . . . The effects of such organizational innovations . . . can be as profound as that of technology innovations.”[176]

Easterbrook’s 1984 article was particularly important for its identification of the risk of error-cost problems in the face of “new method[s] of making and distributing a product.”[177] The disconnect between business and contractual innovations in the market and economic understanding of them is significant. As Easterbrook noted:

Wisdom lags far behind the market. It is useful for many purposes to think of market behavior as random. Firms try dozens of practices. Most of them are flops, and the firms must try something else or disappear. Other practices offer something extra to consumers—they reduce costs or improve quality—and so they survive. In a competitive struggle the firms that use the best practices survive. Mistakes are buried.

Why do particular practices work? The firms that selected the practices may or may not know what is special about them. They can describe what they do, but the why is more difficult. Only someone with a very detailed knowledge of the market process, as well as the time and data needed for evaluation, would be able to answer that question. Sometimes no one can answer it.[178]

The inclination among economists (and especially decision-makers relying on economic science), as noted, is to condemn these practices. "The critical point here is that innovation is closely related to antitrust error. The argument is simple. Because innovation involves new products and business practices, courts and economists' initial understanding of these practices will skew initial likelihoods that innovation is anticompetitive and the proper subject of antitrust scrutiny."[179]

And yet it is precisely when confronted with innovative products and innovative contracts that the consequences of erroneous enforcement and over-deterrence are increased. There is little evidence, however, to suggest that the academic literature appropriately recognizes and calls out these risks, or counsels against the formulation of legal proscriptions based on stylized possibility theorems.[180]

Third, many technological innovations, especially those that facilitate or give rise to innovations in business organization, marketing, or distribution, tend to attract a disproportionate and generally unwarranted degree of skepticism from antitrust authorities looking to past experience and existing commercial relationships to assess their likely effects.

One problem is that scholars, regulators, politicians, and, of course, competitors tend to assume that markets were less problematic in the past, and that new business realities undermine relatively beneficial, functioning markets, thus fundamentally shifting the optimal balance of antitrust toward more enforcement. Many further argue in favor of more aggressive interventions in digital markets, aimed at "restoring" markets to the state that existed before allegedly anticompetitive conduct occurred.

The upshot is that antitrust scholarship often emphasizes the risks that new market realities create for competition, while idealizing the extent to which previous market realities led to procompetitive outcomes. This defect is not confined to digital markets, and is, in fact, nothing new. As early as 1942 Joseph Schumpeter derided “the creation of an entirely imaginary golden age of perfect competition that at some time somehow metamorphosed itself into the monopolistic age.”[181] But it is undoubtedly magnified in digital markets.

Underlying these numerous regulatory and scholarly interventions is a fear that new technologies will somehow cause a departure from competitive markets and innovation, moving the economy towards a new paradigm of monopolization and rent-seeking. Scholars and policymakers thus conclude that, facilitated by new market realities, firms that have achieved powerful positions today will be able to maintain their dominance for decades to come. This is a form of “antitrust dystopia.”[182] For its proponents, the future of competition is bleak, despite evidence that humanity has progressed tremendously throughout the last decades, and that information technology and competition have played a huge role in this transformation.[183]

The fear of the new—and the assumption that “ununderstandable practices”[184] emerge from anticompetitive impulses and generate anticompetitive effects—permeates not only much antitrust scholarship, but antitrust doctrine as well. There is an inherent conservatism in all law, especially that developed (as antitrust) through a common-law-like evolution from general principles. While much antitrust doctrine is perfectly capable of accommodating novel technology and innovative business processes, much doctrine is also inherently backward looking: It assesses novel practices by reference to previous structures, organizations, contracts, conduct, and the like, and largely evaluates them in the context of existing (and thus previously developed) competitive structures. As a result, there is a built-in “nostalgia bias” to much antitrust, which casts a skeptical eye upon novel conduct.

These dystopia and nostalgia biases induce proponents to resort to precautionary reasoning. While there is undoubtedly some level of uncertainty at play in digital markets, the fear that this uncertainty conceals irremediably problematic, fat-tailed outcomes[185] is unsupported. Nonetheless, such precautionary principle-type reasoning has increasingly permeated antitrust policy discourse.[186]

There is some merit to the argument that today's claims regarding false-positive error costs wrongly assume that the earlier, inhospitality tradition of antitrust still holds, but not as much merit as its proponents think.[187] It is certainly true, as noted above, that Easterbrook's normative error-cost analysis has become a core part of contemporary antitrust jurisprudence,[188] and courts are surely not as quick to strike down unfamiliar practices as they once were. But that doesn't mean there's no reason for concern.

The combination of the anti-market bias in favor of monopoly explanations for innovative conduct that courts, enforcers, and economists do not understand, the unwarranted fear of new technologies leading to "technopanics," and the increased, economy-wide stakes of antitrust intervention against innovative technologies and business practices increases both the likelihood that antitrust errors surrounding digital markets will be Type I, false-positive errors and the cost of those errors.

A.  The Costly Absence of Dynamic Analysis

In particular, with the ascendency of digital-economy antitrust, the risk of error from unduly static antitrust analysis is magnified, and the relative historical success of the error-cost framework may not portend a particularly restrained or accurate mode of antitrust analysis going forward. Indeed, the rise of antitrust populism—spurred on most significantly by concerns about digital markets—and the overwhelming focus on digital markets by antitrust enforcers around the globe suggest that Type I error-cost concerns will be an increasingly significant problem for the foreseeable future.

A standout reason for this concern is the disconnect between decision-makers' shallow appreciation of platform economics, economies of scale, network effects, data, and other attributes of digital markets, on the one hand, and, on the other, the deviations from perfectly competitive, atomistic markets that these attributes occasion in business conduct.

In oligopolistic markets, and especially markets predominated by platforms, “[a] stable outcome will require restrictions on the freedom of market participants; that is, stability will require some sort of coordination. These restrictions look like the bread and butter of antitrust lawsuits—cartels, tacit collusion, vertical restrictions, and mergers.”[189] “Clearly, when no competitive equilibrium is possible, something else has to take its place. Since the problems arise from too much competition and too little cooperation, the institutions that solve these problems necessarily imply a variety of arrangements that look ‘anticompetitive.’”[190]

As a result—and paradoxically—an excessive concern for the quite possibly costly, static effects on current users or competitors of innovation arising from nonstandard business models, product designs, and pricing schemes can harm welfare overall.[191]

With dynamic competition, new entrants and incumbents alike engage in new product and process development and other adjustments to change. Frequent new product introductions followed by rapid price declines are commonplace. Innovations stem from investment in R&D or from the improvement and combination of older technologies. Firms continuously introduce product innovations, and from time to time, dominant designs emerge. With innovation, the number of new entrants explodes, but once dominant designs emerge, implosions are likely, and markets become more concentrated. With dynamic competition, innovation and competition are tightly linked.[192]

Platforms especially have created problems for antitrust.[193] To begin with, much of the most important and insightful literature on platform economics has had scant influence on antitrust economics.[194] This literature consistently and compellingly describes the myriad ways in which platform ecosystems are optimized not by pure openness, but by various, limited restrictions imposed by platforms on their users—including both consumers and complementors who may also be competitors.[195]

The presumption that antitrust should tend to force platforms to allow complementors to compete on their preferred terms, free of constraints or competition from platforms, is a species of the idea that platforms are most socially valuable when they are treated as "essential facilities." But such an approach is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation. Platforms have an incentive to optimize openness and to assure complementors of sufficient returns on their platform-specific investments. This doesn't mean that maximum openness is optimal, however; in fact, a well-managed platform will typically exert control where doing so is most important, and permit openness where control is least meaningful.[196]

A properly dynamic analysis would view these limited constraints with far less skepticism than much of the antitrust community does currently. This does not mean there is no risk that a platform will impose anticompetitive constraints. But the imposition of platform constraints is so widespread that, unless the argument is that independent complementors and their investors are improbably ignorant or repeatedly deceived, it must be the case that they develop their business models and operate their businesses in recognition of the risk involved. This implies either that the risk is not as substantial as critics contend or else that complementors are sufficiently compensated for it. In either case, the fact that platform ecosystems are so vast and successful, and that they encourage significant innovation, suggests that we should hesitate before assuming that incentives to invest are inefficiently reduced by apparent, static foreclosure risks.

A complementor that makes itself dependent upon a platform for distribution of its content does take a risk. Although it may benefit from greater access to users, it places itself at the mercy of the platform—or at least faces great difficulty (and great cost) adapting to unanticipated platform changes over which it has no control. This is a species of the "asset specificity" problem that animates much of the Transaction Cost Economics literature.[197] But the risk may be a calculated one, and the imposition of constraints on complementors by and to the benefit of platforms may be optimal. As such, an assumption of harm from ex post foreclosure risks over-encouraging ex ante risk-taking by complementors, under-investment in platforms and platform innovation, and the sub-optimal allocation of resources.

Without adequate consideration of such dynamic effects, antitrust enforcers and courts are likely to make costly Type I errors—as seems to have happened in the European Commission’s Google Shopping case, for example. In its decision, the Commission asserts that Google’s prioritization of its own shopping results harms competition because it reduces traffic to complementary independent comparison shopping sites, potentially foreclosing them from minimum viable scale and causing them to under-innovate.[198] The decision does not identify actual consumer harm; it infers it from the reduction in traffic to comparison shopping sites, constituting an alleged impairment of an “effective competition structure.”[199]

But the fact that Google creates an opportunity for complementors to rely upon it doesn’t mean that a firm’s decision to do so—and to do so without a viable contingency plan—makes good business sense. In the case of comparison shopping sites, it was entirely predictable that Google’s algorithm would change over time. It was also entirely predictable that it would change in ways that could diminish or even eviscerate their traffic.[200]

The problem with the superficial analysis that assumes harm from the diminution of traffic to independent competitors is this: Protecting complementors from the inherent risk in a business model in which they are entirely dependent upon another company with which they have no contractual relationship is at least as likely to encourage excessive risk taking and inefficient overinvestment as it is to ensure that investment and innovation aren’t too low.

The relatively static, "nostalgic" analysis that essentially assumes that any given complementor that succeeded in the past "should" succeed in the future (especially against competition from a platform's own, integrated product) is deeply flawed. Past success under a particular set of platform constraints is no reason to assume that a complementor would provide any measure of innovation in the future under different constraints, nor is it an argument for insisting that the platform's constraints cannot change. Indeed, if platform discrimination is rampant, the fact that a complementor previously succeeded under different, discriminatory conditions offers no reason to think that there was an "effective competition structure" in the first place and thus that its previous success was in any way "merited."

What this overly static analysis misses is that, while constraints on complementors' access and use may look restrictive compared to an imaginary world where such restrictions were not allowed, in such a world the platform would not be built in the first place because it would not ensure enough revenue. Similarly, if platforms ever operated near the other extreme—full appropriation—the platform also would not be built because it would attract no complementors. Thus, platforms operate in a delicate middle ground in which some constraints on user/complementor freedom are, in fact, desirable. As Jonathan Barnett aptly sums it up:

The [platform] therefore faces a basic trade-off. On the one hand, it must forfeit control over a portion of the platform in order to elicit user adoption. On the other hand, it must exert control over some other portion of the platform, or some set of complementary goods or services, in order to accrue revenues to cover development and maintenance costs (and, in the case of a for-profit entity, in order to capture any remaining profits).[201]

Viewing such platform deviations from “perfect” competition as suspicious misunderstands platform dynamics and risks costly Type I error.[202]

A great deal of the antitrust literature on the relationship between market structure and innovation that adopts this "inhospitable" stance is as inherently flawed as the now-debunked literature on how market structure affects price and profits.[203] Not only does this literature adopt dramatic simplifying assumptions that offer little in the way of predictive power for implementation as policy addressing the real economy,[204] it also almost uniformly adopts a presumption that innovation is a function of market structure, rather than the other way around.

The ongoing debates within economics over the validity of the "inverted-U" model of innovation and market structure miss the point.[205]

[A] narrative has developed, based on a number of papers on the topic of “competition and innovation,” that antitrust enforcers should be tolerant of horizontal mergers when innovation is involved because “too much competition might be bad for innovation.” This narrative is summarized with reference to a purported inverted U-shaped relationship between “competition” and “innovation.” As one might expect, the narrative that “too much competition might be bad for innovation” has become popular among firms seeking to merge. However, that conclusion does not follow from a more careful reading of the literature.[206]

In response, Federico et al. suggest that, in order to make a competition-policy-relevant assessment of innovation,

one holds the market characteristics constant, including the demand structure, product characteristics, and the firms’ cost functions, and seeks to predict what happens to innovation when competition is lessened because of a merger or by exclusionary conduct. Absent synergies, a merger between significant rival innovators is likely to cause innovation to decline, for the reasons provided previously.[207]

But this approach, rooted quite explicitly in a "perfect competition" model of innovation (more competition = more innovation), is no more accurate than the inverted-U model, which at least acknowledges that the relationship between market structure and innovation need not be monotonic. This approach remains committed to a causal relationship between market structure and innovation, and even assumes that the causation is unidirectional: changes in market structure affect incentives to innovate, not the other way around.

But reality is considerably more complicated. And despite mainstream IO economics' disregard for the large body of work that has studied these complexities, that body of work does indeed exist. The literature on dynamic capabilities and organizational strategy, for example, takes an explicitly dynamic approach, and finds, at the very least, that the direction of causation is very often reversed: innovation determines market structure.[208]

As Sidak and Teece summarize, the bulk of contemporary antitrust analysis of innovation is unduly crabbed by adherence to inappropriate historical doctrines (like product market definition and concentration metrics), and suffers from a fatal lack of dynamic analysis, often inferring instead net consumer harm from short-term constraints on economic freedom in complicated and ill-understood markets.

To summarize, the basic framework employed in discussions about innovation, technology policy, and competition policy is often remarkably naïve, highly incomplete, and burdened by a myopic focus on market structure as the key determinant of innovation. Indeed, it is common to find a debate about innovation policy among economists collapsing into a rather narrow discussion of the relative virtues of competition and monopoly, as if they were the main determinants of innovation. Clearly, much more is at work.[209]

B.  Caveats

The error-cost approach is not limited to consideration of Type I and Type II errors, of course. As noted, the costs of information collection and administration are also crucial considerations. Indeed, Easterbrook’s 1984 article is ultimately an investigation of potential “simple rules” aimed at simplifying the costly and, in his description, vacuous Rule of Reason analysis that predominates in antitrust.[210] Yet, as Whinston laments:

The importance of administrative costs for the design of optimal antitrust policy has not, I think, been adequately recognized in either the economic or legal literatures. On the economics side, it is common for a journal article that shows that a particular practice may either raise or lower welfare to conclude that this implies that the practice should be accorded a Rule of Reason standard. As the foregoing discussion suggests, such a conclusion makes little sense. On the legal side, there appears to be surprisingly little formal application of the theory of optimal statistical decision-making to the issue of optimal legal rules.[211]

Adding complexity to antitrust analysis by incorporating more dynamic analysis may increase accuracy, but it may also decrease legal certainty and increase costs by even more.
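That tradeoff can be stated in rough expected-cost terms; the notation here is ours, purely for illustration, with $C_{I}$, $C_{II}$, and $A$ denoting expected false-positive costs, false-negative costs, and administrative (including uncertainty) costs under the static and dynamic approaches, respectively. Incorporating more dynamic analysis is worthwhile only if the error-cost savings from its added accuracy exceed its added administrative and uncertainty costs:

\[
\big(C_{I}^{\text{static}} + C_{II}^{\text{static}}\big) - \big(C_{I}^{\text{dyn}} + C_{II}^{\text{dyn}}\big) \;>\; A^{\text{dyn}} - A^{\text{static}}.
\]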

The notion that uncertainty about the future can have real economic effects—particularly for irreversible decisions (like sunk-cost investments)—is long- and well-established in the economic literature.[212] Policymakers often add a further layer of uncertainty, known as "economic policy uncertainty," through their monetary, fiscal, and regulatory decisions.[213] "The risk that regulation could reduce the rate of return below the cost of capital also creates a disincentive for investment."[214] Although identifying and measuring causal relationships between policy uncertainty and economic outcomes is fraught, attempts at such measurements have consistently pointed in the same direction. As one brief review sums it up:

We think the weight of the evidence and the lessons of economic theory argue for assigning some weight to the policy uncertainty view. If U.S. policymakers can deliver a policy environment characterized by greater certainty and stability, there will likely be a positive payoff in the form of improved macroeconomic performance.[215]

It is by no means clear that a more dynamic approach would increase legal certainty. Indeed, “the introduction of more dynamic elements into antitrust analysis will inevitably diminish the certainty and predictability of the law.”[216] But the primary reason for this is institutional problems, not information problems. “Operating under that greater degree of uncertainty means agencies (and to a lesser extent courts) will have greater discretion. There will simply be more degrees of freedom for the intuitions, biases, and personal and institutional preferences of decisionmakers to influence the outcomes of investigations and cases.”[217]

In order for dynamic analysis to be worthwhile, the greater accuracy of the approach (which is unquestionable relative to the simplified and problematic static approach that dominates today[218]) in terms of reducing both Type I and Type II errors must be sufficient to offset the increased administrative costs associated with a less certain standard. As Judge Ginsburg and Professor Wright conclude: “In their current state, the leading proposals to incorporate dynamics do not make us optimistic about the benefit, in no small part because of the difficulties facing the institutions charged with making antitrust decisions.”[219]

The concern is a valid one, and the increased discretion that flows from a less certain analytical framework would undoubtedly be a problem for a more dynamic approach, given current limitations of knowledge and problems of institutions. But it seems worthwhile to seek to impose some further restraint on prospective antitrust decision-making overall, and on findings of liability in particular, rather than to exclude dynamic analysis.

A related argument is that the increased use of the rule of reason, occasioned predominantly by past Chicago School critiques of rules of per se illegality, imposes significant administrative costs on enforcers, such that, without significantly greater resources, conduct subject to the rule of reason becomes effectively exempt from antitrust liability. As Ramsi Woodcock argues:

The enforcement budget constraint has made a mockery of the courts’ attempt to use the rule of reason to avoid taking a position on the error cost stalemate. The courts’ imposition of rules of reason on vast swaths of antitrust-relevant conduct has, through a reduction in enforcement by budget-constrained enforcers, turned out to be the imposition of a combination of rules of reason and de facto exemptions on vast swaths of antitrust-relevant conduct.[220]

The point is well-taken, and perhaps it is appropriate to increase enforcement agency budgets (or otherwise to enact institutional reforms that lower the expected cost of enforcement). But at the end of the day, the institutional limitations on enforcement under the rule of reason may be salutary. Although such an analysis remains to be rigorously performed, it is possible that a proper error-cost accounting, net of administrative costs, would indeed counsel under-enforcement of existing rules, which may not in the abstract go far enough to mitigate the risks of Type I errors. Indeed, in terms of legal certainty and administrative costs, reliable non-enforcement is an effective cost-reducing device.

III. Some Applications of the Error-Cost Framework in Antitrust Doctrine

The error-cost framework is operationalized in a number of ways, some of which are discussed above.[221] The primary application of the framework can be seen in various aspects of antitrust doctrine.

The incorporation of new economic knowledge about the welfare effects of conduct into antitrust analysis is often accomplished through the adoption of procedural, doctrinal rules. Substantive evolution of antitrust is at least partially a function of procedural evolution.[222] “Economic analysis influences not only the substantive legal standards that govern particular forms of business conduct, but also how courts choose which standard to apply from among the alternatives available.”[223]

These “procedural” rules include the range of doctrinal elements of the antitrust litigation process such as standing, antitrust injury, pleading standards, evidentiary standards, burdens of proof, and market definition.

A.  The Per Se/Rule of Reason Distinction

“The Court uses per se rules when the costs of judicial inquiry necessary to separate the beneficial from the detrimental instances of a practice exceed the gain from saving the relatively rare beneficial instances.”[224] As the Court has elucidated, conduct is deemed per se illegal when “the practice facially appears to be one that would always or almost always tend to restrict competition and decrease output.”[225] As Easterbrook points out, “[t]his is just another way of saying that per se rules should be used when they minimize the sum of the welfare loss from monopolization, the loss from false positives, and the costs of administering the rule.”[226]

The adoption of a presumption of illegality under the per se rule is a clear manifestation of the error-cost approach to antitrust. As the Court noted in Jefferson Parish:

[T]he rationale for per se rules in part is to avoid a burdensome inquiry into actual market conditions in situations where the likelihood of anticompetitive conduct is so great as to render unjustified the costs of determining whether the particular case at bar involves anticompetitive conduct.[227]

Importantly, the decision to assess conduct under the per se rule is not distinct from the rule of reason analysis. Rather, it is the preliminary stage of any rule of reason analysis: the characterization and classification of conduct. As Professor Meese explains:

As applied in the courts, then, Standard Oil's Rule of Reason manifests itself in a two-step analysis. The first step—per se analysis—requires characterization and then classification of a restraint. Here courts inquire into the nature of the agreement and decide whether it is unlawful per se or instead subject to further scrutiny. If the restraint survives this step, that is, if it is not unreasonable per se, courts proceed to the second step, namely, a fact-intensive analysis of the actual effects of the restraint. While courts refer to this second step as a Rule of Reason analysis, both steps of the process attempt to answer the question put by Standard Oil, viz., is a restraint "unreasonably restrictive of competitive conditions."[228]

As noted above, the error-cost framework counsels in favor of such an approach because it is mindful not only of the substantive accuracy of results, but also of the administrative costs of judicial decision-making and the deterrent effects of precedential judicial holdings. Animating the adoption of the per se approach, then, is the assumption that the probability times the cost of an erroneous determination (in terms of both any specific case, as well as its deterrent effect on subsequent economic activity) is smaller than the costs of repeated adjudication of the issue.[229]
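That assumption can be stated in simple expected-cost terms, echoing Easterbrook's formulation above; the notation is ours, purely for illustration. If $p_{\text{err}}$ is the probability that the per se presumption condemns conduct that is in fact beneficial, $c_{\text{err}}$ the social cost of that error (including its deterrent effect on subsequent conduct), and $A_{\text{RoR}} - A_{\text{per se}}$ the administrative costs saved by avoiding repeated full rule-of-reason adjudication, the per se rule is justified when

\[
p_{\text{err}} \cdot c_{\text{err}} \;<\; A_{\text{RoR}} - A_{\text{per se}}.
\]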

Much like the rules vs. standards tradeoff, the application of the per se rule in lieu of a full rule of reason analysis countenances some degree of substantive error if the administrative cost savings are sufficiently high.

Per se rules thus require the Court to make broad generalizations about the social utility of particular commercial practices. The probability that anti-competitive consequences will result from a practice and the severity of those consequences must be balanced against its procompetitive consequences. Cases that do not fit the generalization may arise but a per se rule reflects the judgment that such cases are not sufficiently common or important to justify the time and expense necessary to identify them.[230]

Application of the per se standard is thus limited to circumstances where courts have experience with the conduct at issue, and where they can “predict with confidence that [the conduct] would be invalidated in all or almost all instances under the rule of reason.”[231]

One important implication of this is that the per se rule is rarely, if ever, appropriate in the face of novel conduct or in a nascent industry. “[I]t is only after considerable experience with certain business relationships that courts classify them as per se violations.”[232] Indeed, per se condemnation is appropriate only when a practice lacks any plausible procompetitive rationale,[233] which will rarely be the case where there is no existing knowledge or experience to undermine the plausibility of procompetitive explanations of novel conduct.

If there is no long track record of judicial experience establishing that a practice always or almost always lessens competition, then the practice should be subject to analysis under the rule of reason. But, by the same token, as courts learn more about an industry and challenged practices, they can and should amend their approach to reflect updated learning. Thus, the courts’ approach “may vary over time, if rule-of-reason analyses in case after case reach identical conclusions.”[234]

In this regard, the concern for the risk of error costs in the face of innovative conduct is ameliorated, because a finding that a novel practice (or an old practice in a new context) is anticompetitive may be made only after a rigorous analysis of all the facts and circumstances—that is, with greater information specific to the untested conduct at hand. Such a rule sensibly avoids unintentional condemnation of economically valuable activity where the full effects of that activity are simply unknown to the courts.[235]

The "inhospitality" tradition of antitrust, by contrast, saw an "extreme hostility toward any contractual restraint on the freedom of individuals or firms to engage in head-to-head rivalry."[236] It also included an increased use of per se rules and suspicion of unfamiliar economic activity. As Professor Meese has masterfully detailed, the eventual (if incomplete. . .) shift away from the inhospitality tradition entailed the judicial acknowledgement of more advanced industrial organization economics—most notably, Transaction Cost Economics.[237] As new modes of economic organization came to pervade the economy—and, more importantly, as new understandings of such conduct came to pervade the academy—courts began to realize that per se condemnation was inappropriate for many "nonstandard" forms of conduct, even when they departed from the traditional "perfect competition" model.[238]

In general, the Transaction Cost Economics revolution has, ironically, increased the overall uncertainty of the antitrust enterprise. To the extent that the pre-1970s inhospitality tradition could be defended by reference to the state of economic learning at the time, that defense was no longer available after Williamson. Better understanding of the possibility of procompetitive explanations for previously condemned conduct helps to reduce uncertainty over those specific forms of conduct or situations, but it simultaneously decreases the certainty with which decisionmakers can reasonably condemn novel conduct they don't understand.

As noted above, this applies most starkly in the context of the assessment of the per se rule.[239] Once it becomes clear that the simplifying presumptions of the per se rule were not more likely than not to produce accurate outcomes, the use of the presumption must decline not only in those specific cases, but in all cases of novel conduct or novel circumstances, absent specific learning to the contrary.

Fundamentally, as antitrust jurisprudence properly evolves, greater substantive economic learning can, and does, lead to changes in antitrust procedure. But the overarching consequence of more complicated, nuanced economic analysis is invariably a move toward greater complexity (and thus higher costs) in antitrust adjudication.

In the per se context, for example, the Court eventually introduced an intermediate process (quick look review) in an attempt to mitigate the increased costs of the overall move away from per se illegality necessitated by better economic understanding.[240] But in practice the quick look process most likely simply formalized the inevitable reality that anything short of automatic application of a per se rule effectively entails a rule of reason analysis.

Thus, in California Dental Association v. Federal Trade Commission, the Court made clear that quick look is an appropriate means of bypassing the rule of reason only when “an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on customers and markets.” But that means that whenever the underlying conduct presents novel or nuanced economic circumstances for which past presumptions and burden-shifting rules may not be appropriate—which is to say, in the vast majority of cases that end up being litigated—an essentially full rule of reason analysis will be required:

Although we have said that a challenge to a “naked restraint on price and output” need not be supported by “a detailed market analysis” in order to “requir[e] some competitive justification,” it does not follow that every case attacking a less obviously anticompetitive restraint (like this one) is a candidate for plenary market examination. The truth is that our categories of analysis of anticompetitive effect are less fixed than terms like “per se,” “quick look,” and “rule of reason” tend to make them appear. We have recognized, for example, that “there is often no bright line separating per se from Rule of Reason analysis,” since “considerable inquiry into market conditions” may be required before the application of any so-called “per se” condemnation is justified.[241]

Despite the administrative costs, the Court has determined that antitrust law should not permit courts, which are “ill suited” to “act as central planners,” to condemn a new business model without detailed review of its actual competitive effects.[242] To that end, the Court has instructed that the per se rule should not be applied to “cooperative activity involving a restraint or exclusion” where there are even “plausible arguments that [the activities] were intended to enhance overall efficiency and make markets more competitive.”[243]

B.  Injury and Standing

The doctrines of antitrust injury and standing similarly serve to minimize direct costs by reducing the likelihood that courts will end up adjudicating meritless claims. In the case of these threshold determinations, justiciability is largely a function of the underlying purpose of antitrust. As the Court noted in Brunswick Corp. v. Pueblo Bowl-O-Mat, Inc., in which it created the doctrine of antitrust injury, “[t]he antitrust laws . . . were enacted for ‘the protection of competition not competitors.’ . . . It is inimical to the purposes of these laws to award damages for the type of injury claimed here.”[244] The antitrust injury doctrine introduced in Brunswick was thus intended to limit the scope of potential litigation to cases cognizable under the antitrust laws, and unlikely to amount to the subversion of the antitrust laws to benefit competitors.

What is notable about the antitrust injury doctrine (as well as standing, to a somewhat lesser extent) is that, while it is a threshold determination, it contemplates some understanding of substantive antitrust theories of harm. Not all conduct that causes an antitrust plaintiff to overpay, for example, constitutes antitrust injury. Rather, all antitrust plaintiffs, including those that allege per se violations, must prove that their injuries stem from a “competition-reducing aspect or effect” of the defendant’s behavior.[245]

The intention of such rules is clear: to economize on administrative costs without unduly sacrificing substantive accuracy. A plaintiff must show more than simply harm to a particular competitor, which might just as well arise from procompetitive as anticompetitive behavior. “In both cases [antitrust injury and standing], however, the procedural element of standing is a function of the underlying economic understanding of the conduct at issue. For injury to be deemed an injury ‘to competition, not competitors’ requires an understanding of the substantive economics.”[246]

Such rules serve to minimize error costs only if they are sufficiently accurate predictors of the ultimate outcome of litigated cases—that is, only if the cost of their inaccuracy is no greater than the administrative cost savings such threshold rules offer.
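
In the same stylized notation (again illustrative rather than drawn from the sources cited), a threshold rule such as antitrust injury is cost-justified only when the administrative savings from resolving cases at the threshold stage at least offset the expected cost of the errors the screen itself introduces, chiefly the erroneous dismissal of meritorious claims:

$$
\Delta C_{A} \;\geq\; \Pr(\text{meritorious claim screened out})\cdot C_{II}
$$

where $\Delta C_{A}$ is the administrative cost saved by avoiding full merits litigation and $C_{II}$ is the cost of the under-deterrence that results from an erroneous dismissal.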

C.  Market Definition

Market definition is similarly employed in the service of error-cost minimization. One of its primary functions is to decrease administrative costs: analysis of the total effects of challenged conduct would be inordinately expensive, or impossible, without some reduction in the scope of the inquiry. Market definition identifies the geographic and product areas most likely to be affected by challenged conduct, sacrificing a degree of analytical accuracy for the sake of tractability.

But an early and proper market definition determination also provides increased substantive accuracy and a better understanding of the issues throughout all stages of the adjudicatory process. As Greg Werden notes, “[a]lleging the relevant market in an antitrust case does not merely identify the portion of the economy most directly affected by the challenged conduct; it identifies the competitive process alleged to be harmed.”[247] Particularly where novel conduct or novel markets are involved, and thus the relevant economic relationships are poorly understood, market definition is crucial to determine “what the nature of [the relevant] products is, how they are priced and on what terms they are sold, what levers [a firm] can use to increase its profits, and what competitive constraints affect its ability to do so.”[248] This approach is perhaps most prominently (and certainly most recently) seen in the Supreme Court’s Amex decision, in which the Court held that, for many novel, platform markets, “evaluating both sides of a two-sided transaction platform is also necessary to accurately assess competition.”[249]

Despite the Court’s (controversial[250]) expansion of its approach to market definition in Amex to accommodate nonstandard platform conduct, market definition as usually employed in antitrust analysis is potentially quite problematic in the face of novel, innovative business arrangements.

Market definition is inherently retrospective—systematically discounting where competition is going, and locking even fast-evolving digital competitors into the past. Traditional market definition analysis that infers future substitution possibilities from existing or past market conditions will systematically lead to overly narrow markets and an increased likelihood of erroneous market power determinations. This is the problem of viewing Google as a “search engine” and Amazon as an “online retailer,” for example, and excluding each from the other’s market. In reality, of course, both are competing for scarce user attention (and advertising dollars) in digital environments; the specific functionality each employs in order to do so is a red herring. As such (and as is apparent to virtually everyone but antitrust enforcers and advocates of increased antitrust intervention), they invest significantly in new technology, product designs, and business models because of competitive pressures from each other—competition that comes from outside a retrospectively defined market. “Economics provides no reason to believe innovation ordinarily will come from within a ‘market’ as defined for the purpose of static antitrust analysis.”[251]

Relatively static market definitions may lead systematically to the erroneous identification of such innovation (or other procompetitive conduct) as anticompetitive. And the benefits of innovation aimed at competing with rivals outside an improperly narrow market, or procompetitive effects conferred on users elsewhere on the platform or in another market, will be relatively, if not completely, neglected.

“[M]arket definition is an entirely artificial construct that has been called an incoherent process as a matter of basic economic principles. Real markets do not come defined. Market definition is an exercise that serves to establish the group of products that are sufficiently substitutable with one another.”[252] But it must be recognized that some things excluded from the market because they seem to differ in superficial ways may actually be at least as similar, and at least as likely to operate as substitutes, as many of the items included in the market. This is most obviously true of digital platforms.

The bigger problem is that while such market definitions are, as noted, inherently backward-looking, true competition in high-tech markets tends to come from the future. As Jorde & Teece explain:

It is especially in assessing potential competition that a departure must be made from orthodox approaches when new technologies and new products are at issue. The reason is that potential competition from new technologies can destroy a firm’s position in a particular market and its underlying competences. Price competition, on the other hand, may erode profit margins but is less likely to completely destroy the value of a firm’s underlying technological, physical, and human assets. Accordingly, potential competition from new products and processes is the more powerful form of competition.[253]

Yet even when enforcers or courts consider future effects (say, of efficiencies) or potential entry, the analysis is typically limited to fact-intensive assessment of potential entry into existing markets (and potential entry rarely alters outcomes in either enforcement decisions or litigated cases). As the European Commission has explained regarding its analysis of potential competition:

The third source of competitive constraint, potential competition, is not taken into account when defining markets, since the conditions under which potential competition will actually represent an effective competitive constraint depend on the analysis of specific factors and circumstances related to the conditions of entry. If required, this analysis is only carried out at a subsequent stage, in general once the position of the companies involved in the relevant market has already been ascertained, and when such position gives rise to concerns from a competition point of view.[254]

There are, in fact, a few cases in which agencies have challenged conduct (mergers) on a theory of “actual potential competition,” asserting that one of the merging parties would likely have entered the other’s market, and thus that the merger would reduce (likely) future competition.

The FTC’s challenge to the Nielsen-Arbitron merger rested on an even more speculative analysis. There the Commission asserted a future relevant market for a product that did not yet exist, claimed that both of the merging firms were likely to enter this hypothetical market, and concluded that their combination would reduce future, hypothetical competition. Unlike the fact-specific analyses of asserted future effects in typical merger analysis, the assertion of anticompetitive effect in Nielsen rested not only on speculation but on “a general presumption that economic theory teaches that an increase in market concentration implies a reduced incentive to invest in innovation.”[255]

Furthermore, as suggested above, the myopic focus on product markets in antitrust diverts attention away from what may be the real dimensions of competition under a more dynamic understanding:

The capabilities approach would depart markedly from standard antitrust analysis. It would calibrate a firm’s competitive standing not by reference to products but by reference to more enduring traits. In a dynamic context, a firm will have a kaleidoscope of products, yet the underlying capabilities are likely to be more stable. . . . A capabilities approach might lead to “markets” defined more narrowly or broadly than how the current Merger Guidelines define product markets. Potential competition (or its absence) would receive more attention.

The tools for assessing capabilities may not be well developed yet, but they are developed enough to allow tentative application. Clearly, product market analysis can be unhelpful and misleading in dynamic contexts.[256]

Perhaps the most overtly static aspect of current market definition doctrine is its consideration of only demand-side substitution in defining markets, especially in merger analysis.[257] Yet an important component of getting market definition right, especially in high-tech markets, may be an expanded role for supply-side substitution in market definition and market power calculations, particularly from potential entrants.

The US Horizontal Merger Guidelines significantly downplay the role of supply-side substitution.[258] But demand-side substitution analysis is severely constrained in these markets, both because price competition does not predominate and because the relevant competition may not yet exist (product development often long predates commercialization, new entrants may come from very different quarters, and there may thus be no identifiable substitute products in the market to which consumers can turn). This is a key implication of the relative importance of competition via product innovation, rather than price, in these markets.[259] It also means that seemingly unrelated suppliers and seemingly unrelated markets should often properly be placed in the same market.

Footnotes

[1] This chapter builds on a number of prior works including Geoffrey A. Manne & Joshua D. Wright, Introduction, in Competition Policy and Patent Law Under Uncertainty: Regulating Innovation (Geoffrey A. Manne & Joshua D. Wright, eds. 2009); Geoffrey A. Manne & Joshua D. Wright, Innovation and the Limits of Antitrust, 6 J. Competition L. & Econ. 153 (2010); and Geoffrey A. Manne & Kristian Stout, The Evolution of Antitrust Doctrine After Ohio v. Amex and the Apple v. Pepper Decision That Should Have Been, 98 Neb. L. Rev. 425 (2019). I thank Bruce Kobayashi and Joshua Wright for helpful comments, and Rachel Burke for excellent research assistance.

[2] Robert H. Mnookin & Lewis Kornhauser, Bargaining in the Shadow of the Law: The Case of Divorce, 88 Yale L.J. 950, 968 (1979).

[3] See, e.g., Richard A. Posner, Cost-Benefit Analysis: Definition, Justification, and Comment on Conference Papers, 29 J. Leg. Stud. 1153, 1153 (2000) (“At the highest level of generality . . . , [cost-benefit analysis] is virtually synonymous with welfare economics, that is, economics used normatively—used, that is, to provide guidance for the formation of policy. . . . At the other end of the scale of generality, the term denotes the use of the Kaldor-Hicks (wealth maximization rather than utility maximization) concept of efficiency to evaluate government projects. . . .”).

[4] David Weisbach, Introduction: Legal Decision Making under Deep Uncertainty, 44 J. Leg. Stud. S319, S321 (2015). See generally Herbert Simon, Theories of Decision-Making in Economics and Behavioral Science, 49 Am. Econ. Rev. 253, 272 (1959) (“The decision-maker’s model of the world encompasses only a minute fraction of all the relevant characteristics of the real environment, and his inferences extract only a minute fraction of all the information that is present even in his model.”).

[5] See infra Section I.D.2.

[6] Verizon Commc’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398, 414 (2004) (quoting Matsushita Elec. Indus. Co. v. Zenith Radio Corp., 475 U.S. 574, 594 (1986)). This approach is not limited to addressing predation and duty to deal claims, and US courts have employed the error cost framework in a range of cases. See cases collected infra note 151.

[7] Manne & Wright, Innovation, supra note 1, at 168.

[8] See generally Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and the Limits of Antitrust Institutions, 78 Antitrust L.J. 1 (2012).

[9] See generally John Pratt, Howard Raiffa & Robert Schlaifer, Introduction to Statistical Decision Theory (1995).

[10] See Frank H. Knight, Risk, Uncertainty, and Profit 19 (1921) (“Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term ‘risk,’ as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. . . . The essential fact is that ‘risk’ means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. . . . It will appear that a measurable uncertainty, or ‘risk’ proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term ‘uncertainty’ to cases of the non-quantitative type.”).

[11] Adrian Vermeule, Judging Under Uncertainty: An Institutional Theory of Legal Interpretation 176 (2006).

[12] Some scholars attempt to refute this, largely by referring to the advance of economic theory that purports to better discern pro- from anticompetitive conduct. See, e.g., Herbert J. Hovenkamp & Fiona Scott Morton, Framing the Chicago School of Antitrust Analysis, 168 U. Penn. L. Rev. (forthcoming 2020) (working paper at 6-7) (“[M]ore up-to-date economic analysis revealed anticompetitive conduct and called for greater enforcement. Making the problem worse, about this time (1980s) the economics profession developed applied game theory and there was a spate of sophisticated models of imperfect competition. Now many more patterns of anticompetitive conduct could be explained and understood, particularly those in oligopoly markets.”). But, as I discuss below, that learning is, for the most part, entirely theoretical, constrained to “possibility theorems” divorced from realistic complications and the real institutional settings of decision-making. The proliferation of these theories may actually increase, rather than decrease, uncertainty by further complicating the analysis and asking generalist judges to choose from competing theories without any realistic means of doing so. See infra notes 137 to 162 and accompanying text.

[13] See, e.g., Joshua Wright, Former Comm’r, Fed. Trade Comm’n, Remarks at the Executive Committee Meeting of the New York State Bar Association’s Antitrust Section: Section 5 Recast: Defining the Federal Trade Commission’s Unfair Methods of Competition Authority 24 (June 19, 2013) (transcript available at https://www.ftc.gov/public-statements/2013/06/section-5-recast-defining-federal-trade-commissions-unfair-methods) (“Where conduct plausibly produces both costs and benefits for consumers it is fundamentally difficult to identify the net competitive consequences associated with the conduct. This is particularly true if business conduct is novel or is being applied to an emerging or rapidly changing industry. . . .”). See generally Manne & Wright, Innovation, supra note 1.

[14] See discussion of market definition in digital markets, infra Section III.C.

[15] C. Frederick Beckner III & Steven C. Salop, Decision Theory and Antitrust Rules, 67 Antitrust L.J. 41, 51 (1999).

[16] United States v. Microsoft Corp., 253 F.3d 34, 58 (D.C. Cir. 2001) (emphasis added).

[17] Isaac Ehrlich & Richard A. Posner, An Economic Analysis of Legal Rulemaking, 3 J. Legal Stud. 257, 277 (1974). See also Bruce H. Kobayashi & Joshua D. Wright, Antitrust and Ex-Ante Sector Regulation, in The GAI Report on the Digital Economy (2020).

[18] Ehrlich & Posner, supra note 17, at 277.

[19] Id. at 261.

[20] See generally Manne & Stout, Evolution, supra note 1.

[21] Ehrlich & Posner, supra note 17, at 272.

[22] Id. at 280.

[23] Paul L. Joskow & Alvin K. Klevorick, A Framework for Analyzing Predatory Pricing Policy, 89 Yale L.J. 213 (1979).

[24] Id. at 222.

[25] Id. at 218.

[26] See Frank H. Easterbrook, The Limits of Antitrust, 63 Tex. L. Rev. 1 (1984). See also Frank H. Easterbrook, On Identifying Exclusionary Conduct, 61 Notre Dame L. Rev. 972 (1986); Frank H. Easterbrook, Allocating Antitrust Decisionmaking Tasks, 76 Geo. L.J. 305 (1987). Jonathan Baker asserts that credit for the origination of the error-cost framework in antitrust, usually credited to Easterbrook, properly belongs to Joskow & Klevorick. See Jonathan B. Baker, Taking the Error Out of “Error Cost” Analysis: What’s Wrong with Antitrust’s Right, 80 Antitrust L.J. 1, 4-5 n. 16 (2015) (“Citing Easterbrook’s ‘pioneer[ing]’ role in using the error cost approach, Commissioner Joshua Wright describes the use of the approach within antitrust as ‘distinctively Chicagoan,’ without noting Joskow and Klevorick’s prior use.”). But Easterbrook himself notes Joskow & Klevorick’s use of the framework (along with that of several others), rightly pointing out that, previous to him, it was applied only to specific areas of antitrust. See Easterbrook, Limits, id. at 16 n. 34. Importantly, Joskow and Klevorick primarily saw its use as a function of overcoming the uncertainty of time. But Easterbrook applied the problem more generally to the inherent competitive ambiguity of business conduct. Moreover, as discussed below, Easterbrook was also the first to make the fundamental point that antitrust tended toward false positives, and that these are particularly costly relative to the cost of false negative errors. See infra Section I.D.3.

[27] Easterbrook, Limits, supra note 26, at 16.

[28] Id. at 16-17.

[29] For more on the consumer welfare standard, see Elyse Dorsey, Antitrust in Retrograde: The Consumer Welfare Standard, Socio-Political Goals, and the Future of Enforcement, in The GAI Report on the Digital Economy (2020).

[30] See, e.g., Beckner & Salop, supra note 15, at 43 (“A court inevitably must make its decisions on the basis of limited and imperfect information. As a result, a court can never be absolutely certain that its factual findings are correct, the correct litigant prevails, or the remedy it mandates still would be the best outcome if all the facts were known.”).

[31] Id. at 45 (“The decision theory approach can be reformulated in terms of minimizing the cost of error. . . . Whether framed in terms of error analysis or expected net benefit, the answer is the same. This answer represents the first key insight of the economic approach to decision making. Rational decision making is based on weighing the benefits and costs of alternative actions.”).

[32] Id. at 46.

[33] See Steven C. Salop, An Enquiry Meet for the Case: Decision Theory, Presumptions, and Evidentiary Burdens in Formulating Antitrust Legal Standards 9 (Geo. L. Ctr. Working Paper, 2017) (“In the case of antitrust judicial standards, the uncertainty is complicated by the fact that the decision will lead to market responses by the parties to the litigation and others. If the judicial decision has precedential effects, it also will lead to market responses by non-parties in the future.”).

[34] Table 1 is from Manne & Wright, Innovation, supra note 1, at 159, itself adapted from David S. Evans & Jorge Padilla, Designing Antitrust Rules for Assessing Unilateral Practices: A Neo-Chicago Approach, 72 U. Chi. L. Rev. 73 (2005).

[35] Adapted from Manne & Wright, Innovation, supra note 1, at 159.

[36] Easterbrook, Limits, supra note 26, at 15. See also Ehrlich & Posner, supra note 17.

[37] As then-Judge Breyer admonished, antitrust rules “must be administratively workable and therefore cannot always take account of every complex economic circumstance or qualification.” Town of Concord v. Boston Edison Co., 915 F.2d 17, 22 (1st Cir. 1990). Easterbrook makes the same point and proposes several simple rules in this vein. See Easterbrook, Limits, supra note 26, at 14, ff.

[38] Easterbrook, Limits, supra note 26, at 9. See also Frank H. Easterbrook, Workable Antitrust Policy, 84 Mich. L. Rev.1696 (1986).

[39] Manne & Wright, Innovation, supra note 1, at 157.

[40] Id.

[41] The limited ability of generalist judges and antitrust enforcers to apply economic science to complex facts is not the primary reason for this strain of uncertainty, although some critics reduce the argument to that claim. But nor is it irrelevant. Indeed, there is evidence that neither courts nor antitrust agencies perform particularly well in antitrust disputes involving sophisticated economics. See Michael R. Baye & Joshua D. Wright, Is Antitrust Too Complicated for Generalist Judges? The Impact of Economic Complexity & Judicial Training on Appeals, L. & Soc’y: Courts E-Journal 21 (2009); Joshua D. Wright & Angela Diveley, Do Expert Agencies Outperform Generalist Judges? Some Preliminary Evidence from the Federal Trade Commission, 1 J. Antitrust Enforcement 82 (2013).

[42] Manne & Wright, Innovation, supra note 1, at 163.

[43] See Easterbrook, Limits, supra note 26, at 14–15. This underlying issue is explored at length in Ehrlich & Posner, supra note 17, at 268 (“The inherent ambiguity of language and the limitations of human foresight and knowledge limit the practical ability of the rulemaker to catalog accurately and exhaustively the circumstances that should activate the general standard. Hence the reduction of a standard to a set of rules must in practice create both overinclusion and underinclusion.”).

[44] See Beckner & Salop, supra note 15, at 65 (“Thus, the choice between per se rules and the rule of reason has a decision theoretic basis.”). For a more detailed discussion of the choice between per se and rule of reason analysis, particularly in the context of digital markets, see infra Section III.A.

[45] Leegin Creative Leather Prod., Inc. v. PSKS, Inc., 551 U.S. 877, 886-87 (2007) (omission in original; citation omitted).

[46] See Easterbrook, Limits, supra note 26, at 10 (“These changes in the structure of antitrust analysis follow ineluctably from changes in our understanding of the economic consequences of the practices involved. If condemnation per se depends on a conclusion that almost all examples of some practice are deleterious, then discoveries of possible benefits lead to new legal rules. We cannot condemn so quickly anymore. What we do not condemn, we must study. The approved method of study is the Rule of Reason.”).

[47] See Murat C. Mungan & Joshua Wright, Optimal Standards of Proof in Antitrust 4 (George Mason Univ. Law & Econ. Research Paper Series No. 19-20, 2019), https://ssrn.com/abstract=3428771 (“Quite interestingly, the influence of Easterbrook’s observations concerning error costs has largely been seen in the evolution and shaping of antitrust liability rules, and academic discussions of these rules, rather than in specific procedural rules or evidentiary burdens.”).

[48] See Bell Atl. Corp. v. Twombly, 550 U.S. 544, 559 (2007) (adjusting pleading standards in order to avoid Type I errors, noting that it is “self-evident that the problem of discovery abuse cannot be solved by careful scrutiny of evidence at the Summary Judgment stage, much less lucid instructions to juries; the threat of discovery expense will push cost-conscious defendants to settle even anemic cases before reaching those proceedings”). See also Keith N. Hylton, When Should a Case Be Dismissed? The Economics of Pleading and Summary Judgment Standards, 16 Sup. Ct. Econ. Rev. 39 (2008).

[49] See Matsushita Elec. Indus. Co. v. Zenith Radio Corp., 475 U.S. 574 (1986).

[50] See Steven C. Salop, supra note 33 (“While the plaintiff in civil litigation bears the burden of proof to show that anticompetitive conduct is more likely than not, presumptions are added to decision process. Many antitrust presumptions are based on and represent the court’s view of the likely competitive impact of a category of restraint inferred from market facts. When there is a strong anticompetitive presumption, the evidentiary burden of production to rebut the presumption is placed on the defendant. . . When there is a procompetitive presumption, the burden of proof allocated to the plaintiff is heightened. Either way, presumptions place a ‘thumb on the scale.’”).

[51] For a more detailed discussion of the error-cost function of standing and injury, see infra Section III.B.

[52] See generally Manne & Stout, Evolution, supra note 1.

[53] For a more detailed discussion of the error-cost function of market definition, particularly in the context of digital markets, see infra Section III.C.

[54] Geoffrey A. Manne, In Defence of the Supreme Court’s ‘Single Market’ Definition in Ohio v. American Express, 7 J. Antitrust Enforcement 104, 106 (2019).

[55] See Salop, An Enquiry Meet for the Case, supra note 50; Andrew I. Gavil, Burden of Proof in U.S. Antitrust Law, 1 Issues in Comp. L. and Pol’y 125 (2008).

[56] See Beckner & Salop, supra note 15, at 61 (“Antitrust and many other areas of civil law apply a standard of proof based on ‘preponderance of the evidence.’ This standard typically is satisfied when the conduct is ‘more likely than not’ to lead to a particular result, or a likelihood in excess of 50 percent.”).

[57] See, e.g., Cal. Dental Ass’n v. FTC, 526 U.S. 756, 770 (1999); Polygram Holding, Inc. v. FTC, 416 F.3d 29, 36-37 (D.C. Cir. 2005).

[58] James C. Cooper, Luke M. Froeb, Dan O’Brien & Michael G. Vita, Vertical Antitrust Policy as a Problem of Inference, 23 Int’l J. Indus. Org. 639, 641 (2005).

[59] Beckner & Salop, supra note 15, at 61.

[60] Keith N. Hylton & Wendy Xu, Error Costs, Ratio Tests, and Patent Antitrust Law, 56 Rev. Indus. Org. 563, 567 (2020).

[61] See Michelle Burtis, Jonah B. Gelbach, & Bruce H. Kobayashi, Error Costs, Legal Standards of Proof, and Statistical Significance, 25 Sup. Ct. Econ. Rev. 1, 11 (2018) (“Comparing the preponderance standard (9) to the optimal standard derived in (5), it is easy to see that the two will coincide when where the cost of Type I and Type II errors are equal. . . .”).

[62] See, e.g., Jonathan B. Baker, Nancy L. Rose, Steven C. Salop & Fiona Scott Morton, Five Principles for Vertical Merger Enforcement Policy, 33 Antitrust 12 (2019).

[63] See Joshua D. Wright & Murat M. Mungan, The Easterbrook Theorem and Optimal Standards of Proof: An Application to Digital Markets (Working Paper, Jul. 15, 2020).

[64] Mungan & Wright, Optimal Standards of Proof in Antitrust, supra note 47, at 4.

[65] Id. at 13.

[66] See Matsushita Elec. Indus. Co., Ltd. v. Zenith Radio Corp., 475 U.S. 574, 585-88 (1986).

[67] See id. at 588-89.

[68] Id. at 591 (“These economic realities tend to make predatory pricing conspiracies self-deterring: unlike most other conduct that violates the antitrust laws, failed predatory pricing schemes are costly to the conspirators.”) (citing Easterbrook, Limits, supra note 26, at 26).

[69] Id. at 587 (“It follows from these settled principles that if the factual context renders respondents’ claim implausible—if the claim is one that simply makes no economic sense—respondents must come forward with more persuasive evidence to support their claim than would otherwise be necessary.”).

[70] Easterbrook, Limits, supra note 26.

[71] Id. at 15.

[72] See, e.g., Baker, Error Costs, supra note 26, at 2 (arguing that “antitrust conservatives . . . systematically overstate the incidence and significance of false positives [and] understate the incidence and significance of false negatives . . . .”); Hovenkamp & Scott Morton, supra note 12, at 28-29.

[73] Hovenkamp & Scott Morton, supra note 12, at 29.

[74] Id. at 10 (collecting references, including: Fiona Scott Morton, Modern U.S. Antitrust Theory and Evidence amid Rising Concerns of Market Power and Its Effects, Wash. Ctr. For Equitable Growth (May 29, 2019), https://perma.cc/879H-9QBK; Jan De Loecker & Jan Eeckhout, The Rise of Market Power and the Macroeconomic Implications 1-2 (Nat’l Bureau of Econ. Research, Working Paper No. 23687, 2017), https://www.nber.org/papers/w23687.pdf; Simcha Barkai, Declining Labor and Capital Shares 34 fig.3 (Univ. of Chi. Stigler Ctr. for the Study of the Econ. & the State, New Working Paper Series No. 2, 2016), https://perma.cc/W7TD-PP3R; Giulio Federico et al., Antitrust and Innovation: Welcoming and Protecting Disruption (Nat’l Bureau of Econ. Research, Working Paper No. 26005, 2019); Modern US Antitrust Theory and Evidence amid Rising Concerns of Market Power and Its Effects, Wash. Ctr. For Equitable Growth (May 29, 2019), https://perma.cc/8BFZ-AZBY).

[75] See, e.g., Germán Gutiérrez and Thomas Philippon, Declining Competition and Investment in the U.S. (NBER Working Paper No. 23583, 2017), https://www.nber.org/papers/w23583.

[76] See Jan De Loecker, Jan Eeckhout & Gabriel Unger, The Rise of Market Power and the Macroeconomic Implications, 135 Q. J. Econ. 561 (2020).

[77] See David Autor, et al., The Fall of the Labor Share and the Rise of Superstar Firms, 135 (2) Q. J. Econ. 645, 649 (2020), https://economics.mit.edu/files/12979.

[78] Ryan A. Decker, John Haltiwanger, Ron S. Jarmin & Javier Miranda, Where Has All the Skewness Gone? The Decline in High-Growth (Young) Firms in the U.S., 86 Eur. Econ. R. 4, 5 (2016), https://www.sciencedirect.com/science/article/pii/S0014292116300125?via%3Dihub.

[79] De Loecker, Eeckhout & Unger, supra note 76.

[80] See, e.g., James Traina, Is Aggregate Market Power Increasing? Production Trends Using Financial Statements (Stigler Ctr. Working Paper, 2018), https://pdfs.semanticscholar.org/8059/7e4e80edebd66d3eef57e28d324623ad9ee0.pdf; see also World Economic Outlook, April 2019: Growth Slowdown, Precarious Recovery, International Monetary Fund (Apr. 2019), https://www.imf.org/en/Publications/WEO/Issues/2019/03/28/world-economic-outlook-april-2019.

[81] See Loukas Karabarbounis & Brent Neiman, Accounting for Factorless Income (NBER Working Paper No. 24404, 2018), https://www.nber.org/papers/w24404.

[82] See Kevin Rinz, Labor Market Concentration, Earnings Inequality, and Earnings Mobility (U.S. Census Bureau Working Paper 2018-10, 2018), https://www.census.gov/content/dam/Census/library/working-papers/2018/adrm/carra-wp-2018-10.pdf.

[83] See Harold Demsetz, Industry Structure, Market Rivalry, and Public Policy, 16 J. L. & Econ. 1 (1973).

[84] Harold Demsetz, The Intensity and Dimensionality of Competition, in Harold Demsetz, The Economics of the Business Firm: Seven Critical Commentaries 137, 140-41 (1995).

[85] Steven Berry, Martin Gaynor & Fiona Scott Morton, Do Increasing Markups Matter? Lessons from Empirical Industrial Organization, 33 (3) J. of Econ. Perspectives 44, 48 (2019). See also Jonathan Baker & Timothy F. Bresnahan, Economic Evidence in Antitrust: Defining Markets and Measuring Market Power 24 (Stanford Law and Econ. Olin Working Paper No. 328, 2006) (“The Chicago identification argument has carried the day, and structure-conduct-performance empirical methods have largely been discarded in economics.”).

[86] Gregory J. Werden & Luke M. Froeb, Don’t Panic: A Guide to Claims of Increasing Concentration, 33 Antitrust 74, 74 (2018).

[87] Sharat Ganapati, Growing Oligopolies, Prices, Output, and Productivity 13 (Census Working Paper CES-WP-18-48, 2018) (forthcoming Am. Econ. J. Microeconomics 2020), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3030966.

[88] Id. at 1.

[89] Esteban Rossi-Hansberg, Pierre-Daniel Sarte & Nicholas Trachter, Diverging Trends in National and Local Concentration, in NBER Macroeconomics Annual 2020, volume 35 (Martin Eichenbaum & Erik Hurst, eds., forthcoming 2020), preliminary draft available at https://www.nber.org/chapters/c14475.

[90] Rossi-Hansberg, et al., Presentation: Diverging Trends in National and Local Concentration, NBER Macro Annual, slide 3 (2020), https://conference.nber.org/conf_papers/f132587/f132587.slides.pdf.

[91] Rossi-Hansberg, et al, supra note 89, at 27 (emphasis added).

[92] Chang-Tai Hsieh & Esteban Rossi-Hansberg, The Industrial Revolution in Services (Univ. of Chi., Becker Friedman Inst. for Econ. Working Paper No. 2019-87, 2020), https://www.princeton.edu/~erossi/IRS.pdf.

[93] Id. at 4 (“[R]ising [national] concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms”).

[94] Id. at 17.

[95] Richard Schmalensee, Inter-Industry Studies of Structure and Performance, in 2 Handbook of Industrial Organization 951, 1000 (Richard Schmalensee & Robert Willig eds., 1989). See also Timothy F. Bresnahan, Empirical Studies of Industries with Market Power, in 2 Handbook of Industrial Organization 1011, 1053-54 (Richard Schmalensee & Robert Willig eds., 1989) (“[A]lthough the [most advanced empirical literature] has had a great deal to say about measuring market power, it has had very little, as yet, to say about the causes of market power.”); Easterbrook, Workable Antitrust Policy, supra note 38, at 1698 (“Today it is hard to find an economist who believes the old structure-conduct-performance paradigm.”).

[96] Berry, et al., supra note 85, at 55.

[97] Baker & Bresnahan, supra note 85, at 26.

[98] See, e.g., Berry, et al., supra note 85, at 59 (“The decline of antitrust enforcement in recent decades may be a contributor to rising markups, although more research is needed to substantiate this conclusion firmly.”).

[99] Robert W. Crandall & Clifford Winston, Does Antitrust Policy Improve Consumer Welfare? Assessing the Evidence, 17 J. Econ. Persp. 3, 4 (2003). See also id. (“[T]he economics profession should conclude that until it can provide some hard evidence that identifies where the antitrust authorities are significantly improving consumer welfare and can explain why some enforcement actions and remedies are helpful and others are not, those authorities would be well advised to prosecute only the most egregious anticompetitive violations.”).

[100] See Michael Vita & David F. Osinski, John Kwoka’s Mergers, Merger Control, and Remedies: A Critical Review, 82 Antitrust L.J. 361 (2018). See also Michael Vita, Kwoka’s Mergers, Merger Control, and Remedies: Rejoinder to Kwoka, 28 Research in L. & Econ. 433 (2018).

[101] John Kwoka, Mergers, Merger Control, and Remedies: A Retrospective Analysis of U.S. Policy 158 (2015).

[102] Vita & Osinski, A Critical Review, supra note 100, at 385.

[103] Jeffrey T. Macher & John W. Mayo, The Evolution of Merger Enforcement Intensity: What Do the Data Show?, Geo. Ctr. for Bus. & Pub. Pol’y (Nov. 2019) (emphasis added), https://www.dropbox.com/s/69xqogvda9g5ehj/The%20Evolution%20of%20Merger%20Enforcement%20Intensity%20Nov.%20%2719.pdf.

[104] Among many others, see, for example, Fed. Trade Comm’n v. Surescripts, LLC, Civil Action No. 19-1080-JDB (D.D.C. filed Apr. 17, 2019); Fed. Trade Comm’n v. Qualcomm Inc., No. 17-CV-00220-LHK (N.D. Cal. June 26, 2017); United States v. American Express Co., 88 F. Supp. 3d 143 (E.D.N.Y. 2015); Fed. Trade Comm’n v. Actavis, Inc., 570 U.S. 136 (2013); In the Matter of Intel Corp., Fed. Trade Comm’n Docket No. 9341 (Oct. 29, 2010); United States v. Dentsply International, Inc., 399 F.3d 181 (3d Cir. 2005); U.S. v. Visa U.S.A., Inc., 344 F.3d 229 (2d Cir. 2003); United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001).

[105] Private civil actions are too numerous to count. Among significant recent cases, see, for example, Apple, Inc. v. Pepper, 139 S. Ct. 1514 (2019); In re Qualcomm Antitrust Litig., 328 F.R.D. 280 (N.D. Cal. 2018); O’Bannon v. Nat’l Collegiate Athletic Ass’n, 802 F.3d 1049 (9th Cir. 2015); American Needle, Inc. v. Nat’l Football League, 560 U.S. 183 (2010).

[106] See, e.g., Elyse Dorsey, et al., Consumer Welfare & the Rule of Law: The Case Against the New Populist Antitrust, Pepperdine L. R. 5-9 (2020).

[107] Lawrence J. White, Antitrust Activities During the Clinton Administration, in High Stakes Antitrust—The Last Hurrah? 11, 12-13 (Robert W. Hahn ed., 2003).

[108] Michael A. Carrier, The Rule of Reason: An Empirical Update for the 21st Century, 16 Geo. Mason L. Rev. 827, 828 (2009) (“Courts dispose of 97% of [Rule of Reason] cases at the first stage, on the grounds that there is no anticompetitive effect.”).

[109] Id. at 829 (“I reviewed 738 cases. Of these, 222 involved a court’s final determination in a rule of reason case. Out of the 222 cases, 215 were resolved on the grounds that the plaintiff did not prove an anticompetitive effect.”).

[110] See Manne & Wright, Innovation, supra note 1, at 199 (“[I]n the vast majority of private litigation involving exclusionary conduct and mergers, trebling has little economic function other than to draw excessive resources into enforcement and exacerbate the Type 1 error problem by attracting follow-on actions.”).

[111] A. Douglas Melamed, Antitrust Law and Its Critics, 83 Antitrust L.J. 269, 285 (2020).

[112] See Kwoka, supra note 101.

[113] Hovenkamp & Scott Morton, supra note 12, at 29.

[114] Easterbrook, Limits, supra note 26, at 2-3.

[115] Easterbrook, Limits, supra note 26, at 15-16.

[116] Baker, Error Costs, supra note 26, at 9-10.

[117] Easterbrook, Limits, supra note 26, at 24.

[118] Id. at 16 (emphasis added).

[119] See supra Section I.D.1.

[120] Dr. Miles Medical Co. v. John D. Park & Sons Co., 220 U.S. 373 (1911).

[121] As Bill Breit recounts, see William Breit, Resale Price Maintenance: What do Economists Know and When did They Know it?, 147 J. Institutional & Theoretical Econ. 72 (1991), economic opinion on RPM was mixed at the time the case was decided, but significantly undercut by at least one scholar in 1916. See J.R. Turner, Discussion, in 6(1) Am. Econ. Rev. (Supplement) (1916). Lester Telser swayed economic opinion comprehensively and decisively in relative favor of RPM in 1960. See Lester G. Telser, Why Should Manufacturers Want Fair Trade?, 3 J. L. Econ. 86 (1960).

[122] Leegin Creative Leather Prod., Inc. v. PSKS, Inc., 551 U.S. 877 (2007).

[123] Meaning that a rule that was so clearly erroneous—say, a rule literally prohibiting “all contracts in restraint of trade” as the Sherman Act nominally demands—would be subject to a different calculus.

[124] Hovenkamp & Scott Morton, supra note 12, at 4-5 (citing to Easterbrook, Limits, supra note 26, at 15-16).

[125] See, e.g., id.; Kevin A. Bryan & Erik Hovenkamp, Startup Acquisitions, Error Costs, and Antitrust Policy, 87 U. Chi. L. Rev. 331, 336 (2020) (“The problem with this argument is that it abstracts away from strategic interactions among the incumbent and the entrant.”) (distinguishing the argument that “[e]xcess profits therefore attract entry,” attributed to Chicago School pioneer, George Stigler. See George J. Stigler, A Theory of Oligopoly, 72 J. Pol. Econ 44 (1964)).

[126] Hovenkamp & Scott Morton, supra note 12, at 37.

[127] See especially Alan J. Meese, Market Failure and Non-Standard Contracting: How the Ghost of Perfect Competition Still Haunts Antitrust, 1 J. Competition L. & Econ. 21 (2005); Alan J. Meese, Price Theory, Competition, and the Rule of Reason, 2003 U. Ill. L. Rev. 77; Alan J. Meese, Price Theory and Vertical Restraints: A Misunderstood Relation, 45 UCLA L. Rev. 143 (1997).

[128] There may be, of course, some disagreement about who counts as a “Chicago School scholar.” For many Chicago School critics, it seems that the Chicago School of antitrust starts and ends with Robert Bork. Others may limit the Chicago School’s scope to actual University of Chicago professors like George Stigler, Aaron Director, Ronald Coase, Lester Telser, Richard Posner, and Frank Easterbrook. But most Chicago School adherents would also count a significant number of non-Chicago-based scholars among their ranks including, among many others, Oliver Williamson, Yale Brozen, Armen Alchian, Harold Demsetz, Ken Elzinga, and Ben Klein.

[129] See Ronald H. Coase, The Nature of the Firm, 4 Economica 386 (1937); Oliver E. Williamson, The Vertical Integration of Production: Market Failure Considerations, 61 Am. Econ. Rev. 112 (1971).

[130] See Ward S. Bowman, The Prerequisites and Effects of Resale Price Maintenance, 22 U. Chi. L. Rev. 825 (1955); Telser, supra note 121; Howard Marvel, Exclusive Dealing, 25 J. L. Econ. 1 (1982). Non-Chicago economists, by contrast, saw information dissemination devices like advertising and minimum RPM as costly efforts to extend market power. See e.g., Joe S. Bain, Price Theory 449-50 (1952); William S. Comanor, White Motor and Its Aftermath, 81 Harv. L. Rev. 1419 (1967).

[131] See Benjamin Klein, Robert Crawford and Armen Alchian, Vertical Integration, Appropriable Rents and the Competitive Contracting Process, 21 J. L. Econ. 297 (1978); Benjamin Klein, Fisher-General Motors and the Nature of the Firm, 43 J. L. Econ. 105 (2000); Benjamin Klein, Asset Specificity and Holdups, in The Elgar Companion to Transaction Cost Economics 120-26 (Peter G. Klein & Michael Sykuta, eds., 2010). Pre-Chicago antitrust, in contrast, condemned any conduct that impeded the free flow of factors of production, thus finding things like exclusive territories and RPM illegal per se. See e.g., Standard Oil Co. v. U.S., 337 U.S. 293 (1949); U.S. v. Topco, 405 U.S. 596 (1972).

[132] Despite later critics asserting the definitiveness of such ideas, see Einer Elhauge, Tying, Bundled Discounts, and the Death of the Single Monopoly Profit Theory, 123 Harv. L. Rev. 397 (2009), early Chicago School analysis recognized price discrimination explanations, the differences between fixed and variable proportions, and the possibility of a leverage argument in tying cases. See, e.g., Ward S. Bowman, Jr., Tying Arrangements and the Leverage Problem, 67 Yale L.J. 19 (1957). But see Daniel A. Crane & Joshua D. Wright, Can Bundled Discounting Increase Consumer Prices Without Excluding Rivals?, Competition Pol’y Int’l (Autumn 2009) 209, 210 (“The conditions necessary for monopoly leveraging through tying are narrow and rarely exhibited in real markets and, thus, we should continue to be presumptively skeptical about leverage claims.”); Daniel A. Crane, Mixed Bundling, Profit Sacrifice, and Consumer Welfare, 55 Emory L.J. 423, 464 (2006) (“Whether practices facilitating product branding or price discrimination are efficient in this sense raises questions that are fact-dependent at best and virtually always unanswerable in litigation.”).

[133] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73 Am. Econ. Rev. 267 (1983); Thomas G. Krattenmaker & Steven C. Salop, Anticompetitive Exclusion: Raising Rivals’ Costs to Achieve Power over Price, 96 Yale L.J. 209 (1986); Steven C. Salop & David T. Scheffman, Cost-Raising Strategies, 36 J. Indus. Econ. 19 (1987).

[134] See Aaron Director & Edward H. Levi, Law and the Future: Trade Regulation, 51 Northwestern U. L. Rev. 281, 293 (1956) (explaining that a manufacturer’s monopoly power may, in fact, be increased by foreclosing access to distributors when “the restrictions on the outlets impose greater costs on potential competitors than they do on the monopoly company itself”).

[135] The best-known empirical demonstration of RRC belongs to Chicago School scholars. See Elizabeth Granitz & Benjamin Klein, Monopolization by “Raising Rivals’ Costs”: The Standard Oil Case, 39 J.L. & Econ. 1 (1996).

[136] Meese, Non-Standard Contracting, supra note 127, at 83.

[137] See Manne & Wright, Innovation, supra note 1, at 163–77; Elyse Dorsey, Anything You Can Do, I Can Do Better—Except in Big Tech?: Antitrust’s New Inhospitality Tradition, 68 Kansas L. Rev. 975 (2019).

[138] Ronald Coase, Industrial Organization: A Proposal for Research, in 3 Policy Issues and Research Opportunities in Industrial Organization 59, 67 (Victor Fuchs, ed., 1972).

[139] See George A. Akerlof, Sins of Omission and the Practice of Economics, 58 J. Econ. Lit. 405 (2020); Deirdre N. McCloskey, Why Economics Is on the Wrong Track, in Economics of Economists: Institutional Setting, Individual Incentives, and Future Prospects 211 (Alessandro Lanteri & Jack Vromen eds., 2014).

[140] See, e.g., D. Daniel Sokol, Vertical Mergers and Entrepreneurial Exit, 70 Fla. L. Rev. 1357, 1371 (2018) (“For the past thirty years, antitrust literature has largely ignored the significant literature within strategy related to vertical integration in the technology setting. Overall, this literature shows the important efficiency-enhancing effects of vertical mergers. These mergers are largely complementary, combining the strengths of the acquiring firm in process innovation with the product innovation of the target firms.”). See also Geoffrey A. Manne, Kristian Stout & Eric Fruits, The Fatal Economic Flaws of the Contemporary Campaign Against Vertical Integration, 68 Kansas L. Rev. 923, 925-26 (2020) (“This narrow view of vertical integration thus ignores and threatens to undermine dynamic competition and innovation. Indeed, if we take the organization theory and business strategy literature on the organization of firms in dynamic industries seriously, the status quo might even be over-enforcing, and leading to the deterrence of innovative, procompetitive mergers.”).

[141] Here, too, Coase offers the best, most succinct explanation of why this assumption is a problem for a sensible error-cost analysis:

There is, of course, a further alternative, which is to do nothing about the problem at all [because] the costs involved in solving the problem by regulations . . . will often be heavy [and] it will no doubt be commonly the case that the gain which would come from regulating the actions which give rise to the harmful effects will be less than the costs involved in government regulation.

All solutions have costs and there is no reason to suppose that government regulation is called for simply because the problem is not well handled by the market or the firm.

Ronald H. Coase, The Problem of Social Cost, 3 J.L. & Econ. 1, 18 (1960).

[142] Meese, Non-Standard Contracting, supra note 127, at 85.

[143] Id. at 94.

[144] This is true despite the fact that even non-Chicago School scholars broadly recognize that the reallocation of resources through the elimination of horizontal or vertical competition can increase efficiency. See, e.g., Michael D. Whinston, Lectures on Antitrust Economics 16-17 (2008) (“It is well-understood by now that the number of firms that unfettered competition can support in a market need not be efficient in such cases. . . . The . . . ruinous competition argument can be viewed as saying exactly this: that unrestricted oligopolistic competition would lead to too few firms . . . relative to what is socially efficient. In such cases, it is possible that an inducement to entry in the form of cartelized prices could actually raise social welfare.”); Baker, Error Costs, supra note 26, at 30 (noting that distinguishing procompetitive from anticompetitive collusion may be no easier than for exclusion because “horizontal price fixing and market division . . . also can have efficiency justifications”).

[145] See, e.g., Armen Alchian, Uncertainty, Evolution, and Economic Theory, 58 J. Pol. Econ. 211 (1950); Richard R. Nelson & Sidney G. Winter, An Evolutionary Theory of Economic Change (1982).

[146] See, e.g., David J. Teece & Gary Pisano, The Dynamic Capabilities of Firms, 3 Indus. & Corp. Change 537 (1994); Richard N. Langlois & P.L. Robertson, Firms, Markets, and Economic Change: A Dynamic Theory of Business Institutions (1995).

[147] See, e.g., Shelby D. Hunt, The Resource-Advantage Theory of Competition: Toward Explaining Productivity and Economic Growth, 4 J. Mgmt. Inquiry 317 (1995).

[148] See, e.g., Edith Penrose, The Theory of the Growth of the Firm (1959).

[149] Bruce H. Kobayashi & Timothy J. Muris, Chicago, Post-Chicago, and Beyond: Time to Let Go of the 20th Century, 78 Antitrust L.J. 147, 166 (2012).

[150] Wright & Mungan, The Easterbrook Theorem and Optimal Standards of Proof, supra note 63, at 5.

[151] See, e.g., Matsushita Elec. Indus. Co., Ltd. v. Zenith Radio Corp., 475 U.S. 574, 594 (1986) (“Mistaken inferences in cases such as this one are especially costly, because they chill the very conduct the antitrust laws are designed to protect.”); Brooke Grp. Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209, 233 (1993) (refraining from condemning price cuts because of the cost of Type I errors stemming from “the antitrust laws [serving as] an obstacle to the chain of events most conducive to a breakdown of oligopoly pricing and the onset of competition.”); Verizon Commc’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398, 414 (2004) (“The cost of false positives counsels against an undue expansion of §2 liability.”); Bell Atl. Corp. v. Twombly, 550 U.S. 544, 559 (2007) (adjusting pleading standards in order to avoid Type I errors, noting that it is “self-evident that the problem of discovery abuse cannot be solved by careful scrutiny of evidence at the Summary Judgment stage, much less lucid instructions to juries; the threat of discovery expense will push cost-conscious defendants to settle even anemic cases before reaching those proceedings”); Credit Suisse Sec. (USA) LLC v. Billing, 551 U.S. 264, 281 (2007) (“In light of the nuanced nature of the evidentiary evaluations necessary to separate the permissible from the impermissible, it will prove difficult for those many different courts to reach consistent results.”); Leegin Creative Leather Prod., Inc. v. PSKS, Inc., 551 U.S. 877, 895 (2007) (“[R]ules can be counterproductive. They can increase the total cost of the antitrust system by prohibiting procompetitive conduct the antitrust laws should encourage.”) (citing Easterbrook, Vertical Arrangements and the Rule of Reason, 53 Antitrust L. J. 135, 158 (1984)); Brunswick Corp. v. Riegel Textile Corp., 752 F.2d 261, 267 (7th Cir. 1984) (quoting Easterbrook, supra note 26, at 33–39); SCFC ILC, Inc. v. Visa USA, Inc., 36 F.3d 958, 965 n.9 (10th Cir. 1994) (quoting Easterbrook, supra note 26, at 17); Saint Alphonsus Med. Ctr.-Nampa Inc. v. St. Luke’s Health Sys., Ltd., 778 F.3d 775, 790 (9th Cir. 2015) (quoting Easterbrook, supra note 26, at 39).

[152] Kobayashi & Muris, Chicago, Post-Chicago, and Beyond, supra note 149, at 148 (“[T]here is very little empirical evidence based on in-depth industry studies that RRC is a significant antitrust problem.”); id. at 166 (“Because of [the Post-Chicago School] literature’s focus on theoretical possibility theorems, little evidence exists regarding the empirical relevance of these theories.”).

[153] Id. at 166.

[154] These papers are collected and assessed in several literature reviews including Francine Lafontaine & Margaret Slade, Exclusive Contracts and Vertical Restraints: Empirical Evidence and Public Policy, in Handbook of Antitrust Economics (Paolo Buccirossi ed., 2008); Daniel P. O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems, in The Pros and Cons of Vertical Restraints 40, 76–81 (Konkurrensverket Swedish Competition Authority ed., 2008); Cooper, et al., supra note 58, at 648; Global Antitrust Institute, Comment Letter on Federal Trade Commission’s Hearings on Competition and Consumer Protection in the 21st Century, Vertical Mergers 8 (George Mason Law & Econ. Research Paper No. 18-27, Sep. 6, 2018). Even the reviews of such conduct that purport to be critical are only tepidly so. See Marissa Beck & Fiona Scott Morton, Evaluating the Evidence on Vertical Mergers 2 (Working Paper, Feb. 26, 2020), https://ssrn.com/abstract=3554073 (“many vertical mergers are harmless or procompetitive, but that is a far weaker statement than presuming every or even most vertical mergers benefit competition regardless of market structure.”).

[155] Daniel Hosken & Christopher Taylor, Vertical Disintegration: The Effect of Refiner Exit from Gasoline Retailing on Retail Gasoline Pricing 34 (FTC Bureau of Economics Working Paper No. 344, Jul. 2020), https://www.ftc.gov/system/files/documents/reports/vertical-disintegration-effect-refiner-exit-gasoline-retailing-retail-gasoline-pricing/working_paper_344.pdf. For papers with similar results, see Fernando Luco & Guillermo Marshall, The Competitive Impact of Vertical Integration by Multiproduct Firms, 12 Am. Econ. J.: Microeconomics 1 (2020); Gregory S. Crawford, Robin S. Lee, Michael D. Whinston & Ali Yurukoglu, The Welfare Effects of Vertical Integration in Multichannel Television Markets, 86 Econometrica 891 (2018).

[156] See supra note 133.

[157] For a discussion of how elements of antitrust doctrine implement error-cost concerns, see supra Section I.C. See generally Manne & Stout, supra note 1.

[158] Hovenkamp & Scott Morton, supra note 12.

[159] Id. at 37.

[160] Kobayashi & Muris, Chicago, Post-Chicago, and Beyond, supra note 149, at 148.

[161] Id. at 166.

[162] Id. at 162.

[163] David McGowan, Innovation, Uncertainty, and Stability in Antitrust Law, 16 Berkeley Tech. L.J. 729, 738 (2001). McGowan does go on to argue that “skepticism is not surrender. It instead demands nothing more than a clear-eyed look at evidence of market structure and behavior, and rigorous analysis of the implications of both for social welfare.” Id.

[164] Manne & Wright, Innovation, supra note 1, at 164.

[165] Easterbrook, Exclusionary Conduct, supra note 26, at 975. See also Manne & Wright, Innovation, supra note 1, at 165; Geoffrey A. Manne & E. Marcellus Williamson, Hot Docs vs. Cold Economics: The Use and Misuse of Business Documents in Antitrust Enforcement and Adjudication, 47 Ariz. L. Rev. 609, 619-24 (2005) (discussing the disconnect between business knowledge and economic reality). See generally Alchian, supra note 145.

[166] See generally Adam Thierer, Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle, 14 Minn. J. L. Sci. & Tech. 309 (2013).

[167] Simon, Theories of Decision-Making, supra note 4, at 278-79.

[168] See, e.g., Manne & Wright, Introduction, supra note 1, at 1 (“[T]he ratio of what is known to unknown with respect to the relationship between innovation, competition, and regulatory policy is staggeringly low. In addition to this uncertainty concerning the relationships between regulation, innovation, and economic growth, the process of innovation itself is not well understood.”); Manne & Wright, Innovation, supra note 1, at 166 (“[A]s a general rule, economists know much less about the relationship between competition and innovation, and in turn, consumer welfare, than they do about standard price competition.”); Joshua D. Wright, Antitrust, Multi-Dimensional Competition, and Innovation: Do We Have An Antitrust Relevant Theory of Competition Now?, in Competition Policy and Patent Law Under Uncertainty: Regulating Innovation (Geoffrey A. Manne & Joshua D. Wright eds., 2010); Richard J. Gilbert, Competition and Innovation, in 1 Issues in Competition Law and Policy 577, 583 (W. Dale Collins ed., 2008) (“[E]conomic theory does not provide unambiguous support either for the view that market power generally threatens innovation by lowering the return to innovative efforts nor the Schumpeterian view that concentrated markets generally promote innovation.”).

[169] David J. Teece, Technological Innovation and the Theory of the Firm: The Role of Enterprise-Level Knowledge, Complementarities, and (Dynamic) Capabilities, in 1 Handbook of the Economics of Innovation 679, 687-88 (Bronwyn H. Hall & Nathan Rosenberg eds., 2010).

[170] This problem is endemic to contemporary economics’ possibility theorems, of course. See, e.g., Richard A. Posner, Antitrust in the New Economy, 68 Antitrust L.J. 925, 927 (2001) (“Whenever an antitrust court is called on to balance efficiency against monopoly, there is trouble; legal uncertainty, and the likelihood of error, soar.”); Manne & Wright, Innovation, supra note 1, at 172 (“Thus, a key critique of the modern industrial organization literature and its possibility theorems involving anticompetitive behavior has been that it fails to consistently produce testable implications.”).

[171] See Manne & Wright, Innovation, supra note 1, at 185 (“Business innovations, like product innovations, confer competitive advantages and, while remaining ill-understood, engender uncertainty, rent-seeking, and reprisal.”).

[172] Id. (“Business innovations present interesting opportunities for economic analysis (to an even greater extent than product innovations, in fact) and are thus susceptible to the systematic biases in economic analysis that we have discussed.”).

[173] As noted below, however, a significant impetus toward “precautionary antitrust” often attends technological innovation—in fact increasing the likelihood of antitrust over-deterrence. See infra notes 184 to 188 and accompanying text.

[174] See, e.g., Thomas M. Jorde & David J. Teece, Competing Through Innovation: Implications for Market Definition, 64 Chi.-Kent L. Rev. 741, 742 (1988) (“Moreover, in markets characterized by rapid technological progress, competition often takes place on the basis of performance features and not price.”). See also David S. Evans & Richard Schmalensee, Some Economic Aspects of Antitrust Analysis in Dynamically Competitive Industries, in 2 Innovation Policy and the Economy 1, 3 (Adam B. Jaffe, et al., eds., 2002) (“The defining feature of new-economy industries is a competitive process dominated by efforts to create intellectual property through R&D, which often results in rapid and disruptive technological change.”).

[175] See Manne & Wright, Innovation, supra note 1, at 185 (“These innovations are also extremely valuable, in particular because they may be directly extendable to a much wider range of the economy than product innovations, and like product innovations, business innovations can have wide-ranging, dynamic follow-on effects throughout the economy.”).

[176] William P. Barnett, The Red Queen Among Organizations: How Competitiveness Evolves 19-20 (2008).

[177] Easterbrook, Limits, supra note 26, at 5.

[178] Id.

[179] Manne & Wright, Innovation, supra note 1, at 167.

[180] There are occasional exceptions, of course. See, e.g., Michael D. Whinston, Tying, Foreclosure, and Exclusion, 80 Am. Econ. Rev. 837 (1990) (“While the analysis vindicates the leverage hypothesis on a positive level, its normative implications are less clear. Even in the simple models considered here, which ignore a number of other possible motivations for the practice, the impact of this exclusion on welfare is uncertain. This fact, combined with the difficulty of sorting out the leverage-based instances of tying from other cases, makes the specification of a practical legal standard extremely difficult.”).

[181] Joseph A. Schumpeter, Capitalism, Socialism and Democracy 71 (Routledge ed. 1976).

[182] The term “antitrust dystopia,” along with its cousin “antitrust nostalgia,” is from Dirk Auer & Geoffrey A. Manne, Antitrust Dystopia and Antitrust Nostalgia: Alarmist Theories of Harm in Digital Markets (ICLE Working Paper, forthcoming).

[183] See Matt Ridley, The Rational Optimist: How Prosperity Evolves (2010). See also Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (2018).

[184] Coase, Industrial Organization, supra note 138, at 67.

[185] That is, low-probability/high-impact events, sometimes referred to as “Black Swans.” See Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable xvii (2008).

[186] See, e.g., Aurelien Portuese, The Rise of Precautionary Antitrust: An Illustration with the EU Google Android Decision, CPI Europe Column (Nov. 17, 2019), https://www.competitionpolicyinternational.com/the-rise-of-precautionary-antitrust-an-illustration-with-the-eu-google-android-decision/. See also Thierer, supra note 166, at 342-43 (noting the incentives of competitors to foment such fears, including in the antitrust context, to burden their rivals with “regulation that might constrain their efforts to innovate, expand, and compete. Unfortunately, when companies and other interests employ such tactics, it merely raises the general level of anxiety about information technology and the Internet more broadly”).

[187] See, e.g., William H. Page, Antitrust Review of Mergers in Transition Economies: A Comment, With Some Lessons from Brazil, 66 U. Cin. L. Rev. 1113, 1124 (1998) (“This approach is widely discredited in modern American antitrust because courts, recognizing the limits of their powers of evaluation and remediation, have come to respect the dynamism of the market, and to hesitate before prohibiting complex practices.”).

[188] See supra note 151.

[189] George Bittlingmayer, The Economic Problem of Fixed Costs and What Legal Research Can Contribute, 14 L. & Social Inquiry 739, 740 (1989). Bittlingmayer notes Lester Telser’s foundational role in this literature—the theory of the empty core—collected in a series of books in the late 1970s and 1980s. See Lester G. Telser, Economic Theory and the Core (1978); Lester G. Telser, A Theory of Efficient Cooperation and Competition (1987); and Lester G. Telser, Theories of Competition (1988). See also George Bittlingmayer, Decreasing Average Cost and Competition: A New Look at the Addyston Pipe Case, 25 J.L. & Econ. 201 (1982); George Bittlingmayer, Price-Fixing and the Addyston Pipe Case, 5 Res. L. & Econ. 57 (1983); George Bittlingmayer, Did Antitrust Policy Cause the Great Merger Wave?, 28 J.L. & Econ. 77 (1985).

[190] Bittlingmayer, The Economic Problem of Fixed Costs, supra note 189, at 751.

[191] See Thomas M. Jorde & David J. Teece, Antitrust Policy and Innovation: Taking Account of Performance Competition and Competitor Cooperation, 147 J. Inst’l & Theoretical Econ. 118, 120 (1991) (“At minimum, we would propose that when the promotion of static consumer welfare and innovation are in conflict, the courts and administrative agencies should favor innovation. Adopting dynamic competition and innovation as the goal of antitrust would, in our view, serve consumer welfare over time more assuredly than would the current focus on short-run consumer welfare.”).

[192] J. Gregory Sidak & David J. Teece, Dynamic Competition in Antitrust Law, 5 J. Competition L. & Econ. 581, 604 (2009).

[193] The following (through the text accompanying note 200) draws substantially from Geoffrey A. Manne, Against the Vertical Discrimination Presumption, Concurrences N° 2-2020 (2020).

[194] See, inter alia, Jonathan M. Barnett, The Host’s Dilemma: Strategic Forfeiture in Platform Markets for Informational Goods, 124 Harv. L. Rev. 1861 (2011); David J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 Res. Pol’y 285 (1986); Andrei Hagiu & Kevin Boudreau, Platform Rules: Multi-Sided Platforms As Regulators, in Platforms, Markets and Innovation (Annabelle Gawer, ed. 2009); Kevin Boudreau, Open Platform Strategies and Innovation: Granting Access vs. Devolving Control, 56 Mgmt. Sci. 1849 (2010).

[195] See generally John M. Yun, Overview of Network Effects & Platforms in Digital Markets, in The GAI Report on the Digital Economy (2020); and Michael Salinger, Self-Preferencing, in The GAI Report on the Digital Economy (2020).

[196] See Hagiu & Boudreau, Platform Rules, supra note 194; Barnett, The Host’s Dilemma, supra note 194.

[197] See, e.g., Oliver E. Williamson, The Vertical Integration of Production: Market Failure Considerations, 61 Am. Econ. Rev. 112 (1971); Benjamin Klein, Asset Specificity and Holdups, in The Elgar Companion to Transaction Cost Economics 120-26 (Peter G. Klein & Michael Sykuta, eds., 2010).

[198] Commission Decision No. AT.39740 (Google Search (Shopping)) at ¶¶ 591-607.

[199] Id. at ¶ 332.

[200] As one online marketing/SEO expert put it: “counting on search engine traffic as your primary traffic source is a bit foolish to say the least. . . .” See Ana Hoffman, Where Does Website Traffic Come From: Search Engine and Referral Traffic, Traffic Generation Café (Mar. 12, 2018), https://trafficgenerationcafe.com/website-traffic-source-search-engine-referral/.

[201] Barnett, The Host’s Dilemma, supra note 194, at 1890.

[202] See Sidak & Teece, Dynamic Competition, supra note 192, at 611 (“Simple rules based on static analysis may well produce policy actions and judicial decisions that impede competition. In particular, policymakers should de-emphasize concentration analysis.”).

[203] See, e.g., Harold Demsetz, The Intensity and Dimensionality of Competition, in Harold Demsetz, The Economics of the Business Firm: Seven Critical Commentaries 137, 140-41 (1995) (“Once perfect knowledge of technology and price is abandoned, [competitive intensity] may increase, decrease, or remain unchanged as the number of firms in the market is increased. . . . [I]t is presumptuous to conclude . . . that markets populated by fewer firms perform less well or offer competition that is less intense.”).

[204] See supra notes 136 to 162 and accompanying text.

[205] See, e.g., Giulio Federico, Fiona Scott Morton & Carl Shapiro, Antitrust and Innovation: Welcoming and Protecting Disruption, in 20 Innovation Policy and the Economy 125 (Josh Lerner and Scott Stern eds., 2020).

[206] Id. at 135-36.

[207] Id. at 136.

[208] See Sidak & Teece, Dynamic Competition, supra note 192, at 585.

[209] Id. at 589.

[210] See Easterbrook, Limits, supra note 26, at 12-13 (“When everything is relevant, nothing is dispositive. Any one factor might or might not outweigh another, or all of the others, in the factfinder’s contemplation. The formulation [of the Rule of Reason] offers no help to businesses planning their conduct. . . . Litigation costs are the product of vague rules combined with high stakes, and nowhere is that combination more deadly than in antitrust litigation under the Rule of Reason.”).

[211] Whinston, Lectures, supra note 144, at 18-19.

[212] See, e.g., Ben Bernanke, Irreversibility, Uncertainty and Cyclical Investment, 98 Q. J. Econ. 85 (1983); Avinash Dixit, Entry and Exit Decisions Under Uncertainty, 97 J. Pol. Econ. 620 (1989); Robert S. Pindyck, Irreversibility, Uncertainty, and Investment, 29 J. Econ. Lit. 1110 (1991); Avinash Dixit & Robert S. Pindyck, Investment Under Uncertainty (Princeton U. Press 1994); Ricardo J. Caballero & Robert S. Pindyck, Uncertainty, Investment, and Industry Evolution, 37 Int’l Econ. Rev. 641 (1996); Nicholas Bloom et al., Uncertainty and Investment Dynamics, 74 Rev. Econ. Stud. 391 (2007); Nicholas Bloom, The Impact of Uncertainty Shocks, 77 Econometrica 623 (2009).

[213] See generally Steven J. Davis, Regulatory Complexity and Policy Uncertainty: Headwinds of Our Own Making (Becker Friedman Inst. for Rsrch. in Econ. Working Paper No. 2723980, 2017), https://ssrn.com/abstract=2723980.

[214] Jerry Ellig, et al., Economics at the FCC 2017-2018: International Broadband Pricing Comparisons, and a New Office of Economics and Analytics, 53 Rev. Indus. Org. 681, 689-90 (2018) (emphasis added), https://doi.org/10.1007/s11151-018-9672-6.

[215] Scott R. Baker, Nicholas Bloom & Steven J. Davis, Has Economic Policy Uncertainty Hampered the Recovery?, in Government Policies and the Delayed Economic Recovery (Lee E. Ohanian, John B. Taylor & Ian J. Wright eds., 2012), prepublication draft available at https://www.semanticscholar.org/paper/Has-Economic-Policy-Uncertainty-Hampered-the-Baker-Bloom/0eb2f1ae9e2b5693043d13ef0b44036fe36d2165.

[216] Ginsburg & Wright, supra note 8, at 14.

[217] Id. at 15.

[218] Id. at 20 (“[W]e all know that static analysis has significant limitations; the future rarely turns out looking like the present, and straight-line projections from the recent past through to the future give only the illusion of foresight.”).

[219] Id. at 15.

[220] Ramsi A. Woodcock, The Hidden Rules of a Modest Antitrust, Minn. L. Rev. (forthcoming 2021) (working paper at 9-10), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2896453.

[221] See supra, Section I.C.

[222] See Manne & Stout, Evolution, supra note 1.

[223] Lindsey M. Edwards & Joshua D. Wright, The Death of Antitrust Safe Harbors: Causes and Consequences, 23 Geo. Mason L. Rev. 1205, 1223 (2016).

[224] Frank H. Easterbrook, Predatory Strategies and Counterstrategies, 48 U. Chi. L. Rev. 263, 335 (1981).

[225] Broadcast Music, Inc. v. Columbia Broadcasting Sys., Inc., 441 U.S. 1, 19-20 (1979).

[226] Easterbrook, Predatory Strategies, supra note 224, at 335.

[227] Jefferson Parish Hospital Dist. No. 2 v. Hyde, 466 U.S. 2, 16 n. 25 (1984).

[228] Meese, Price Theory, supra note 127, at 93.

[229] Id. (“A conclusion that a particular class of restraint is unlawful per se rests upon a determination that a thoroughgoing examination of the reasonableness of such restraints will always or almost always result in a conclusion that they exercise or create market power and thus restrain competition (rivalry) unduly. In this way, per se rules replicate the result that full blown analysis would produce while at the same time avoiding the administrative costs of such an inquiry.”).

[230] Continental T.V., Inc. v. GTE Sylvania Inc., 433 U.S. 36, 50 (1977).

[231] Leegin Creative Leather Prod., Inc. v. PSKS, Inc., 551 U.S. 877, 886-87 (2007) (omission in original; citation omitted).

[232] Broadcast Music, Inc. v. Columbia Broadcasting System, Inc., 441 U.S. 1, 9 (1979).

[233] Cal. Dental Ass’n v. FTC, 526 U.S. 756, 771 (1999).

[234] Cal. Dental, 526 U.S. at 781. See also Ehrlich & Posner, supra note 17, at 266 (“Initially a particular type of case is decided under a general standard which permits a broad-ranging factual inquiry. Successive decisions convey information about how such cases should be decided. A point is eventually reached at which the additional information imparted by another decision under the standard is not worth the additional costs. . . of decision by standard as compared to decision by rule. So a rule is adopted, based on the information previously obtained, to control subsequent decisions.”).

[235] Broadcast Music, 441 U.S. at 23-24. See also In re Sulfuric Acid Antitrust Litig., 703 F.3d 1004, 1011-12 (7th Cir. 2012) (“[i]t is a bad idea to subject a novel way of doing business (or an old way in a new and previously unexamined context. . .) to per se treatment”); United States v. Microsoft Corp., 253 F.3d 34, 84, 89 (D.C. Cir. 2001) (refusing to apply the per se rule to “tying arrangements involving platform software products” because they were an entirely “novel categor[y] of dealings”).

[236] Meese, Price Theory, supra note 127, at 124. On the “inhospitality tradition” and its problematic consequences generally, see id. at 124-34; Oliver E. Williamson, The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting 19, 370-73 (1985). For one of the paradigmatic cases espousing this tradition, see United States v. Topco Associates, 405 U.S. 596 (1972).

[237] See, e.g., Oliver E. Williamson, The Economics of Organization: The Transaction Cost Approach, 87 Am. J. Soc. 548 (1981).

[238] Meese, Price Theory, supra note 127.

[239] As the Court noted in Leegin Creative Leather Prods. v. PSKS, Inc., 551 U.S. 877, 887 (2007), “as we have stated, a ‘departure from the rule-of-reason standard must be based upon demonstrable economic effect rather than. . . upon formalistic line drawing.’” Cases in which the per se rule was abandoned include Cont’l T.V. v. GTE Sylvania Inc., 433 U.S. 36 (1977) (holding nonprice vertical restraints imposed on dealers no longer per se unlawful); Broad. Music Inc. v. Columbia Broad. Sys., 441 U.S. 1 (1979) (holding that blanket licenses, though a form of literal price fixing among competitors, are not per se unlawful); State Oil v. Khan, 522 U.S. 3 (1997) (applying the rule of reason to maximum resale price maintenance); Leegin, 551 U.S. at 877 (holding minimum resale price maintenance subject to the rule of reason).

[240] See Cal. Dental Ass’n v. FTC, 526 U.S. 756, 771 (1999).

[241] Id. at 779 (citations omitted) (emphasis removed).

[242] Verizon Commc’ns Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398, 408 (2004).

[243] Nw. Wholesale Stationers, Inc. v. Pac. Stationery & Printing Co., 472 U.S. 284, 294-96 (1985) (emphasis added); accord Cal. Dental, 526 U.S. at 771.

[244] Brunswick Corp. v. Pueblo Bowl-O-Mat, Inc., 429 U.S. 477, 488 (1977) (citing Brown Shoe Co. v. United States, 370 U.S. 294, 320 (1962)). Brunswick was decided in the context of Section 4 of the Clayton Act. But the Court has subsequently held the antitrust injury limitation in Brunswick to apply in Sherman Act and other antitrust cases, as well. See Blue Shield of Va. v. McCready, 457 U.S. 465 (1982) (applying the antitrust injury rule to a claim brought under the Sherman Act); Atl. Richfield Co. v. USA Petroleum Co., 495 U.S. 328 (1990) (imposing the antitrust injury requirement on every private antitrust case, irrespective of the statutory source of liability).

[245] Atl. Richfield Co. v. USA Petroleum Co., 495 U.S. 328, 344 (1990). See also Gatt Commc’ns Inc. v. PMC Assocs., 711 F.3d 68, 76 (2d Cir. 2013) (“It is not enough for the actual injury to be ‘causally linked’ to the asserted violation.”).

[246] Manne & Stout, Evolution, supra note 1, at 437.

[247] Gregory J. Werden, Why (Ever) Define Markets? An Answer to Professor Kaplow, 78 Antitrust L.J. 729, 741 (2013).

[248] Manne, In Defence of the Supreme Court’s ‘Single Market’ Definition, supra note 54, at 106.

[249] Ohio v. Am. Express Co., 138 S. Ct. 2274, 2287 (2018).

[250] For a discussion of the market definition controversy in Amex, see Manne, supra note 248; Joshua D. Wright & John M. Yun, Burdens and Balancing in Multisided Markets: The First Principles Approach of Ohio v. American Express, 54 Rev. Indus. Org. 717 (2019).

[251] Ginsburg & Wright, supra note 8, at 4.

[252] Pinar Akman, The Theory of Abuse in Google Search: A Positive and Normative Assessment Under EU Competition Law, 2017 J. L. Tech. & Pol’y 301, 369 (citing Louis Kaplow, Why (Ever) Define Markets?, 124 Harv. L. Rev. 437 (2010)).

[253] Thomas M. Jorde & David J. Teece, Innovation, Dynamic Competition, and Antitrust Policy, Regulation (Fall 1990) at 37-38.

[254] European Commission, Commission Notice on the Definition of Relevant Market for the Purposes of Community Competition Law, OJ C 372, 9.12.1997, http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex%3A31997Y1209(01).

[255] Dissenting Statement of Commissioner Joshua D. Wright, In the Matter of Nielsen Holdings N.V. and Arbitron Inc., FTC File No. 131-0058 (Sep. 20, 2013) at 3. As then-Commissioner Wright further points out in a related footnote:

The link between market structure and incentives to innovate remains inconclusive. See, e.g., [Ginsburg & Wright, supra note 8,] at 4-5 (“To this day, the complex relationship between static product market competition and the incentive to innovate is not well understood.”); Richard J. Gilbert, Competition and Innovation, in 1 ABA Section of Antitrust Law, Issues in Competition Law and Policy 577, 583 (W. Dale Collins ed., 2008) (“[E]conomic theory does not provide unambiguous support either for the view that market power generally threatens innovation by lowering the return to innovative efforts nor the Schumpeterian view that concentrated markets generally promote innovation.”).

Id. at n. 7. See also John M. Yun, Potential Competition, Nascent Competitors, and Killer Acquisitions, in The GAI Report on the Digital Economy (2020).

[256] Sidak & Teece, supra note 192, at 617.

[257] Id. at 625 (“The SSNIP test focuses on consumer substitution. Supply substitution (including entry) is not considered until after market shares are calculated solely on the basis of the static, consumer-oriented market definition. One can dispute whether that approach is good economics; as a matter of law, however, the static approach [is] the law.”).

[258] U.S. Dep’t of Justice & Fed. Trade Comm’n, Horizontal Merger Guidelines (2010), http://www.justice.gov/atr/public/guidelines/hmg2010.pdf.

[259] See Jorde & Teece, supra note 174.
