Thursday, June 23, 2016

Cuozzo v. Lee and the Potential for Patent Law Deference Mistakes

I wrote a short post on Monday's decision in Cuozzo v. Lee for Stanford's Legal Aggregate blog, which I'm reposting here. My co-blogger Michael Risch has already posted his initial reactions to the opinion on Monday, and he also wrote about deference mistakes in the context of the "broadest reasonable interpretation" standard in an earlier article, The Failure of Public Notice in Patent Prosecution.

The Federal Circuit's patent law losing streak was broken Monday with the Supreme Court's decision in Cuozzo v. Lee. At issue were two provisions of the 2011 America Invents Act related to the new "inter partes review" (IPR) procedure for challenging granted patents before the Patent and Trademark Office. IPR proceedings have been melodramatically termed "death squads" for patents—only 14% of patents that have been through completed trials have emerged unscathed—but the Supreme Court dashed patent owners' hopes by upholding the status quo. Patent commentators are divided on whether the ease of invalidating patents through IPR spurs or hinders innovation, but I have a more subtle concern: the Supreme Court's affirmance means that the PTO and the courts will evaluate the validity of granted patents under different standards of review and different interpretive rules, providing ample possibilities for what Prof. Jonathan Masur and I have termed "deference mistakes" if decisionmakers aren't careful about distinguishing them.

Monday, June 20, 2016

Cuozzo: So Right, Yet So Wrong

The Supreme Court issued its basically unanimous opinion in Cuozzo today. I won't give a lot of background here; anyone taking the time to read this likely understands the issues. The gist of the ruling is this: USPTO institution decisions in inter partes review (IPR) are unappealable, and the PTO can set the claim construction rules for IPRs, and thus the current broadest reasonable construction rule will surely remain unchanged.

I have just a few thoughts on the ruling, which I'll discuss here briefly.

First, the unappealability ruling seems right to me. That is, what part of "final and non-appealable" do we not understand? Of course, this reading prompts a partial dissent, which argues that the statute bars only interlocutory appeals and that you can still appeal upon a final disposition. But that's just a statutory interpretation difference in my book. I'm not a general admin law expert, but the core of the reading - that Congress can give the right to institute a proceeding and make it unreviewable, so long as the outcome of the proceeding is reviewable - seems well within the range of rationality here.

But, even so, the ruling is unpalatable based on what I know about some of the decisions that have been made by the PTO. (Side note: my student won the NYIPLA writing competition with a paper discussing this issue.) The Court dismisses the patentee's complaint that the PTO might institute review of claims that weren't even challenged in the petition as simply quibbling with the particularity of the petition and not raising any constitutional issue. This is troublesome, and it sure doesn't ring true in light of Twiqbal.

Second, the broadest reasonable construction ruling seems entirely, well, broadly reasonable. The PTO uses that method already in assessing claims, and it has wide discretion in the procedures it uses to determine patentability. Of course the PTO can do this.

But, still, it's so wrong. The Court understates, I believe, the difficulty of obtaining amendments during IPR. The Court also points to the opportunity to amend during the initial prosecution; of course, the art in the IPR is newly being applied, so it is not as if the BRC rule had been used in prosecution to narrow the claims in light of that art. And narrowing is the entire point of the rule: claims are read broadly to invalidate them, so that they may be narrowed during prosecution. But this goal often fails, as I wrote in my job talk article, The Failure of Public Notice in Patent Prosecution, in which I suggested dumping the BRC rule about 10 years ago.

Whatever the merits of the BRC rule in prosecution, they are lost in IPR, where the goal is to test a patent for validity, not to engage in an iterative process of narrowing the claims with an examiner. I think more liberal allowance of amendments (which is happening a bit) would solve some of the problems of the rule in IPRs.

Thus, my takeaway is a simple one: sometimes the law doesn't line up with preferred policy. It's something you see on the Supreme Court a lot. See, e.g., Justice Sotomayor's dissent today in Utah v. Strieff.

Thursday, June 16, 2016

Halo v. Pulse and the Increased Risks of Reading Patents

I wrote a short post on Monday's decision in Halo v. Pulse for Stanford's Legal Aggregate blog, which I'm reposting here.

The Supreme Court just made it easier for patent plaintiffs to get enhanced damages—but perhaps at the cost of limiting the teaching benefit patents can provide to other researchers. Chief Justice Roberts's opinion in Halo v. Pulse marks yet another case in which the Supreme Court has unanimously rejected the Federal Circuit's efforts to create clearer rules for patent litigants. Unlike most other Supreme Court patent decisions over the past decade, however, Halo v. Pulse serves to strengthen rather than weaken patent rights.

Patent plaintiffs typically may recover only their lost profits or a “reasonable royalty” to compensate for the infringement, but § 284 of the Patent Act states that “the court may increase the damages up to three times the amount found or assessed.” In the absence of statutory guidance on when the court may award these enhanced damages, the Federal Circuit created a two-part test in its 2007 en banc Seagate opinion, holding that the patentee must show both “objective recklessness” and “subjective knowledge” on the part of the infringer. The Supreme Court has now replaced this “unduly rigid” rule with a more uncertain standard, holding that district courts have wide discretion “to punish the full range of culpable behavior” though “such punishment should generally be reserved for egregious cases.”

Monday, June 13, 2016

On Empirical Studies of Judicial Opinions

I've always found it odd that we (and I include myself in this category) perform empirical studies of outcomes in judicial cases. There's plenty to be gleaned from studying the internals of opinions - citation analysis, judge voting, issue handling, etc. - but outcomes are what they are; studying them should simply be a matter of tallying up what happened. Further, modeling those outcomes on the internals becomes the realest of realist pursuits.

And, yet, we undertake the effort, in large part because someone has to. Otherwise, we have no idea what is happening out there in the real world of litigation (and yes, I know there are detractors who say that even this isn't sufficient to describe reality because of selection effects).

But as data has become easier to come by, these studies have become easier to do. When I started gathering data for Patent Troll Myths in 2009, there was literally no publicly aggregated data about NPE activity. By the time my third article in the series, The Layered Patent System, hit the presses last month (it had been on SSRN for 16 months, mind you), there was a veritable cottage industry of litigation reporting - studies published by my IP colleagues at other schools, annual reports by firms, etc.

Even so, they all measure things differently, even when they are measuring the same thing. This is where Jason Rantanen's new paper comes in. It's called Empirical Analyses of Judicial Opinions: Methodology, Metrics and the Federal Circuit, and the abstract follows:

Despite the popularity of empirical studies of the Federal Circuit’s patent law decisions, a comprehensive picture of those decisions has only recently begun to emerge. Historically, the literature has largely consisted of individual studies that provide just a narrow slice of quantitative data relating to a specific patent law doctrine. Even studies that take a more holistic approach to the Federal Circuit’s jurisprudence primarily focus on their own results and address only briefly the findings of other studies. While recent developments in the field hold great promise, one important but yet unexplored dimension is the use of multiple studies to form a complete and rigorously supported understanding of particular attributes of the court’s decisions.

Drawing upon the empirical literature as a whole, this Article examines the degree to which the reported data can be considered in collective terms. It focuses specifically on the rates at which the Federal Circuit reverses lower tribunals — a subject whose importance is likely to continue to grow as scholars, judges, and practitioners attempt to ascertain the impact of the Supreme Court’s recent decisions addressing the standard of review applied by the Federal Circuit, including in the highly contentious area of claim construction. The existence of multiple studies purportedly measuring the same thing should give a sense of the degree to which researchers can measure that attribute.

Surprisingly, as this examination reveals, there is often substantial variation of reported results within the empirical literature, even when the same parameter is measured. Such variation presents a substantial hurdle to meaningful use of metrics such as reversal rates. This article explores the sources of this variability, assesses its impact on the literature and proposes ways for future researchers to ensure that their studies can add meaningful data (as opposed to just noise) to the collective understanding of both reversal rate studies and quantitative studies of appellate jurisprudence more broadly. Although its focus is on the Federal Circuit, a highly studied court, the insights of this Article are applicable to virtually all empirical studies of judicial opinions.
I liked this paper. It provides a very helpful overview of the different methodological choices researchers make that can affect how their empirical "measurement" (read: counting) is done and thus render it inconsistent with other studies. It also provides some suggestions for solving this issue in the future.

My final takeaway is mixed, however. On the one hand, Rantanen is right that the different methodologies make it hard to combine studies to get a complete picture. More consistent measures would be helpful. On the other hand, many folks count the way they do because they see deficiencies in past methodologies. I know I did. For example, when counting outcomes, I was sure to count how many cases settled without a merits ruling either way (almost all of them). Why? Because "half of patents are invalidated" and "half of the 10% of patents ever challenged are invalidated" are two very different statements.
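To make the denominator point concrete, here is a back-of-the-envelope sketch. The numbers are purely hypothetical, chosen only to illustrate how the two framings diverge:

```python
# Illustrative arithmetic only - hypothetical numbers, not drawn from any study.
total_patents = 10_000
challenged = int(total_patents * 0.10)  # suppose 10% are ever challenged on the merits
invalidated = int(challenged * 0.50)    # and half of those challenges succeed

rate_among_challenged = invalidated / challenged       # the "half are invalidated" framing
rate_among_all_patents = invalidated / total_patents   # the share of all patents

print(f"Invalidated as share of challenged patents: {rate_among_challenged:.0%}")
print(f"Invalidated as share of all patents: {rate_among_all_patents:.0%}")
```

Same count of invalidations, but a 50% figure under one denominator becomes a 5% figure under the other - which is why a study's choice of what to count as the base can matter more than its tallying.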

Thus, I suspect one reason we see inconsistency is that each later researcher has improved on the methodology of those who went before, at least in his or her own mind. If that's true, the only way we get to consistency now is if we are in some sort of "post-experimental" world of counting. And if that's true, then I suspect we won't see multiple studies in the first place (at least not for the same time period). Why bother counting the same thing the same way a second time?

Friday, June 10, 2016

Patent Damages Conference at Texas Law

Numerous patent academics, practitioners, and judges gathered in Austin at the University of Texas School of Law yesterday and today for a conference on patent damages, organized by Prof. John Golden and supported by a gift from Intel. Here's a quick overview of the 12 papers that were presented, the suggestions from the paper commenters, and some notes from the Q&A. (We're following a modified Chatham House Rule in which only statements from academics can be attributed, but it was great having others in the room.)

Jason Bartlett & Jorge Contreras, Interpleader and FRAND Royalties – There is no reason to believe the sum of the bottom-up royalty determinations from FRAND proceedings will be reasonable in terms of the overall value the patents contribute to the standard. To fix this, statutory interpleader should be used to join all patent owners for a particular standard into a single proceeding that starts with a top-down approach. Arti Rai asks whether the bottom-up approach really creates such significant problems. Why can’t courts doing the bottom-up approach look at what prior courts have done? And doesn’t this vary depending on what product you’re talking about? But ultimately, this is a voluntary proposal that individual clients could test out. Doug Melamed notes that even if royalties in individual cases are excessive, standard implementers won't have an incentive to interplead unless their aggregate burden is excessive—and given the large number of "sleeping dog" patents, it's not clear that's true.

Ted Sichelman, Innovation Factors for Reasonable Royalties – Instead of calculating royalties based on the infringer's revenues, let's use the patentee's R&D costs (including related failures and commercialization costs) and award a reasonable rate of return. This would be better aligned with the innovation-focused goals of patent law. Becky Eisenberg notes that it is stunning that patentee costs aren't in the kitchen-sink Georgia-Pacific list, and she thinks the idea of moving toward a cost-based approach more broadly has significant normative appeal, but she doesn't think it's easier to apply (see, e.g., criticisms of the DiMasi estimates of pharmaceutical R&D costs). I think this paper is tapping into the benefits of R&D tax credits as an innovation reward. Daniel Hemel and I have compared the cost-based reward of R&D tax credits with the typical patent reward (in a paper Ted has generously reviewed), and it seems worth thinking more about whether and when it makes sense to move this cost-based reward into the patent system.

Tuesday, June 7, 2016

Does Europe Have Patent Trolls?

There have been countless articles—including in the popular press—about the problems (or lack thereof) with "patent trolls" or "non-practicing entities" (NPEs) or "patent-assertion entities" (PAEs) in the United States. Are PAEs and NPEs a uniquely American phenomenon? Not exactly, says a new book chapter, Patent Assertion Entities in Europe, by Brian Love, Christian Helmers, Fabian Gaessler, and Max Ernicke.

They study all patent suits filed from 2000-2008 in Germany's three busiest courts and most cases filed from 2000-2013 in the UK. They find that PAEs (including failed product companies) account for about 9% of these suits and that NPEs (PAEs plus universities, pre-product startups, individuals, industry consortiums, and IP subsidiaries of product companies) account for about 19%. These are small numbers by U.S. standards, but still significant. Most European PAE suits involve computer and telecom technologies. Compared with the United States, more PAE suits are initiated by the alleged infringer, fewer suits involve validity challenges, fewer suits settle, and more suits involve patentee wins.

Many explanations have been offered for the comparative rarity of PAE suits in Europe, including higher barriers to patenting software, higher enforcement costs, cheaper defense costs, smaller damages awards, and more frequent attorney's fee awards. The authors think their "data suggests that each explanation plays a role," but that "the European practice of routinely awarding attorney's fees stands out the most as a key reason why PAEs tend to avoid Europe."

Tuesday, May 31, 2016

Rachel Sachs: Prizing Insurance

If anyone is looking for a clear and comprehensive review of the ways in which patents can distort investment in innovation, as well as a summary of the literature on incentives "beyond IP," I highly recommend Rachel Sachs' new article Prizing Insurance: Prescription Drug Insurance as Innovation Incentive, forthcoming in the Harvard Journal of Law & Technology. Sachs' article is specific to the pharmaceutical industry but is very useful for anyone writing on the general topics of non-patent alternatives and patent-caused distortion of innovation. Sachs follows in the footsteps of IP scholars like Amy Kapczynski, Rebecca Eisenberg, Nicholson Price, Arti Rai, and Ben Roin, and draws on a plentiful literature in the health law field that IP scholars may never see. Her analysis is far more detailed and sophisticated than this brief summary. Read more at the jump.

Friday, May 27, 2016

Thoughts on Google's Fair Use Win in Oracle v. Google

It seems like I write a blog post about Oracle v. Google every two years. My last one was on May 9, 2014, so the time seems right (and a fair use jury verdict indicates now or never). It turns out that I really like what I said last time, so I'm going to reprint what I wrote a couple years ago at the bottom. Nothing has changed about my views of the law and of what the Federal Circuit ruled.

So, this was a big win for Google, especially given the damages Oracle was seeking. But it was a costly win. It was expensive to have a trial, and it was particularly expensive to have this trial. But it is also costly because it leaves so little answered: what happens the next time someone wants to do what Google did? I don't know. Quite frankly, I don't know how often people make compatible programs already, how many were holding back, or how many will be deterred.

Google did this a long time ago thinking it was legal. How many others have done similar work but haven't been sued? Given how long it has been since Lotus v. Borland quieted things, has the status quo changed at all? My thoughts after the jump.

Thursday, May 19, 2016

Galasso & Schankerman on the Effect of Patent Invalidation on Subsequent Innovation by the Patentee

In a paper previously featured on this blog, economists Alberto Galasso (University of Toronto) and Mark Schankerman (London School of Economics) pioneered the use of effectively random Federal Circuit panel assignments as an instrumental variable for patent invalidation. That paper looked at the effect of invalidation on citations to the patent; they now have a new paper, Patent Rights and Innovation by Small and Large Firms, examining the effect of invalidation on subsequent innovation by the patent holder. They summarize their results as follows:
Patent invalidation leads to a 50 percent decrease in patenting by the patent holder, on average, but the impact depends critically on characteristics of the patentee and the competitive environment. The effect is entirely driven by small innovative firms in technology fields where they face many large incumbents. Invalidation of patents held by large firms does not change the intensity of their innovation but shifts the technological direction of their subsequent patenting.
Their measure of post-invalidation patenting is the number of applications filed by the patent owner in a 5-year window after the Federal Circuit decision. They also present results suggesting that large firms tend to redirect their research efforts after invalidation of a non-core patent (but not for a core patent), whereas "the loss of a patent leads small firms to reduce innovation across the board, rather than to redirect it." (A "core" patent is one whose two-digit technology field accounts for at least 2/3 of the firm's patenting.)

This is a rich paper with many, many results and nuances and caveats—highly recommended for anyone interested in patent empirics.

Monday, May 16, 2016

Rules, Standards, and Change in the Patent System (Keynote Speech Transcript)

Last weekend I was honored to give the keynote speech at the Giles S. Rich Inn of Court annual dinner held at the Supreme Court. It was a great time, and I met many judges, lawyers, clerks, and consultants that I had not met before.

Several people asked me what I planned to discuss, so I thought I would post a (very lightly edited) transcription of my talk. I'll note that the kind words I mention at the beginning refer to my introduction, given by Judge Taranto, which really was too kind and generous by at least half.

The text after the jump.

Wednesday, May 11, 2016

Buccafusco, Heald & Bu: Do Pornographic Knock-offs Tarnish the Original Work?

Trademark law provides a remedy against "dilution by tarnishment of [a] famous mark" and the extension of copyright term was justified in part by concerns about tarnishment if Mickey Mouse fell into the public domain. But there has been little evidence of what harm (if any) trademark and copyright owners suffer due to unwholesome uses of their works. Chris Buccafusco, Paul Heald, and Wen Bu provide some new experimental evidence on this question in their new article, Testing Tarnishment in Trademark and Copyright Law: The Effect of Pornographic Versions of Protected Marks and Works. In short, they exposed over 1000 MTurk subjects to posters of pornographic versions of popular movies and measured perceptions of the targeted movie. They "find little evidence of tarnishment, except for among the most conservative subjects, and some significant evidence of enhanced consumer preferences for the 'tarnished' movies."

Before describing the experiments, their article begins with a thorough review of tarnishment theory and doctrine, as well as consumer psychology literature on the role of sex in advertising. For both experiments, subjects were shown numerous pairs of movie posters, and were asked questions like which movie a theater should show to maximize profits. In the first experiment, treatment subjects saw a poster for a pornographic version of one of the movies; e.g., before comparing Titanic vs. Good Will Hunting, treatment subjects had to compare the porn parody Bi-Tanic vs. another porn movie. Overall, control subjects chose the target movie (e.g., Titanic) 53% of the time, whereas treatment subjects who saw the porn poster (e.g., Bi-Tanic) chose the target movie 58% of the time, and this increase was statistically significant. Women were no less affected by the pornographic "tarnishment" than men, and familiarity with the target movie did not have any consistent effect.
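For readers curious what a significance claim on proportions like these amounts to, here is a minimal two-proportion z-test sketch. The observation counts below are hypothetical stand-ins (subjects each made many poster comparisons, so observations exceed subject counts), and the paper's actual sample sizes and analysis may well differ:

```python
import math

# Hypothetical observation counts - NOT the paper's actual data.
n_control, hits_control = 2000, 1060  # 53% of control comparisons chose the target movie
n_treat, hits_treat = 2000, 1160      # 58% among those who saw the porn-parody poster

p1, p2 = hits_control / n_control, hits_treat / n_treat
p_pool = (hits_control + hits_treat) / (n_control + n_treat)  # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_treat))
z = (p2 - p1) / se

# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```

With group sizes of this order, a five-point gap in proportions is comfortably significant; with much smaller samples the same gap would not be, which is why the underlying observation counts matter for reading results like these.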

Sunday, May 8, 2016

Jotwell Post: Is It Time To Overrule the Trademark Classification Scheme?

As I've noted before, Jotwell is a great way to keep up with interesting recent scholarship in IP and other areas of law. My latest Jotwell review, of Jake Linford's Are Trademarks Ever Fanciful?, was just published on Friday. As I describe in the post, this is the latest in an impressive trifecta of recent articles that have attacked the Abercrombie spectrum for word marks from all sides. The full review is available here.

Tuesday, May 3, 2016

[with Colleen Chien] Recap of the Berkeley Software IP Symposium

Slides and papers from the 20th Annual Berkeley Center for Law and Technology/Berkeley Technology Law Journal Symposium - focused on IP and software are now posted. Colleen Chien and I thought we would discuss a few highlights (with some commentary sprinkled in):

David Hayes' opening keynote on the history of software and IP was terrific. The general tenor was that copyright rose and fell, with a lot of uncertainty in between. Just as copyright fell, patent rose, and is now falling, with a lot of uncertainty in between. And trade secret law has remained generally steady throughout. David has long been the Chair of the Intellectual Property Group of Fenwick & West - former home to USPTO Director Michelle Lee, as well as IP professors Brenda Simon, Steve Yelderman, and Colleen Chien - and is one of the wisest and most experienced IP counselors in the valley. (Relatedly, Michael Risch's former firm was founded by former Fenwick & West lawyers.)

Peter Menell's masterful presentation on copyright and software spanned decades and ended with a Star Wars message, "May the Fair Use Be With You."

Randall Picker took a different view of copyright and software, focusing instead on whether reuse was simply an add-on/clone or a new platform/core product. Thus, he thought Sega v. Accolade came out wrong because allowing fair use for an unlicensed game undermined the discount pricing for game consoles, but thought Whelan v. Jaslow (a case nearly everyone hates) came out properly because the infringing software was a me-too clone. Borland, on the other hand, created a whole new spreadsheet program to create competition. In related work, Risch published "How can Whelan v. Jaslow and Lotus v. Borland Both be Right?" some 15 years ago.

Felix Wu presented an interesting talk about how the copyright "abstraction-filtration-comparison" test might be used to determine the meaning of "means plus function" claims in patent law.

MIT's Randall Davis's "technical talk" explained how software is made and how abstractions are the essence of software. It's turtles all the way down: one level that seems concrete is merely an abstraction when viewed from the level below. The challenge, it seems, is that on this view "abstract" can describe almost anything.

Rob Merges further discussed how we might define abstract. His suggestion was to treat "abstract" as the opposite of concrete and definite. Thus, patents would need to be far more detailed than many that are being rejected now, but such a standard might be clearer to apply.

Arti Rai discussed a similar solution, noting that claims at lower levels of abstraction were more likely to be affirmed. Furthermore, solutions to computer-specific problems seem to hold a key. Rai and Merges should be posting papers on these topics soon.

Kevin Collins presented a draft paper on Williamson v. Citrix Online. He posited that Williamson would present difficult challenges for courts trying to determine structure - including structure that's supposedly present in the claim. He presented some ideas about how to think about solutions to the problem.

Similarly, Lee Van Pelt showed some difficulties with Williamson (including Williamson itself) in practice.

Michael Risch's talk and paper pick up where Hayes left off, with the fall of patents. The paper explores whether, in the wake of the trouble software patents are in, developers might turn to trade secret law to protect visible features, and what the implications might be. It turns out that less than a week after the conference, a software company won a $940m jury verdict on exactly this theory.

Colleen Chien's talk explored how much IP and its default allotments matter if software is eating the world (H/T Marc Andreessen) - a world where contract is king and monopolies come from data, network effects, scale (a la Thiel), and possibly winner-take-all dynamics (as discussed on Mike Masnick's recent podcast), rather than from patents and copyrights. She presented early results and an early draft paper from an analysis of ~2000 technology agreements and some 30k sales involving software, finding evidence of both technology and liability transfers.

Aaron Perzanowski's presentation and forthcoming book with Jason Schultz suggest that perhaps the IoT should be known as IoThings-We-Don't-Own.

Relatedly, John Duffy addressed the first sale doctrine and presented his recent paper with Richard Hynes that shows how commercial law ties to and explains how exhaustion should work. This is relevant to the Federal Circuit's recent decision in the Lexmark case on international exhaustion.

The second-day lunchtime keynote speaker, William Raduchel, talked about the importance of culture to innovation and IP. As Mark Zuckerberg mentioned on an investor call, Facebook develops openly (some of its IT infrastructure and non-core innovation, at least) because that's what its developers demand and need to get the job done. He also discussed how "deep learning" may change how we consider IP, because computers will now be writing the code that produces creative and inventive output.

The empirical panel provided a helpful overview of recent studies. Pam Samuelson's talk highlighted changes in the software industry - particularly the growth of software as a service (SaaS), the cloud, the app market, the IoT, and embedded software - as well as changes in the software IP protection landscape since the Berkeley Patent Survey was carried out in 2007. Samuelson also discussed how recent invalidations of algorithm and data structure patents will affect copyright. If those features are too abstract for patenting, then we should consider whether they are too abstract for copyright protection, even if they might be expressed in multiple ways. (NB: a return to the old Baker v. Selden conundrum: bookkeeping systems are the province of patents, not copyrights. But can you patent a bookkeeping system? Maybe a long time ago, but surely not today.)

John Allison gave an overview of what we know (empirically) about software patents. And the chief IP officers panel was a highlight, as each person had a different perspective on the system based on his or her company's position - though they did agree on a few basics, such as the need for some way to appropriate investments and the preference for clear lines.

There is much more at the link to the symposium, including slides, drafts, and past (but relevant) papers. It's well worth a look! TAP is also running a seven-part series on the conference, starting with this overview of David Hayes' talk.

Saturday, April 30, 2016

A Trend at PatCon: Regulating Patents Earlier

I was thrilled to attend PatCon 6 this year at Boston College, thanks to this year's host David Olson, along with Andrew Torrance and David Schwartz. I noted a trend in several of the papers: many sought, either explicitly or implicitly, to shift back the timeline in which bad patents are kicked out of the system or in which low-quality patent assertions are halted - presumably in order to avoid the high costs of litigation and the imposition on consumers and downstream innovation. I suppose we can think of this as "ex ante" versus "ex post" regulation of patents. In theory, this is a way to limit the toxic effects of low-quality patents, and low-quality patent assertions, on the system. Read more at the jump.

Thursday, April 28, 2016

Jonathan Masur on Improving Cost-Benefit Analysis at the PTO

Federal agencies are required to use cost-benefit analysis (CBA) for all "economically significant" regulations—those with an impact of at least $100 million. Given the economic importance of patents, even small procedural changes at the PTO likely cross this threshold. But as Jonathan Masur notes in CBA at the PTO, the PTO regularly promulgates regulations without following CBA procedures. The PTO did deem its recent fee-setting regulations economically significant, and Masur writes that "the PTO deserves commendation for attempting CBA in such a difficult field." Yet the resulting analysis "misunderstands basic precepts of patent economics."

CBA is supposed to measure the social costs and benefits of a proposed regulation, not just the private costs and benefits for the agency and the regulated party. The PTO's analysis for its fee-setting regulation correctly counted PTO operating costs as a real administrative cost. But the only other costs and benefits it considered related to the quantity and speed of patent grants—where more/faster patents were viewed as benefits, and fewer/slower patents were viewed as costs. The PTO viewed a "decrease in successful patent application filings" as a "cost to society." As Masur notes, this approach "improperly conflates the private value of patents to their owners with the value of patents to society at large." (As I've previously discussed, this distinction can also be conflated in academic work.) The private value of a patent—the supracompetitive rents it enables—is a transfer from consumers to the patent owner, not a social gain. If there is a social benefit, it is in the dynamic incentives the patent creates.

In addition to making this "fundamental" error about the benefits of patents, the PTO's CBA "entirely ignores the costs that accompany patents" such as deadweight loss and inhibition of follow-on innovation. The PTO did note that uncertainty about the scope of others' patent rights can inhibit innovation, but this was treated "as a cost created by pending patent applications, as if the cost disappears entirely when the patent is granted"—which Masur notes is "entirely backward." In prior work, Masur has explained that high patent filing fees can serve the benefit of screening out low-value inventions. The PTO's CBA, however, "errs by treating lower fees—and greater numbers of patents—as an unalloyed good."