Tuesday, February 9, 2016

More Cool Lab Experiments on Creativity by Bechtold, Buccafusco & Sprigman

As Chris Sprigman explained in a 2011 Jotwell post, laboratory experiments are largely missing from the legal academy, but they shouldn't be. Experiments can be used to test theories and tease apart effects that can't be measured in the real world. They can explode old hypotheses and generate new ones. Chris Sprigman and Chris Buccafusco and various coauthors have been among those remedying the dearth of experimental work in IP law; e.g., I've previously blogged about a clever study by the Chrises of how people price creative works. (For more on the benefits and drawbacks of work like this, and citations to many other studies, see my Patent Experimentalism article starting at p. 87.)

Most recently, Chris and Chris have teamed up with Stefan Bechtold for a new project, Innovation Heuristics: Experiments on Sequential Creativity in Intellectual Property, which presents results from four new experiments on cumulative innovation/creation that "suggest that creators do not consistently behave the way that economic analysis assumes." (This should not be surprising to those following the behavioral law and economics literature. Or to anyone who lives in the real world.) I briefly summarize their results below.

Thursday, February 4, 2016

An Alternate History of the Web & Copyright Law

I've been enjoying Walter Isaacson's The Innovators, a history of computers and the Internet. As with any book related to innovation, I've been interested in the importance (or non-importance) of patents for different inventors, and in the key role of non-patent government incentives for innovation at different points of computing's history. But the rise of the Internet is of course interesting to IP scholars not only for the technical advance it represented, but also for the effect it had on the copyright markets. So I was particularly struck by a passage about how it all could have turned out differently. Isaacson described a meeting between Tim Berners-Lee, who created the World Wide Web while working at CERN, and Ted Nelson, an earlier hypertext innovator:
Twenty-five years earlier, Nelson had pioneered the concept of a hypertext network with his proposed Xanadu project. It was a pleasant meeting, but Nelson was annoyed that the Web lacked key elements of Xanadu. He believed that a hypertext network should have two-way links, which would require the approval of both the person creating the link and the person whose page was being linked to. Such a system would have the side benefit of enabling micropayments to content producers. "HTML is precisely what we were trying to prevent—ever-breaking links, links going outward only, quotes you can't follow to their origins, no version management, no rights management," Nelson later lamented.
Had Nelson's system of two-way links prevailed, it would have been possible to meter the use of links and allow small automatic payments to accrue to those who produced the content that was used. The entire business of publishing and journalism and blogging would have turned out differently. Producers of digital content could have been compensated in an easy, frictionless manner, permitting a variety of revenue models, including ones that did not depend on being beholden solely to advertisers. Instead the Web became a realm where aggregators could make more money than content producers. Journalists at both big media companies and little blogging sites had fewer options for getting paid. As Jaron Lanier, the author of Who Owns the Future?, has argued, "The whole business of using advertising to fund communication on the Internet is inherently self-destructive. If you have universal backlinks, you have a basis for micropayments from somebody's information that's useful to somebody else." But a system of two-way links and micropayments would have required some central coordination and made it hard for the Web to spread wildly, so Berners-Lee resisted the idea.

Monday, February 1, 2016

Sean O'Connor: What happened to the "art" in "useful arts"?

The constitutional justification for patents and copyrights is "[t]o promote the Progress of Science and useful Arts." In the late eighteenth century, "science" included all knowledge, and "useful arts" referred to technological rather than liberal arts. In The Lost 'Art' of the Patent System, Professor Sean O'Connor argues that although the modern patent system retains some "art"-based terminology—prior art, person having ordinary skill in the art, state of the art—the traditional conception of "art" has largely been displaced by modern conceptions of technology or science. He laments the implications of these developments, such as the increase in "upstream patenting" and a prejudice against non-technological inventions, and he argues that we must "recover the lost 'art' of the patent system."

The primary doctrinal lever O'Connor points to for addressing this issue is the utility requirement. He argues that "its current diluted interpretation (anything that does anything likely has substantial utility) may stem from its separation from the underlying art," and that courts should recognize that utility is, in my co-blogger Michael Risch's words, "A Surprisingly Useful Requirement." It might seem unlikely that courts will revive utility from its current "diluted" form, but commentators probably thought the same thing about patentable subject matter ten years ago. In a 2014 talk at Stanford, Federal Circuit Judge Dyk noted: "Strangely, we don't generally ask whether a utility patent has the utility that is required by the patent statute," and he criticized the patent bar for being "too timid and too lacking in creativity" about raising novel arguments like this. I don't know whether O'Connor's vision of utility is what Judge Dyk had in mind, but there are some parallels between O'Connor's work and Judge Dyk's history-focused concurrence in Bilski (which was cited by the majority and by Justice Stevens's concurring opinion in Bilski, and by Justice Sotomayor's concurrence in Alice). Perhaps creative litigants attempting to breathe more life into utility will receive a warmer reception at the Federal Circuit than they might expect.

Sunday, January 31, 2016

Want to be a Stanford Law Research Fellow in IP Law?

Official announcement and application information here, and also pasted below. We're looking for someone to start this summer, and the application deadline is 2/29.

Research Fellow, Intellectual Property, Stanford Law School

Description: Professor Mark Lemley and Professor Lisa Ouellette are looking for a research fellow with expertise in qualitative or quantitative empirical studies to help with empirical projects related to intellectual property. There will be opportunities to work closely with professors on academic projects and possibly to co-author papers. The research fellow will have the opportunity to enhance their knowledge of IP law.

Friday, January 29, 2016

Planning a patent citation study? Read this first.

Michael's post this morning about how patent citation data has changed over time reminded me of a nice review of the patent citation literature I saw recently by economists Adam Jaffe and Gaétan de Rassenfosse: Patent Citation Data in Social Science Research: Overview and Best Practices. (Unfortunately, you need to be in an academic or government network or otherwise have access to NBER papers to read it for free.) For those who are new to the field, this is a great place to start. In particular, it warns you about some common pitfalls, such as different citation practices across patent offices, changes across time and across technologies, examiner heterogeneity, and strategic effects. I think it understates the importance of recent work by Abrams et al. on why some high-value patents seem to receive few citations, but overall, it seems like a nice overview of the area.

Rethinking Patent Citations

Patent citations are the coin of the realm in the economic analysis of patents. Many studies have used which patents cite which others to infer value, technological relatedness, or other hard-to-observe characteristics of a batch of patents. There are some drawbacks, of course, including recent work that questions the role of citations in calculating value or in predicting patent validity.

But what if citing itself has changed over the years? What if easier access to search engines, strategic behavior, or other factors have changed citing patterns? This would mean that citation analysis from the past might yield different answers than citation analysis today.

This is the question tackled by Jeffrey Kuhn and Kenneth Younge in Patent Citations: An Examination of the Data Generating Process, now on SSRN. Their abstract:
Existing measures of innovation often rely on patent citations to indicate intellectual lineage and impact. We show that the data generating process for patent citations has changed substantially since citation-based measures were validated a decade ago. Today, far more citations are created per patent, and the mean technological similarity between citing and cited patents has fallen significantly. These changes suggest that the use of patent citations for scholarship needs to be re-validated. We develop a novel vector space model to examine the information content of patent citations, and show that methods for sub-setting and/or weighting informative citations can substantially improve the predictive power of patent citation measures.
I haven't read the methods for improving predictive power carefully enough yet to comment on them, so I'll limit my comments to the factual predicate: that citation patterns are changing.

As I read the paper, they find that there is a subset of patents that cite significantly more patents than others, and that those citations are only loosely related to the technology described in the citing patents -- they are filler.

On the one hand, this makes perfect intuitive sense to me, for a variety of reasons. Indeed, in my own study of patents in litigation, I found that more citations were associated with invalidity findings. The conventional wisdom is the contrary: that more backward citations mean the patent is strong, because it surmounted all that prior art. But if the prior art is filler, then there is no reason to expect a validity finding.

On the other hand, I wonder about the word-matching methodology used here. While it's clever, might a lack of overlapping words reflect patentee wordsmithing rather than genuine technological distance? People often think that patent lawyers use complex words to say simple ideas (mechanical interface device = plug). In theory this shouldn't matter if patentees wordsmith at the same rate over time, but if newer patents add filler words in addition to more cited patents, then perhaps the lack of matching words also reflects changes in the data over time.
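To make the concern concrete, here is a toy sketch of the kind of word-overlap comparison a vector space model performs. This is my own illustration, not Kuhn and Younge's actual model (theirs is more sophisticated); the example abstracts and the plain TF-IDF weighting are assumptions:

```python
# Toy illustration of word-overlap similarity between a citing patent and its
# cited references; the abstracts are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

citing = "A mechanical interface device for coupling a power cord to a wall outlet."
cited = [
    "An electrical plug for connecting a power cord to a wall outlet.",  # related art
    "A method of frying donut batter in rotating wire baskets.",         # likely filler
]

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([citing] + cited)

# Similarity of the citing patent (row 0) to each cited patent
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
for text, score in zip(cited, scores):
    print(f"{score:.2f}  {text}")
```

Note that under pure word matching, "mechanical interface device" and "plug" share no words even though they describe the same thing, which is exactly the wordsmithing worry.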

These are just a few thoughts - the data in the paper is both fascinating and illuminating, and there are plenty of nice charts that illustrate it well, along with ideas for better analyzing citations that I think deserve some close attention.

Thursday, January 28, 2016

Christopher Funk: Protecting Trade Secrets in Patent Litigation

What should a court do when attorneys involved in patent litigation get access to the other party's unpatented trade secrets while they are also amending or drafting new patents for their client? Take the facts of In re Deutsche Bank. After being sued for allegedly infringing patents on financial deposit-sweep services, the defendant Deutsche Bank had to reveal under seal significant amounts of confidential information about its allegedly infringing products, including source code and descriptions of its deposit-sweep services. Yet the plaintiff, Island Intellectual Property, was simultaneously in the process of obtaining nineteen more patents covering the same general technology; and those patents were being drafted by the same patent attorneys who were viewing Deutsche Bank's secrets in the course of litigation.

Monday, January 25, 2016

Consent and authorization under the CFAA

James Grimmelmann (Maryland) has posted Consenting to Computer Use on SSRN. It's a short, terrific essay on how we should think about defining authorized access and exceeding authorized access under the CFAA, what I've previously called a very scary statute.

At the heart of the matter is this: how do we know when use of a publicly accessible computer is authorized or when that authorization has been exceeded? Grimmelmann suggests that the question is not as new as it seems; rather than focusing on the behavior of the accused, we should be looking at the consent given by the computer owner. And there's plenty of law, analysis, and philosophy relating to consent. The abstract is here:

The federal Computer Fraud and Abuse Act (CFAA) makes it a crime to “access[] a computer without authorization or exceed[] authorized access.” Courts and commentators have struggled to explain what types of conduct by a computer user are “without authorization.” But this approach is backwards; authorization is not so much a question of what a computer user does, as it is a question of what a computer owner allows.

In other words, authorization under the CFAA is an issue of consent, not conduct; to understand authorization, we need to understand consent. Building on Peter Westen’s taxonomy of consent, I argue that we should distinguish between the factual question of what uses a computer owner manifests her consent to and the legal question of what uses courts will deem her to have consented to. Doing so allows us to distinguish the different kinds of questions presented by different kinds of CFAA cases, and to give clearer and more precise answers to all of them. Some cases require careful fact-finding about what reasonable computer users in the defendant’s position would have known about the owner’s expressed intentions; other cases require frank policy judgments about which kinds of unwanted uses should be considered serious enough to trigger the CFAA.
On the one hand, I thought the analysis was really helpful. It separates legal from factual consent, for example. On the other hand, it does not offer an answer to the conundrum (nor does it pretend to - it is admittedly a first step): in the borderline case, how is a user to know in advance whether a particular action will be consented to?

Grimmelmann moves the ball forward by distinguishing factual consent from legal consent, which can be imposed by law even if factual consent is implicitly or explicitly lacking. But diverging views of what the law should allow (along with zealous prosecutors and no ex ante notice) still leave the CFAA pretty scary in my view.

Thursday, January 21, 2016

A Literature Review of Patenting and Economic History

Petra Moser (NYU Stern) has posted Patents and Innovation in Economic History on SSRN. It is a literature review of economic history papers relating to patents and innovation. In general, I think the prestige market undervalues literature reviews, because who cares that you can summarize what everyone in the field should already know about (or can look up on their own)? In practice, though, I think there is great value in such reviews. First, knowing about articles and having them listed, organized, and discussed are two different things. Second, not everyone is in the field, nor does everyone take the time to look up every article. Even in areas where I consider myself a subject matter expert (to avoid critique, I'll leave out which), a well-done literature review will often turn up at least one writing I was unaware of or frame prior work in a way I hadn't thought of.

And so it is with this draft; the abstract is below. Many different studies are discussed, dating back to the 1950s. They are organized by topic and many are helpfully described. As you would expect, more space is devoted to Moser's work, and the analysis and critique tend to favor her point of view on the evidence (though she does point out some limitations of her own work). To that, my response is: if you don't like the angle or focus, write your own literature review that highlights all the other studies and their viewpoints. Better yet, do some Bayesian analysis!
A strong tradition in economic history, which primarily relies on qualitative evidence and statistical correlations, has emphasized the importance of intellectual property rights in encouraging innovation. Recent improvements in empirical methodology - through the creation of new data sets and advances in identification - challenge this traditional view. These empirical results provide a more nuanced view of the effects of intellectual property, which suggests that, whenever intellectual property rights have been too broad or too strong, they have discouraged innovation. This paper summarizes existing results from this research agenda and presents some open questions.

Tuesday, January 19, 2016

Do Patents Help Startups?

Do patents help startups? I've debated this question many times over the years, and no one seems to have a definitive answer. My own research, along with that of others, shows that patents are associated with higher levels of venture funding. In my own data (which comes from the Kauffman Firm Survey), startups with patents were 10 times as likely to have venture funding as startups without patents.

But even this is not definitive. First, a small fraction of firms--even of those with patents--get venture funding, so it is unclear what role patents play. Second, causality is notoriously hard to show, especially where unobserved factors may lead to both patenting and success. Third, timing is also difficult; many have answered my simple data with the argument that it is the funding that causes patenting, and not vice-versa. Fourth (and contrary to the third in a way), signaling theory suggests that the patent (and even the patent application) signals value to investors, regardless of the value of the underlying invention.

Following my last post, I'll discuss here a paper that uses granular application data to get at some of these causality questions. The paper is The Bright Side of Patents by Joan Farre-Mensa (Harvard Business School), Deepak Hegde (NYU Stern School of Business), and Alexander Ljungqvist (NYU Finance Department). Here is the abstract:
Motivated by concerns that the patent system is hindering innovation, particularly for small inventors, this study investigates the bright side of patents. We examine whether patents help startups grow and succeed using detailed micro data on all patent applications filed by startups at the U.S. Patent and Trademark Office (USPTO) since 2001 and approved or rejected before 2014. We leverage the fact that patent applications are assigned quasi-randomly to USPTO examiners and instrument for the probability that an application is approved with individual examiners’ historical approval rates. We find that patent approvals help startups create jobs, grow their sales, innovate, and reward their investors. Exogenous delays in the patent examination process significantly reduce firm growth, job creation, and innovation, even when a firm’s patent application is eventually approved. Our results suggest that patents act as a catalyst that sets startups on a growth path by facilitating their access to capital. Proposals for patent reform should consider these benefits of patents alongside their alleged costs.
The sample size is large: more than 45,000 companies, which the authors believe constitute all the startups filing for patents during their sample years. For those not steeped in econometric lingo, the PTO examiner "instrument" exploits a source of variation in patent grants (the leniency of the quasi-randomly assigned examiner) that is plausibly unrelated to a startup's underlying quality, which allows the authors to draw causal inferences from the data. More on this after the jump.
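For readers who want to see the mechanics, here is a minimal two-stage least squares sketch of how an examiner-leniency instrument works. This is not the authors' code; the file name, column names, and bare-bones specification are all my assumptions (the paper's actual specification includes controls, fixed effects, and appropriate standard errors):

```python
# Hedged sketch of an examiner-leniency instrument, not the paper's actual code.
# Idea: because applications are assigned quasi-randomly, an examiner's approval
# rate on *other* applications predicts whether this application is granted but
# is plausibly unrelated to the startup's underlying quality.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("startup_applications.csv")  # hypothetical file and columns

# Leave-one-out approval rate for each application's examiner
grp = df.groupby("examiner_id")["approved"]
df["leniency"] = (grp.transform("sum") - df["approved"]) / (grp.transform("count") - 1)

# First stage: examiner leniency predicts whether this application is approved
first = sm.OLS(df["approved"], sm.add_constant(df[["leniency"]])).fit()
df["approved_hat"] = first.fittedvalues

# Second stage: regress the outcome (here, a hypothetical log employment growth
# measure) on predicted approval. Manual two-stage OLS understates the standard
# errors; a real analysis would use a proper IV estimator and cluster by examiner.
second = sm.OLS(df["log_emp_growth"], sm.add_constant(df[["approved_hat"]])).fit()
print(second.summary())
```

The key identifying assumption is that examiner assignment is as good as random (at least within art unit and time period), so leniency affects startup outcomes only through the grant decision itself.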

Wednesday, January 13, 2016

A New Source for Using Patent Application Data for Empirical Research

Getting detailed patent application data is notoriously difficult. Traditionally, such information was only available via Public PAIR, the PTO's interface for retrieving application data, which is useful but clunky for bulk research. Thus, there haven't been too many such papers. Sampat & Lemley was an early and well-known paper from 2009, which looked at a cross-section of 10,000 applications. That was surely daunting work at the time.

Since then, FOIA requests and bulk downloads have allowed for more comprehensive papers. Frakes & Wasserman have papers using a more comprehensive dataset, as does Tu.

But now the PTO has released an even more comprehensive dataset, available to the masses. This is a truly exciting day for people who have yearned for better patent application data but lacked the resources to obtain it. Here's an abstract introducing the dataset, by Graham, Marco & Miller -- The USPTO Patent Examination Research Dataset: A Window on the Process of Patent Examination:

A surprisingly small amount of empirical research has been focused on the process of obtaining a patent grant from the United States Patent and Trademark Office (PTO). The purpose of this document is to describe the Patent Examination Dataset (PatEX), making a large amount of information from the Public Patent Application Information Retrieval system (Public PAIR) more readily available to researchers. PatEX includes records on over 9 million US patent applications, with information complete as of January 24, 2015 for all applications included in Public PAIR with filing dates prior to January 1, 2015. Variables in PatEX cover most of the relevant information related to US patent examination, including characteristics of inventions, applications, applicants, attorneys, and examiners, and status codes for all actions taken, by both the applicant and examiner, throughout the examination process. A significant section of this documentation describes the selectivity issues that arise from the omission of “nonpublic” applications. We find that the selection issues were much more pronounced for applications received prior to the implementation of the American Inventors Protection Act (AIPA) in late 2000. We also find that the extent of any selection bias will be at least partially determined by the sub-population of interest in any given research project.
That's right, data on 9 million patent applications - the patents granted, and the patent applications not granted (after applications began to be published in 2000). The paper does a comparison with the internal PTO records (which show non-public applications) to determine whether there is any bias in the data. There are a few areas where there isn't perfect alignment, but the data is generally representative. That said, be sure to read the paper to make sure the applications you're studying are representative (much older applications, for example, have more trouble aligning with USPTO internal data).

The data isn't completely straightforward - each "tab" in Public PAIR is a different data file, so users will have to merge them as needed (easily done in any statistics package, SQL, or even with Excel lookup functions).
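For anyone gearing up to do that merge, here is roughly what it might look like in pandas. The file and column names below are guesses for illustration only; check the PatEX documentation for the actual ones:

```python
# Illustrative merge of two PatEX "tabs"; file and column names are assumptions.
import pandas as pd

apps = pd.read_csv("application_data.csv", dtype=str)
txns = pd.read_csv("transactions.csv", dtype=str)

# One row per application, joined to its (many) examination events
merged = apps.merge(txns, on="application_number", how="left")
print(merged.head())
```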

Thanks to Alan Marco, Chief Economist at the PTO, as well as anyone else involved in getting this project done. I believe it will be of great long term research value.

In my next post, I'll highlight a recent paper that uses granular examination data to useful ends.

Monday, January 11, 2016

Samuel Ernst on Reviving the Reverse Doctrine of Equivalents

Samuel Ernst (Chapman University) has recently posted The Lost Precedent of the Reverse Doctrine of Equivalents, which argues that this doctrine is the solution to the patent crisis. The reverse doctrine of equivalents was established by the Supreme Court in the 1898 case Boyden Power-Brake v. Westinghouse, in which the Court wrote that "[t]he patentee may bring the defendant within the letter of his claims, but if the latter has so far changed the principle of the device that the claims of the patent, literally construed, have ceased to represent his actual invention," the defendant does not infringe.

Here is Professor Ernst's abstract:
Proponents of legislative patent reform argue that the current patent system perversely impedes true innovation in the name of protecting a vast web of patented inventions, the majority of which are never even commercialized for the benefit of the public. Opponents of such legislation argue that comprehensive, prospective patent reform legislation would harm the incentive to innovate more than it would curb the vexatious practices of non-practicing entities. But while the “Innovation Act” wallows in Congress, there is a common law tool to protect innovation from the patent thicket lying right under our noses: the reverse doctrine of equivalents. Properly applied, this judge-made doctrine can be used to excuse infringement on a case-by-case basis if the court determines that the accused product is substantially superior to the patented invention, despite proof of literal infringement. Unfortunately, the reverse doctrine is disfavored by the Court of Appeals for the Federal Circuit and therefore rarely applied. It was not always so. This article is the first comprehensive study of published opinions applying the reverse doctrine of equivalents to excuse infringement between 1898, when the Supreme Court established the doctrine, and the 1982 creation of the Federal Circuit. This “lost precedent” reveals a flexible doctrine that takes into account the technological and commercial superiority of the accused product to any embodiment of the patented invention made by the patent-holder. An invigorated reverse doctrine of equivalents could therefore serve to protect true innovations from uncommercialized patents on a case-by-case basis, without the potential harm to the innovation incentive that prospective patent legislation might cause.
Interestingly, according to Ernst, "the Second, Sixth, and Ninth Circuits had precedent requiring that the district court must always consider reverse equivalents prior to determining infringement," and the standard was only whether the accused product was "substantially changed," not whether it was a "radical improvement" (a standard that emerged from scholarly articles, not case law).

I don't have high hopes for the revival of this doctrine, but the Federal Circuit has made clear that it is not dead yet; for example, Plant Genetic Systems v. DeKalb (2003) quoted an earlier case as saying that "the judicially-developed 'reverse doctrine of equivalents' . . . may be safely relied upon to preclude improper enforcement against later developers." So litigators should keep this in their toolkits, just in case.

Tuesday, December 22, 2015

Burk: Is Dolly patentable subject matter in light of Alice?

Dan Burk's work should already be familiar to those who follow patentable subject matter debates (see, e.g., here, here, and here). In a new essay, Dolly and Alice, he questions whether the Federal Circuit's May 2014 In re Roslin decision—holding clones such as Dolly to not be patentable subject matter—should have come out differently under the Supreme Court's June 2014 decision in Alice v. CLS Bank. Short answer: yes.

Burk does not have kind words for either the Federal Circuit or the Supreme Court, and he reiterates his prior criticism of developments like the gDNA/cDNA distinction in Myriad. His analysis of how Roslin should be analyzed under Alice begins on p. 11 of the current draft:
[E]ven assuming that the cloned sheep failed the first prong of the Alice test, the analysis would then move to the second prong to look for an "inventive concept" that takes the claimed invention beyond an attempt to merely capture the prohibited category of subject matter identified in the first step. . . . The Roslin patent claims surely entail such an inventive concept in the method of creating the sheep. The claims recite "clones," which the specification discloses were produced by a novel method that is universally acknowledged to have been a highly significant and difficult advance in reproductive technology—an "inventive concept" if there ever was one . . . [which] was not achieved via conventional, routine, or readily available techniques . . . .
But while Burk thinks Roslin might have benefited from the Alice framework, he also contends that this exercise demonstrates the confusion Alice creates across a range of doctrines, and particularly for product by process claims. He concludes by drawing an interesting parallel to the old Durden problem of how the novelty of a starting material affects the patentability of a process, and he expresses skepticism that there is any coherent way out; rather, he thinks Alice "leaves unsettled questions that will haunt us for years to come."

Tuesday, December 15, 2015

3 New Copyright Articles: Buccafusco, Bell & Parchomovsky, Grimmelmann

My own scholarship and scholarly reading focuses most heavily on patent law, but I've recently come across a few interesting copyright papers that seem worth highlighting:
  • Christopher Buccafusco, A Theory of Copyright Authorship – Argues that "authorship involves the intentional creation of mental effects in an audience," which expands copyrightability to gardens, cuisine, and tactile works, but withdraws it from aspects of photographs, taxonomies, and computer programs.
  • Abraham Bell & Gideon Parchomovsky, The Dual-Grant Theory of Fair Use – Argues that rather than addressing market failure, fair use calibrates the allocation of uses among authors and the public. A prima facie finding of fair use in certain categories (such as political speech) could only be defeated by showing the use would eliminate sufficient incentives for creation.
  • James Grimmelmann, There's No Such Thing as a Computer-Authored Work – And It's a Good Thing, Too – "Treating computers as authors for copyright purposes is a non-solution to a non-problem. It is a non-solution because unless and until computer programs can qualify as persons in life and law, it does no practical good to call them 'authors' when someone else will end up owning the copyright anyway. And it responds to a non-problem because there is nothing actually distinctive about computer-generated works."
Are there other copyright pieces posted this fall that I should take a look at?

Update: For readers not on Twitter, Chris Buccafusco added some additional suggestions:

Tuesday, December 8, 2015

Bernard Chao on Horizontal Innovation and Interface Patents

Bernard Chao has posted an interesting new paper, Horizontal Innovation and Interface Patents (forthcoming in the Wisconsin Law Review), on inventions whose value comes merely from compatibility rather than improvements on existing technology. And I'm grateful to him for writing an abstract that concisely summarizes the point of the article:
Scholars understandably devote a great deal of effort to studying how well patent law works to incentivize the most important inventions. After all, these inventions form the foundation of our new technological age. But very little time is spent focusing on the other end of the spectrum, inventions that are no better than what the public already has. At first blush, studying such “horizontal” innovation seems pointless. But this inquiry actually reveals much about how patents can be used in unintended, and arguably, anticompetitive ways.
This issue has roots in one unintuitive aspect of patent law. Despite the law’s goal of promoting innovation, patents can be obtained on inventions that are no better than existing technology. Such patents might appear worthless, but companies regularly obtain these patents to cover interfaces. That is because interface patents actually derive value from two distinct characteristics. First, they can have “innovation value” that is based on how much better the patented interface is than prior technology. Second, interface patents can also have “compatibility value.” In other words, the patented technology is often needed to make products operate (i.e. compatible) with a particular interface. In practical terms, this means that an interface patent that is not innovative can still give a company the ability to foreclose competition.
This undesirable result is a consequence of how patent law has structured its remedies. Under current law, recoveries implicitly include both innovation and compatibility values. This Article argues that the law should change its remedies to exclude the latter kind of recovery. This proposal has two benefits. It would eliminate wasteful patents on horizontal technology. Second, and more importantly, the value of all interface patents would be better aligned with the goals of the patent system. To achieve these outcomes, this Article proposes changes to the standards for awarding injunctions, lost profits and reasonable royalties.
The article covers examples ranging from razor/handle interfaces to Apple's patented Lightning interface, so it is a fun read. And it also illustrates what seems like an increasing trend in patent scholarship, in which authors turn to remedies as the optimal policy tool for effecting their desired changes.