Separating Fact from Fiction in the Notice and Takedown Debate

Cross-posted from the Center for the Protection of Intellectual Property (CPIP) blog.

By Kevin Madigan & Devlin Hartline

With the Copyright Office undertaking a new study to evaluate the impact and effectiveness of the Section 512 safe harbor provisions, there’s been much discussion about how well the DMCA’s notice and takedown system is working for copyright owners, service providers, and users. While hearing from a variety of viewpoints can help foster a healthy discussion, it’s important to separate rigorous research efforts from overblown reports that offer incomplete data in support of dubious policy recommendations.

Falling into the latter category is Notice and Takedown in Everyday Practice, a recently released study claiming to take an in-depth look at how well the notice and takedown system operates after nearly twenty years in practice. The study has garnered numerous headlines that repeat its conclusion that nearly 30% of all takedown requests are “questionable” and that echo its suggestions for statutory reforms that invariably disfavor copyright owners. But what the headlines don’t mention is that the study presents only a narrow and misleading assessment of the notice and takedown process that overstates its findings and fails to adequately support its broad policy recommendations.

Presumably released to coincide with the deadline for submitting comments to the Copyright Office on the state of Section 512, the authors claim to have produced “the broadest empirical analysis of the DMCA notice and takedown” system to date. They make bold pronouncements about how “the notice and takedown system . . . meets the goals it was intended to address” and “continues to provide an efficient method of enforcement in many circumstances.” But the goals identified by the authors are heavily skewed towards service providers and users at the expense of copyright owners, and the authors include no empirical analysis of whether the notice and takedown system is actually effective at combating widespread piracy.

The study reads more like propaganda than robust empiricism. It should be taken for what it is: A policy piece masquerading as an independent study. The authors’ narrow focus on one sliver of the notice and takedown process, with no analysis of the systemic results, leads to conclusions and recommendations that completely ignore the central issue of whether Section 512 fosters an online environment that adequately protects the rights of copyright owners. The authors conveniently ignore this part of the DMCA calculus and instead put forth a series of proposals that would systematically make it harder for copyright owners to protect their property rights.

To its credit, the study acknowledges many of its own limitations. For example, the authors recognize that the “dominance of Google notices in our dataset limits our ability to draw broader conclusions about the notice ecosystem.” Indeed, over 99.992% of the individual requests in the dataset for the takedown study were directed at Google, with 99.8% of that dataset directed at Google Search in particular. Of course, search engines do not include user-generated content—the links Google provides are links that Google itself collects and publishes. There are no third parties to alert about the takedowns since Google is taking down its own content. Likewise, removing links from Google Search does not actually remove the linked-to content from the internet.

The authors correctly admit that “the characteristics of these notices cannot be extrapolated to the entire world of notice sending.” A more thorough quantitative study would include data on sites that host user-generated content, like YouTube and Facebook. As it stands, the study gives us some interesting data on one search engine, but even that data is limited to a sample size of 1,826 requests out of 108 million over a six-month period in mid-2013. And it’s not even clear how these samples were randomized since the authors admittedly created “tranches” to ensure the notices collected were “of great substantive interest,” but they provide no details about how they created these tranches.

Despite explicitly acknowledging that the study’s data is not generalizable, the authors nonetheless rely on it to make numerous policy suggestions that would affect the entire notice and takedown system and that would stack the deck further in favor of infringement and against copyright owners. They even identify some of their suggestions as explicitly reflecting “Public Knowledge’s suggestion,” which is a far cry from a reasoned academic approach. The authors do note that “any changes should take into account the interests of . . . small- and medium-sized copyright holders,” but this is mere lip service. Their proposals would hurt copyright owners of all shapes and sizes.

The authors justify their policy proposals by pointing to the “mistaken and abusive takedown demands” that they allegedly uncover in the study. These so-called “questionable” notices are the supposed proof that the entire notice and takedown system needs fixing. A closer look at these “questionable” notices shows that they’re not nearly so questionable. The authors claim that 4.2% of the notices surveyed (about 77 notices) are “fundamentally flawed because they targeted content that clearly did not match the identified infringed work.” This figure includes obvious mismatches, where the titles aren’t even the same. But it also includes ambiguous notices, such as where the underlying work does not match the title or where the underlying page changes over time.

The bulk of the so-called “questionable” notices comes from those notices that raise “questions about compliance with the statutory requirements” (15.4%, about 281 notices) or raise “potential fair use defenses” (7.3%, about 133 notices). As to the statutory requirements issue, the authors argue that these notices make it difficult for Google to locate the material to take down. This claim is severely undercut by the fact that, as they acknowledge in a footnote, Google complies with 97.5% of takedown notices overall. Moreover, their argument wades into the murky waters of whether copyright owners can send service providers a “representative list” of infringing works. Turning to the complaint about potential fair uses, the authors argue that copyright owners are not adequately considering “mashups, remixes, or covers.” But none of these uses are inherently fair, and there’s no reason to think that the notices were sent in bad faith just because someone might be able to make a fair use argument.

The authors claim that their “recommendations for statutory reforms are relatively modest,” but that supposed modesty is absent from their broad list of suggestions. Of course, everything they suggest increases the burdens and liabilities of copyright owners while lowering the burdens and liabilities of users, service providers, and infringers. Having overplayed the data on “questionable” notices, the authors reveal their true biases. And it’s important to keep in mind that they make these broad suggestions that would affect everyone in the notice and takedown system after explicitly acknowledging that their data “cannot be extrapolated to the entire world of notice sending.” Indeed, the study contains no empirical data on sites that host user-generated content, so there’s nothing whatsoever to support any changes for such sites.

The study concludes that the increased use of automated systems to identify infringing works online has resulted in the need for better mechanisms to verify the accuracy of takedown requests, including human review. But the data is limited to small surveys with secret questions and a tiny fraction of notices sent to one search engine. The authors offer no analysis of the potential costs of implementing their recommendations, nor do they consider how it might affect the ability of copyright owners to police piracy. Furthermore, data presented later in the study suggests that increased human review might have little effect on the accuracy of takedown notices. Not only do the authors fail to address the larger problem of whether the DMCA adequately addresses online piracy, their suggestions aren’t even likely to address the narrower problem of inaccurate notices that they want to fix.

Worse still, the study almost completely discards the ability of users to contest mistaken or abusive notices by filing counternotices. This is the solution that’s already built into the DMCA, yet the authors inexplicably dismiss it as ineffective and unused. Apart from providing limited answers from a few unidentified survey respondents, the authors offer no data on the frequency or effectiveness of counternotices. The study repeatedly criticizes the counternotice system as failing to offer “due process protection” to users, but that belief is grounded in the notion that a user who fails to send a counternotice has somehow been denied the chance to do so. Moreover, it implies a constitutional right that is not at issue when two parties interact in the absence of government action. The same holds true for the authors’ repeated—and mistaken—invocation of “freedom of expression.”

More fundamentally, the study ignores the fact that the counternotice system is stacked against copyright owners. A user can simply file a counternotice and have the content in question reposted, and most service providers are willing to repost the content following a counternotice because they’re no longer on the hook should the content turn out to be infringing. The copyright owner, by contrast, then faces the choice of allowing the infringement to continue or filing an expensive lawsuit in federal court. The study makes it sound like users are rendered helpless because counternotices are too onerous, but the reality is that the system leaves copyright owners practically powerless to combat bad faith counternotices.

Pretty much everyone agrees that the notice and takedown system needs a tune-up. The amount of infringing content available online today is immense. This rampant piracy has resulted in an incredible number of takedown notices being sent to service providers by copyright owners each day. Undoubtedly, the notice and takedown system should be updated to address these realities. And to the extent that some are abusing the system, they should be held accountable. But in considering changes to the entire system, we should not be persuaded by biased studies based on limited (and secret) datasets that provide little to no support for their ultimate conclusions and recommendations. While such studies may make for evocative headlines, they don’t make for good policy.

Acknowledging the Limitations of the FTC’s PAE Study

Cross-posted from the Center for the Protection of Intellectual Property (CPIP) blog.

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affects a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.

Copyright Scholars: Courts Have Disrupted the DMCA’s Careful Balance of Interests

Cross-posted from the Center for the Protection of Intellectual Property (CPIP) blog.

The U.S. Copyright Office is conducting a study of the safe harbors under Section 512 of the DMCA, and comments are due today. Working with Victor Morales and Danielle Ely from Mason Law’s Arts & Entertainment Advocacy Clinic, we drafted and submitted comments on behalf of several copyright law scholars. In our Section 512 comments, we look at one narrow issue that we believe is the primary reason the DMCA is not working as it should: the courts’ failure to properly apply the red flag knowledge standard. We argue that judicial interpretations of red flag knowledge have disrupted the careful balance of responsibilities Congress intended between copyright owners and service providers. Instead of requiring service providers to take action in the face of red flags, courts have allowed them to turn a blind eye and bury their heads in the sand.

Whether Section 512’s safe harbors are working as intended is a hotly contested issue. On the one hand, hundreds of artists and songwriters are calling for changes “to the antiquated DMCA which forces creators to police the entire internet for instances of theft, placing an undue burden on these artists and unfairly favoring technology companies and rogue pirate sites.” On the other hand, groups like the Internet Association, which includes tech giants such as Google and Facebook, claim that the safe harbors are “working effectively” since they “strike a balance between facilitating free speech and creativity while protecting the interests of copyright holders.” The Internet Association even claims that “the increasing number of notice and takedown requests” shows that the DMCA is working.

Of course, it’s utter nonsense to suggest that the more takedown notices sent and processed, the more we know the DMCA is working. The point of the safe harbors, according to the Senate Report on the DMCA, is “to make digital networks safe places to disseminate and exploit copyrighted materials.” The proper metric of success is not the number of takedown notices sent; it’s whether the internet is a safe place for copyright owners to disseminate and exploit their works. The continuing availability of huge amounts of pirated works should tip us off that the safe harbors are not working as intended. If anything, the increasing need for takedown notices suggests that things are getting worse for copyright owners, not better. If the internet were becoming a safer place, the number of takedown notices would be decreasing. It’s not surprising that service providers enjoy the status quo, given that the burden of tracking down and identifying infringement doesn’t fall on them, but this is not the balance that Congress intended to strike.

Our comments to the Copyright Office run through the relevant legislative history to show what Congress really had in mind—and it wasn’t copyright owners doing all of the work in locating and identifying infringement online. Instead, as noted in the Senate Report, Congress sought to “preserve[] strong incentives for service providers and copyright owners to cooperate to detect and deal with copyright infringements that take place in the digital networked environment.” The red flag knowledge standard was a key leverage point to encourage service providers to participate in the effort to detect and eliminate infringement. Unfortunately, courts thus far have interpreted the standard so narrowly that, beyond acting on takedown notices, service providers have little incentive to work together with copyright owners to prevent piracy. Even in cases with the most crimson of flags, courts have failed to strip service providers of their safe harbor protection. Perversely, the current case law incentivizes service providers to actively avoid doing anything when they see red flags, lest they gain actual knowledge of infringement and jeopardize their safe harbors. This is exactly the opposite of what Congress intended.

The Second and Ninth Circuits have interpreted the red flag knowledge standard to require knowledge of specific infringing material before service providers can lose their safe harbors. While tech giants might think this is great, it’s terrible for authors and artists who need service providers to carry their share of the load in combating online piracy. Creators are left in a miserable position where they bear the entire burden of policing infringement across an immense range of services, effectively making it impossible to prevent the deluge of piracy of their works. The Second and Ninth Circuits believe red flag knowledge should require specificity because otherwise service providers wouldn’t know exactly what material to remove when faced with a red flag. We argue that Congress intended service providers with red flag knowledge of infringing activity in general to then bear the burden of locating and removing the specific infringing material. This is the balance of responsibilities that Congress had in mind when it crafted the red flag knowledge standard and differentiated it from the actual knowledge standard.

But all hope is not lost. The Second and Ninth Circuits are but two appellate courts, and there are many others that have yet to rule on the red flag knowledge issue. Moreover, the Supreme Court has never interpreted the safe harbors of the DMCA. We hope that our comments will help expose the underlying problem that hurts so many creators today who are stuck playing the DMCA’s whack-a-mole game when their very livelihoods are at stake. Congress intended the DMCA to be the cornerstone of a shared-responsibility approach to fighting online piracy. Unfortunately, it has become a shield that allows service providers to enable piracy on a massive scale without making any efforts to prevent it beyond acting on takedown notices. The fact that search engines can still index The Pirate Bay—an emblematic piracy site that even has the word “pirate” in its name—without concern of losing their safe harbor protection is a testament to how the courts have turned Congress’ intent on its head. We hope that the Copyright Office’s study will shed light on this important issue.

To read our Section 512 comments, please click here.

Changes to Patent Venue Rules Risk Collateral Damage to Innovators

Cross-posted from the Center for the Protection of Intellectual Property (CPIP) blog.

Advocates for changing the patent venue rules, which dictate where patent owners can sue alleged infringers, have been arguing that their remedy will cure the supposed disease of abusive “trolls” filing suit after suit in the Eastern District of Texas. This is certainly true, but it’s only true in the sense that cyanide cures the common cold. What these advocates don’t mention is that their proposed changes will weaken patent rights across the board by severely limiting where all patent owners—even honest patentees that no one thinks are “trolls”—can sue for infringement. Instead of acknowledging the broad collateral damage their changes would cause to all patent owners, venue revision advocates invoke the talismanic “troll” narrative and hope that nobody will look closely at the details. The problem with their take on venue revision is that it’s neither fair nor balanced, and it continues the disheartening trend of equating “reform” with taking more sticks out of every patent owner’s bundle of rights.

Those pushing for venue revision are working on two fronts, one judicial and the other legislative. On the judicial side, advocates have injected themselves into the TC Heartland case currently before the Federal Circuit. Though it has no direct connection to the Eastern District of Texas, advocates see it as a chance to shut plaintiffs out of that venue. Their argument in that case is so broad that it would drastically restrict where all patentees can sue for infringement—even making it impossible to sue infringing foreign defendants. Yet they don’t mention this collateral damage as they sell the “troll” narrative. On the legislative side, advocates have gotten behind the VENUE Act (S.2733), introduced in the Senate last Thursday. This bill leaves open a few more venues than TC Heartland, though it still significantly limits where all patent owners can sue. Advocates here also repeat the “troll” mantra instead of offering a single reason why it’s fair to change the rules for everyone else.

With both TC Heartland and the VENUE Act, venue revision advocates want to change the meaning of one word: “resides.” The specific patent venue statute, found in Section 1400(b) of Title 28, provides that patent infringement suits may be brought either (1) “in the judicial district where the defendant resides” or (2) “where the defendant has committed acts of infringement and has a regular and established place of business.” On its face, this seems fairly limited, but the key is the definition of the word “resides.” The general venue statute, found in Section 1391(c)(2) of Title 28, defines residency broadly: Any juridical entity, such as a corporation, “shall be deemed to reside, if a defendant, in any judicial district in which such defendant is subject to the court’s personal jurisdiction with respect to the civil action in question.” Taken together, these venue statutes mean that patent owners can sue juridical entities for infringement anywhere the court has personal jurisdiction over the defendant.

The plaintiff in TC Heartland is Kraft Foods, a large manufacturer incorporated in Delaware and headquartered in Illinois that runs facilities and sells products in Delaware. The defendant is TC Heartland, a large manufacturer incorporated and headquartered in Indiana. TC Heartland manufactured the allegedly-infringing products in Indiana and then knowingly shipped a large number of them directly into Delaware. Kraft Foods sued TC Heartland in Delaware on the theory that these shipments established personal jurisdiction—and thus venue—in that district. TC Heartland argued that venue was improper in Delaware, but the district court rejected that argument (see here and here). TC Heartland has now petitioned the Federal Circuit for a writ of mandamus, arguing that the broad definition of “reside” in Section 1391(c)(2) does not apply to the word “resides” in Section 1400(b). On this reading, venue would not lie in Delaware simply because TC Heartland did business there.

TC Heartland mentions in passing that its narrow reading of Section 1400(b) is favorable as a policy matter because it would prevent venue shopping “abuses,” such as those allegedly occurring in the Eastern District of Texas. Noticeably, TC Heartland doesn’t suggest any policy reasons why Kraft Foods should not be permitted to bring an infringement suit in Delaware, and neither do any of the amici supporting TC Heartland. The amicus brief by the Electronic Frontier Foundation (EFF) et al. argues that Congress could not have intended “to permit venue in just about any court of the patent owner’s choosing.” But why is this hard to believe? The general rule for all juridical entities is that they can be sued in any district where they choose to do business over matters relating to that business. This rule has long been regarded as perfectly fair and reasonable since these entities get both the benefits and the burdens of the law wherever they do business.

The EFF brief goes on for pages bemoaning the perceived ills of forum shopping in the Eastern District of Texas without once explaining the relevancy to Kraft Foods. It asks the Federal Circuit to “restore balance in patent litigation,” but its vision of “balance” fails to account for the myriad honest patent owners like Kraft Foods that nobody considers to be “trolls.” The same holds true for the amicus brief filed by Google et al. that discusses the “harm forum shopping causes” without elucidating how it has anything to do with Kraft Foods. Worse still, the position being urged by these amici would leave no place for patent owners to sue foreign defendants. If the residency definitions in Section 1391(c) don’t apply to Section 1400(b), as they argue, then a foreign defendant that doesn’t reside or have a regular place of business in the United States can never be sued for patent infringement—an absurd result. But rather than acknowledge this collateral damage, the amici simply sweep it under the rug.

The simple fact is that there’s nothing untoward about Kraft Foods filing suit in Delaware. That’s where TC Heartland purposefully directed its conduct when it knowingly shipped the allegedly-infringing products there. It’s quite telling that venue revision advocates are using TC Heartland as a platform for changing the rules generally when they can’t even explain why the rules should be changed in that very case. And this is the problem: If there’s no good reason for keeping Kraft Foods out of Delaware, then they shouldn’t be advocating for changes that would do just that. Keeping patent owners from suing in the Eastern District of Texas is no reason to keep Kraft Foods out of Delaware, and it’s certainly no reason to make it impossible for all patent owners to sue foreign-based defendants that infringe in the United States. Advocates of venue revision tacitly admit as much when they say nothing about this collateral damage. This isn’t fair and balanced; it’s another huge turn of the anti-patent ratchet disguised as “reform.”

The same is true with the VENUE Act, which copies almost verbatim the venue provisions of the Innovation Act. This bill would also severely restrict where all patent owners can sue by making it so that a defendant doesn’t “reside” wherever a district court has personal jurisdiction arising from its allegedly-infringing conduct. To its credit, the VENUE Act does include new provisions allowing suit where an inventor conducted R&D that led to the application for the patent at issue. It also allows suit wherever either party “has a regular and established physical facility” and has engaged in R&D of the invention at issue, “manufactured a tangible product” that embodies that invention, or “implemented a manufacturing process for a tangible good” in which the claimed process is embodied. Furthermore, the bill makes the same venue rules applicable to patent owners suing for infringement and accused infringers filing for a declaratory judgment, and it solves the problem of foreign-based defendants by stating that the residency definition in Section 1391(c)(3) applies in that situation.

While the proposed changes in the VENUE Act aren’t as severe as those sought by venue revision advocates in TC Heartland, they nevertheless take numerous venues off the table for patentees and accused infringers alike. But rather than acknowledge these wide-sweeping changes and offer reasons for implementing them, advocates of the VENUE Act simply harp on the narrative of “trolls” in Texas. For example, Julie Samuels at Engine argues that the “current situation in the Eastern District of Texas makes it exceedingly difficult for defendants” to enforce their rights and that we need to “level the playing field.” Likewise, Elliot Harmon at the EFF Blog suggests that the VENUE Act will “finally address the egregious forum shopping that dominates patent litigation” and “bring a modicum of fairness to a broken patent system.” Yet neither Samuels nor Harmon explains why we should change the rules for all patent owners and accused infringers—especially the ones that aren’t forum shopping in Texas.

The VENUE Act would simply take a system that is perceived to favor plaintiffs and replace it with one that definitely favors defendants. For instance, an alleged infringer with continuous and systematic contacts in the Eastern District of Virginia can currently be sued there, but the VENUE Act would take away this option since it’s based on mere general jurisdiction. Likewise, the current venue rules allow suits anywhere the court has specific jurisdiction over the defendant—potentially in every venue for a nationwide enterprise—yet the VENUE Act would make dozens of these venues improper. Furthermore, patentees can now bring suits against multiple defendants in a single forum, saving time and money for all involved, but the VENUE Act would make such consolidated suits far less likely.

The “troll” narrative employed by venue revision advocates may sound appealing on the surface, but it quickly becomes clear that they either haven’t considered or don’t care about how their proposed changes would affect everyone else. If we’re going to talk about abusive litigation practices in need of revision, we should talk about where they’re occurring across the entire patent system. This discussion should include the practices of both patent owners and alleged infringers, and we should directly confront the systemic collateral damage that any proposed changes would cause. As it stands, there’s little hope that the current myopic focus on “trolls” will lead to any true reform that’s fair and balanced for everyone.

No Consensus That Broad Patent ‘Reform’ is Necessary or Helpful

Adam Mossoff and I have an op-ed that was published on The Hill. Here’s a brief excerpt:

Two recent op-eds published in The Hill argue that broad patent legislation—misleadingly labeled “reform”—is needed because the U.S. patent system is fundamentally broken. In the first, Timothy Lee contends that opponents “cannot with a straight face” argue that we don’t need wide-sweeping changes to our patent system. In the second, Michele Boldrin and David K. Levine maintain that there is “consensus among academic researchers” that the system is “failing.”

Both op-eds suggest that there are no principled reasons, whether legal or economic, to object to the overhaul of the patent system included in the Innovation Act. Both op-eds are wrong.

To read the rest of this op-ed, please visit The Hill.