Recurring Myths About the Legal Obligations of Online Platforms

Cross-posted from Ammori.org

In recent months, some copyright holders, pharmaceutical companies, and state attorneys general have made allegations against Internet companies that help users find and share information. In short, they claim that because some users engage in copyright infringement, sell counterfeit products, or otherwise encourage potentially criminal activity on the Internet, the users’ Internet platforms should be held responsible for these misdeeds. That is, Google should be punished for any user’s copyright infringement on YouTube, Facebook for any user’s harassing post, and Twitter for any user’s slanderous tweet. According to the critics, these companies should screen all users’ speech and take on the role of editors or publishers, rather than serving as open platforms for the speech of millions.

Many of these allegations focus exclusively on the biggest company in the space, Google, even though Google already invests considerable resources in reducing infringement, counterfeiting, and unlawful activity on its platforms. One state attorney general accused Google of “a failure to stop illegal sites from selling stolen intellectual property,” as though Google has the obligation or even the ability to stamp out copyright infringement on every “site” on the Internet.

For those who follow Internet policy, these types of arguments should sound familiar, stale, and still misguided. These arguments have failed repeatedly in federal courts, Congress, and the court of public opinion. One wonders why, like zombies in a classic horror movie, these arguments just keep coming back from the dead.

As recently as 2011, some in Congress supported a now-infamous bill called SOPA designed to target Internet intermediaries for their users’ copyright misdeeds. SOPA’s co-sponsors also targeted Google and similarly served on committees focused on intellectual property—committees that often show an unbalanced attentiveness to the copyright industry’s concerns over those of average users and over important principles of free speech more generally.

To ensure digital platforms for user expression, Congress has wisely determined that speech platforms should generally not be held liable for their users’ misdeeds. Congress has done so through established and widely praised laws such as Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act. Courts have construed Section 230 of the CDA “broadly in all cases arising from the publication of user-generated content.”

Nonetheless, every few years, we see attempts to undermine intermediary immunity. While many such attempts might be well-intentioned, they are deeply flawed and would threaten the Internet’s role as an engine of free expression for hundreds of millions of Americans.

In this post, I respond to the recent allegations by rights-holders and state attorneys general. These critics mistakenly accuse companies of turning a blind eye to users’ potentially illegal behavior on search engines and video platforms. They also advance legal claims that technology platforms should be liable for any abuse on any of their services, despite a lack of support for such claims in the case law (and considerable support for the opposite position). As many of these arguments are specific to Google, I reply to those arguments and explain how my responses apply more broadly to other Internet companies.

 

The allegations fall into four categories:

(1) that Google does little to identify and remove objectionable content and is uncooperative with law enforcement;

(2) that the law should punish Internet intermediaries for hosting or linking to any objectionable content;

(3) that search autocomplete predictions and search results “aid and abet” violations of law;

(4) that placing advertising alongside harmful content demonstrates Google’s intention to profit from harmful activity.

Each of these arguments is mistaken, both in fact and principle, and accepting their legal implications would undermine free expression and legitimate economic activity online.

Myth 1: Google does little to identify and remove objectionable content and does not cooperate with law enforcement.

Google owns YouTube, the most popular site for individuals to upload and share videos online. The Digital Citizens Alliance, an advocacy group with the backing of pharmaceutical companies and the copyright industry, has pointed to videos that promote illegal activity, such as forging passports or buying prescription drugs illegally. Based on those videos appearing on YouTube, DCA claims that YouTube is “hosting evil on its servers.” Moreover, one state attorney general has suggested that Google is unwilling to make “meaningful reforms” to address harmful activity.

Reality: Google devotes considerable resources to removing objectionable content and working well with law enforcement.

Google cooperates with a host of partners and implements a wide range of tools to minimize objectionable content, while also balancing the interests of hundreds of millions of users to legitimately share and consume speech. A “zero-tolerance” policy punishing speech platforms for the most objectionable content uploaded by the least sympathetic users would cripple YouTube, Twitter, Facebook, and other platforms that comprise today’s digital town square.

YouTube supports the speech of billions of users. Over a billion users watch or share videos on YouTube every month, for free. One hundred hours of video are uploaded to YouTube every single minute. These videos include cute kittens, protests in Tahrir Square, breaking news reports, and commentary from war zones. Users can share their videos, respond to videos posted by others, express themselves, and provide instant news on such a massive scale only because Google does not screen every second of every uploaded video before posting. Google’s general search engine indexes almost every publicly accessible webpage, some 30 trillion of them, and handles more than a million searches per minute.

Minimizing objectionable content while enabling billions of speakers is a difficult task. YouTube’s rules, outlined in its Terms of Service and Community Guidelines, forbid the posting of child exploitation and other sexual content, hate speech, copyright infringement, and illegal activities that have an inherent risk of serious physical harm or death, among other things. YouTube removes content violating these rules and may also terminate the offending user’s account.

YouTube has a multifaceted process, involving technology and community reporting, to remove content violating its rules. YouTube developed a tool called Content ID that identifies copyrighted content and empowers rights holders to earn ad revenue from the video or request its removal from the site.
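For readers curious about the mechanics, Content ID is at bottom a fingerprint-matching problem: reference files supplied by rights holders are reduced to compact signatures, and new uploads are compared against those signatures. The sketch below is my own greatly simplified illustration of that idea, not Google’s actual implementation; real fingerprinting operates on audio and video features rather than raw values, and it is far more robust to edits and re-encoding.

```python
import hashlib

# Hypothetical, greatly simplified sketch of fingerprint matching.
# Real systems fingerprint audio/video features, not raw numbers,
# and tolerate edits, cropping, and re-encoding.

def fingerprint(samples, window=5):
    """Hash overlapping windows of a sample sequence into a set of signatures."""
    sigs = set()
    for i in range(len(samples) - window + 1):
        chunk = ",".join(str(s) for s in samples[i:i + window])
        sigs.add(hashlib.sha1(chunk.encode()).hexdigest())
    return sigs

def match_score(upload_sigs, reference_sigs):
    """Fraction of the upload's signatures that also appear in the reference."""
    if not upload_sigs:
        return 0.0
    return len(upload_sigs & reference_sigs) / len(upload_sigs)

# A reference supplied by a rights holder; the upload contains a copied segment.
reference = fingerprint([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7])
upload = fingerprint([7, 7, 4, 1, 5, 9, 2, 6, 5, 3, 0, 0])

if match_score(upload, reference) > 0.3:   # threshold chosen arbitrarily here
    print("Likely match: route to rights holder for a monetize-or-remove decision")
```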

Google has also devised robust tools for its search engine. In August 2013 alone, according to its Transparency Report, Google processed over 18 million requests to remove URLs from its search results based on copyright concerns. According to Google’s numbers, during the period from July 2011 to December 2011, it removed 97% of the requested URLs.

That said, current technology doesn’t provide for perfect filtering tools at the scale at which companies like Google, Facebook, and Twitter operate. Google offers robust community flagging tools to law enforcement, rights-holders, and the public at large. If a video violating YouTube’s rules is available on the site, a law enforcement officer can flag it with a click. Google will then review the video, confirm that it should be removed, and remove the content within 24 hours. If law enforcement (or any other interested party) seeks to remove any content, Google has provided a step-by-step complaint tool to make such a request. Google also provides the AdWords Counterfeit Goods Complaint Form for rights-holders to report advertisements for counterfeit goods. Copyright holders can file notices based on the procedure set out in Section 512 of the Digital Millennium Copyright Act, which requires online service providers to take down specific content following a simple notice-and-takedown procedure.
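For those who like to see the moving parts, the flag-and-review flow just described can be modeled very simply. The sketch below is my own hypothetical illustration, not Google’s actual system; the field names and the 24-hour review target are assumptions drawn only from the description above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# A toy model of a flag-and-review queue, loosely following the workflow
# described above (flag -> human review -> removal decision). Field names
# and the 24-hour review target are illustrative assumptions only.

@dataclass
class Flag:
    video_id: str
    reporter: str    # e.g., "law_enforcement", "rights_holder", "user"
    reason: str      # e.g., "counterfeit_goods", "copyright", "hate_speech"
    filed_at: datetime = field(default_factory=datetime.now)

    def review_due(self) -> datetime:
        # The description above says flagged content is reviewed within 24 hours.
        return self.filed_at + timedelta(hours=24)

def review(flag: Flag, violates_policy: bool) -> str:
    """A human reviewer confirms or rejects the flag; only confirmed flags trigger removal."""
    return "remove_content" if violates_policy else "leave_up"

flag = Flag(video_id="abc123", reporter="law_enforcement", reason="counterfeit_goods")
print(review(flag, violates_policy=True))   # -> remove_content
print("Review due by:", flag.review_due())
```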

With over a billion users as potential flaggers, community flagging is a central part of YouTube’s policy enforcement processes. It has proven to be the most effective way to address illegal content and is the simplest, most effective way for law enforcement or anyone else to request harmful videos be removed from YouTube.

Many in the law enforcement community already use these tools. Between 2011 and 2012, requests from law enforcement resulted in the removal of approximately 18,000 videos from YouTube.

Moreover, in addition to the availability of these tools, Google cooperates directly with law enforcement. In 2012, Google joined forces with the Food and Drug Administration, INTERPOL, and a variety of other organizations in “Operation Pangea.” The operation was a major international law enforcement effort targeting online sales of illegal counterfeit drugs.

The DCA and state attorneys general have found a number of outrageous videos that may be inappropriate and may violate YouTube’s community guidelines. This is unsurprising, considering the tens of millions of videos on YouTube. Flagging inappropriate videos for removal is a narrow, measured approach that can properly target harmful or illegal content while protecting controversial content that is otherwise permissible under the law. One wonders why the DCA didn’t just flag the objectionable videos.

Myth 2: Online services should be held responsible for any objectionable content in search or should pre-screen all links and content on their sites.

A state attorney general has accused Google of “a failure to stop illegal sites from selling stolen intellectual property” and pointed to “the prevalence of illegal drugs and other products that are promoted and even sold through Google platforms” to suggest Google is breaking the law. This attorney general also complained that enterprising users can find ways to “purchase drugs without a prescription through Google.”

Reality: Imposing strict liability on Internet intermediaries would cripple free speech online and conflict with decades of federal law.

Imposing liability on speech platforms and forcing them to pre-screen and filter all content would threaten free expression online. Congress, the courts, and the American public have repeatedly rejected that approach as contrary to our nation’s profound commitment to freedom of speech.

If a service were legally responsible for every video displayed even once, or for any website in its search directory, then it would have to pre-screen every upload and every link. That would require screening the hundred hours of video uploaded to YouTube every single minute, along with the trillions of links Google indexes across the web.

Despite the desires of some regulators, accurate real-time filtering technology does not exist. The algorithms underlying Google’s Content ID can spot some (by no means all) copyright infringement. Those algorithms need only match specific sound sequences or video images. But algorithms cannot reliably evaluate the harmfulness of specific videos, even if DCA wishes that Google could use its “vaunted analytical systems” to identify and remove questionable videos. For example, in CDT v. Pappert, a federal court struck down a Pennsylvania state law requiring the filtering of child pornography sites. While the intention of the law was obviously noble, and every decent person would agree that we should rid the world of such sites, the available filtering technology blocked 1,190,000 innocent websites and fewer than 400 child pornography websites. That is, according to the district court’s findings, 99.9% of the blocked sites were innocent; almost 3,000 innocent sites were blocked for every criminal site. The court struck down the law for its “significant overblocking.” (Nonetheless, of course, companies like Google and Facebook do actively combat and report to authorities instances of child pornography found on their systems.)

Even if it were possible, such large-scale pre-screening of every video and link would dramatically impede the free flow of information online. It would impose huge delays and costs on users, transforming YouTube’s immediate, free character or forcing the service to shut down altogether.

Imposing liability for any posted content would cripple every one of today’s top digital speech platforms. If Twitter were responsible for the illegality in any text tweet, photo, or video, it too would have to pre-screen its users’ content. That pre-screening would likely force Twitter to dramatically change its service, as Twitter now processes 400 million tweets each day and serves 200 million active users around the world each month. Facebook would face an even bigger challenge if punished for any user’s post. Facebook has 1.15 billion monthly active users who upload 350 million photos a day. Tumblr users upload 86 million posts per day. Dropbox has over one hundred million users, with one billion files uploaded every 24 hours, millions of which are shared with others. For these companies, and others, to pre-screen all content would transform their businesses, suppress user expression, and put many of them out of business.

To ensure that hundreds of millions of Americans can continue to use powerful digital platforms for speech, intermediaries should not be liable for every piece of content that slips through their existing processes for minimizing objectionable content.

Indeed, Congress, the courts, and the American public have repeatedly chosen to provide immunity to speech platforms: through Sections 230 of the CDA and 512 of the DMCA, through court decisions broadly interpreting those immunities, and through popular public uprisings against clumsy attempts to impose intermediary liability through bills like SOPA. Imposing such liability could also go against hundreds of years of American law. For example, phone companies carry the speech of others, and they have never been liable for every drug deal, fraud, or price-fixing scheme transacted on their lines.

Enabling law enforcement and rights-holders easily to flag videos after uploading—rather than punishing intermediaries and forcing pre-screening of all content—properly balances the need to address harmful content with the Internet’s revolutionary ability to empower the average speaker.

Myth 3: Google “aids and abets” rights-violations through certain autocomplete suggestions in search.

Some critics have complained about Google’s autocomplete, a search-engine function that proposes commonly searched phrases or words based on a user’s initial keystrokes. One critic has even gone so far as to claim that Google is somehow “aiding and abetting [crime] by allowing autocomplete to lead users to legally dubious websites and even encourage its users to illegal activity.”

Reality: Google removes many objectionable phrases from autocomplete even though “aiding and abetting” liability cannot conceivably apply here.

Google removes many objectionable phrases from autocomplete, autocomplete does not encourage crime, and the standard for “aiding and abetting” a crime is, understandably, far higher than anything an autocomplete suggestion could meet.

Google removes many objectionable phrases from autocomplete. Autocomplete is a common feature offered by Google, Bing, DuckDuckGo, and Yahoo search engines. Type “Barack” and Google or Bing will suggest “Obama” to complete the query. Technically, autocomplete relies on complex algorithms based on users’ past searches and common searches by other users. A Slate writer explains that “[a]utocomplete is one of those modern marvels of real-time search technology that almost feels like it’s reading your mind.” The writer pointed to two obvious benefits: “efficiency gains of not having to type as much” and suggestions that “can be serendipitous and educational, spurring alternative query ideas.”

Search engines exclude some terms from autocomplete. In doing so, they must balance ease of use and access to information against the principle that they should not make it easier for users to find harmful or illegal activity. Google has explained that search is the “least restrictive” of its services as search results “are a reflection of the content of the web.” Consistent with that philosophy, Google excludes from autocomplete “a narrow class of search queries related to pornography, violence, hate speech, and copyright infringement.” While some rights-holders continue to complain, as early as January 2011, Google announced it was working to modify the algorithm to reduce the appearance of terms that are frequently associated with online piracy. Other search engines also exclude only a small class of terms and phrases, putting the choice in the hands of users. Google and Bing both offer autocomplete features, and their excluded terms have considerable overlap. No evidence suggests that one search engine clearly excludes more terms than another.
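Mechanically, this kind of autocomplete can be pictured as a ranked prefix lookup over aggregated past queries, with an excluded-terms list applied before suggestions are shown. The sketch below is my own simplified illustration under those assumptions; the real ranking signals and excluded-term lists are far more sophisticated and are not public, and the sample data here is invented.

```python
# Simplified, hypothetical sketch of prefix-based autocomplete with an
# excluded-terms filter. Real systems rank with far richer signals
# (freshness, language, personalization) and maintain much larger blocklists.

QUERY_COUNTS = {           # aggregate counts of past user queries (illustrative)
    "barack obama": 9_500_000,
    "barack obama age": 1_200_000,
    "barcelona fc": 4_100_000,
    "banana bread recipe": 2_700_000,
}

EXCLUDED_TERMS = {"free movie downloads"}   # a narrow class of filtered phrases

def autocomplete(prefix: str, limit: int = 3) -> list[str]:
    """Return the most common past queries starting with the prefix,
    skipping anything on the excluded-terms list."""
    prefix = prefix.lower()
    candidates = [
        (count, query) for query, count in QUERY_COUNTS.items()
        if query.startswith(prefix) and query not in EXCLUDED_TERMS
    ]
    return [query for count, query in sorted(candidates, reverse=True)[:limit]]

print(autocomplete("bar"))   # -> ['barack obama', 'barcelona fc', 'barack obama age']
```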

There is no evidence that autocomplete encourages users to engage in illegal activity. Nor does evidence or logic suggest that law-abiding, non-hypnotized users will search out illegal drugs or untrustworthy foreign pirating sites merely because one of the autocomplete suggestions might lead them to choose that autocompleted query, then click on particular results to that query, and then order or download the illegal products they were not already seeking.

Finally, autocomplete clearly does not “aid and abet” criminal activity. To my knowledge, no court has found “aiding and abetting” liability in any case remotely similar to autocomplete suggestions.

Generally, to aid and abet a crime, someone must “in some sort associate himself with the venture, that he participate in it as something that he wishes to bring about, that he seek by his action to make it succeed.” The law is clear that a company does not aid and abet crime even if it is foreseeable that a small portion of its users will use its services for illegal purposes. The same is true when someone merely provides information that may lead to illegal activity.

The Supreme Court and other courts have repeatedly provided “broad latitude of immunity” in these situations. As a result, Exxon is not liable for the acts of gasoline-using arsonists. Ford is not liable for reckless drivers and getaway cars. AT&T is not liable for customers calling prostitutes. Apple is not liable for theater-goers recording new films. Elmer’s is not liable for users who sniff glue.

Finally, companies cannot be liable for aiding and abetting someone who does not commit a crime. Many of the videos on YouTube advocating illegal activity are themselves legal. While it may be illegal to smoke marijuana, it is legal for random people (or famous rock stars) to encourage others to smoke marijuana. In Watts v. United States, the Supreme Court made this principle clear: firing a gun at the president is illegal, but saying that you would like to shoot the president can be perfectly legal, protected political speech. Conduct may be illegal while speech about the conduct is perfectly legal. Where Google and other platforms are hosting or linking to controversial or distasteful speech, that speech is often not illegal.

Myth 4: Google deliberately profits from illegal activities, evidenced by the fact that Google’s platform sometimes has advertisements for illegal products or has lawful advertising accompanying illegal content.

The Digital Citizens Alliance also claims that Google and YouTube are “advertising partners” with counterfeit drug peddlers and copyright infringers. In the case of search results leading to online pharmacies selling counterfeit drugs, some law enforcement officials have used rhetoric suggesting that Google is “an accessory before fact to the sale of counterfeit items.” In short, they argue that Google sometimes places some legitimate advertising alongside illegal videos and that Google sometimes places advertising for illegal products alongside legal videos. They suggest this is evidence that Google aims deliberately to profit from illegal activity.

Reality: Google makes considerable efforts to minimize advertising for illegal products or placing advertising alongside illegal content, though a few outliers fall through the cracks.

This myth is just another variation of earlier arguments that Google should be strictly liable for any content, advertising or otherwise, on its sites. Google is not liable. Nonetheless, Google has robust policies against advertising illegal products and services and against placing ads alongside illegal content. With billions of advertisements, videos, searches, and page views, some abuses slip through the cracks, but Google’s tools enable interested parties to easily flag those abuses for removal.

Google is not liable for the misdeeds of its advertisers, thanks to the First Amendment and the U.S. Supreme Court. In the 1962 case of Manual Enterprises, Inc. v. Day, the Supreme Court held that a small magazine appealing to homosexuals could not be suppressed by the Post Office, even if the Court found the content disgusting and published with “sordid motives.” In the decision, the Court made it clear that even publishers “cannot practicably be expected to investigate each of their advertisers.” If they were required to do so, then publishers “might refrain from accepting advertisements from those whose own materials could conceivably be deemed objectionable.” Unlike the magazine in that case, Google is held to a far lower liability standard than a publisher and has far more advertisers. If Google were liable for every advertisement, then it too would have to reject advertisements from anyone whose legally protected content could even “conceivably” be deemed objectionable.

Nonetheless, Google does not want to advertise products alongside illegal content nor advertise for illegal products. Google adopts practices to minimize the likelihood that it will do so.

On YouTube, the placement of ads with videos depends on a variety of factors. When a YouTube user uploads a video, the user provides metadata about the video (e.g., the video title) and can categorize the video based on its subject matter. This metadata is then used to algorithmically determine which ads are placed with the video. The control afforded to the millions of users uploading videos means that, in some cases, the system will be abused at the initial upload and advertisements will wrongly accompany infringing videos.
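As a rough mental model of that metadata-driven placement, think of it as matching an uploader’s self-declared category and title keywords against advertisers’ targeting criteria. The sketch below is purely illustrative; every name in it is invented, and real ad systems run auctions over many more signals. It also shows why uploader-supplied metadata is exactly the point of abuse: a mislabeled upload can attract legitimate ads.

```python
# Purely illustrative sketch of metadata-driven ad matching. The uploader
# controls the metadata, which is why a deliberately mislabeled or infringing
# video can end up paired with legitimate advertising.

ADS = [
    {"advertiser": "TravelCo", "target_category": "travel", "keywords": {"beach", "flights"}},
    {"advertiser": "GuitarShop", "target_category": "music", "keywords": {"guitar", "lesson"}},
]

def pick_ads(video_metadata: dict) -> list[str]:
    """Match ads whose target category equals the uploader-declared category,
    preferring those that share keywords with the video's title."""
    title_words = set(video_metadata["title"].lower().split())
    matches = [ad for ad in ADS if ad["target_category"] == video_metadata["category"]]
    matches.sort(key=lambda ad: len(ad["keywords"] & title_words), reverse=True)
    return [ad["advertiser"] for ad in matches]

video = {"title": "Beginner guitar lesson: first chords", "category": "music"}
print(pick_ads(video))   # -> ['GuitarShop']
```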

There are mechanisms to address that abuse, however, such as the community flagging system and Content ID. Aggrieved copyright holders and other interested parties are in the best position to help end the monetization of inappropriate content by flagging the offending videos. Rights-holders can choose either to have the video taken down or to authorize the use and share in the advertising revenue generated from the content. Finally, users can apply to become a YouTube partner, which allows them to monetize videos. Becoming a partner requires users to comply with the YouTube Terms of Use and Community Guidelines. Violating those policies can result in termination of the partnership and removal of the content in question. This means there is a built-in safeguard against profiting from illegal or harmful videos.

Search advertising on Google works a bit differently. Google employs highly restrictive policies for its advertising products “because they are commercial products intended to generate revenue.” Google’s policies forbid the advertising of any illegal products (including fake passports, illegal drugs, and counterfeit goods) and restrict the use of terms relating to prescription drugs.

These policies aren’t simply window dressing: Google enforces them, using a combination of automated and manual systems to monitor ad content and identify potential violators. According to Google, in 2011 it disabled over 130 million ads found to be in violation of its various advertising policies, shut down 15,000 accounts that were attempting to advertise counterfeit goods, and closed another 65,000 accounts that were otherwise violating Google’s policies. That same year, Google introduced a complaint form for brand owners to notify Google of ads for counterfeit goods; Google responds to complaints within 24 hours.

Conclusion

The allegations against Google misunderstand basic facts about Google’s practices and rest on mistaken legal theories. Google’s practices balance the rights of billions of users to share and access billions of videos and trillions of websites while minimizing content that violates its guidelines and the law. Moreover, Google should not be liable for any content that slips through the cracks. Such a principle would cripple YouTube, Facebook, Twitter, and other platforms for free expression and political debate. It would also fly in the face of decades of established law that have ensured robust digital speech platforms for millions of Americans.

While I focused on Google, other speech platforms are also working very hard to promote a safer Internet for all users while preserving what makes the Internet so valuable to so many people. Law enforcement, copyright holders, and other interested parties can work cooperatively with these technology platforms rather than rushing to the wrong factual conclusions and making the same tired, flawed arguments to upset established law for speech platforms.

Disclosure: I am a First Amendment lawyer and I advise several companies, including Google, on free expression and public policy.
