Governor Jerry Brown recently signed S.B. 1001, a new law requiring all “bots” used for purposes of influencing a commercial transaction or a vote in an election to be labeled. The bill, introduced by Senator Robert Hertzberg, originally included a provision that could have been abused as a censorship tool, threatened online anonymity, and resulted in the takedown of lawful human speech. EFF urged the California legislature to amend the bill and worked with Senator Hertzberg's office to ensure that the bill’s dangerous elements were removed. We’re happy to report that the bill Governor Brown signed last week was free of the problematic language.
This is a crucial victory. S.B. 1001 is the first bill of its kind, and it will likely serve as a model for other states. Here’s where we think the bill went right.
First, the original bill targeted all bots, regardless of what a bot was being used for or whether it was causing any harm to society. This would have swept up one-off bots used for parodies or art projects—a far cry from the armies of Russian bots that plagued social media prior to the 2016 election or the spambots deployed at scale for fraud or commercial gain. It’s important to remember that bots often represent the speech of real people, processed through a computer program. The human speech underlying bots is protected by the First Amendment, and such a broadly reaching bill raised serious First Amendment concerns. An across-the-board bot-labeling mandate would also predictably lead to demands for verification of whether individual accounts were controlled by actual people, which would result in piercing anonymity. Luckily, S.B. 1001 was amended to target the harmful bots that prompted the legislation—bots used surreptitiously in an attempt to influence commercial transactions or how people vote in elections.
Second, S.B. 1001’s definition of “bot”—“an automated online account where all or substantially all of the actions or posts of that account are not the result of a person”—ensures that simple technological tools like vacation responders and scheduled tweets won’t be unintentionally swept in. The definition was previously limited to online accounts automated or designed to mimic the account of a natural person, which would have covered parody accounts that involved no automation at all while missing auto-generated posts from fake organizational accounts. This was fixed.
Third, earlier versions of the bill required that platforms create a notice and takedown system for suspected bots, which would have predictably caused innocent human users to have their accounts labeled as bots or deleted altogether. The provision, inspired by the notoriously problematic DMCA takedown system, required platforms to determine, within 72 hours of an account being reported, whether to remove the account or label it as a bot. On its face, this may sound like a positive step in improving public discourse, but years of attempts at content moderation by large platforms show that things inevitably go wrong in a panoply of ways. As a preliminary matter, it is not always easy to determine whether an account is controlled by a bot, a human, or a “centaur” (i.e., a human-machine team). Platforms can try to guess based on the account’s IP addresses, mouse pointer movement, or keystroke timing, but these techniques are imperfect. They could, for example, sweep in individuals using VPNs or Tor for privacy. And accounts of people with accessibility needs who use speech-to-text input could be mislabeled by a mouse or keyboard heuristic.
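To make the point concrete, here is a purely illustrative sketch of the kind of crude signal-scoring a platform might rely on to guess whether an account is automated. It is not drawn from the bill or from any platform’s actual system; every field name and threshold is hypothetical, but the failure mode it demonstrates is the one described above.

```python
# Illustrative only: a naive bot-detection heuristic and how it misfires.
# All signals and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    posts_per_hour: float         # average posting rate
    mouse_events_per_post: float  # pointer activity seen while composing
    keystroke_variance_ms: float  # variability in keystroke timing
    ip_is_known_exit_node: bool   # IP matches a Tor/VPN exit-node list


def looks_like_bot(s: AccountSignals) -> bool:
    """Naive scoring: each 'suspicious' signal adds a point."""
    score = 0
    if s.posts_per_hour > 20:            # posts faster than a typical human
        score += 1
    if s.mouse_events_per_post < 5:      # little or no pointer movement
        score += 1
    if s.keystroke_variance_ms < 10:     # unnaturally regular typing rhythm
        score += 1
    if s.ip_is_known_exit_node:          # traffic routed through Tor or a VPN
        score += 1
    return score >= 2


# A human who posts through Tor and relies on speech-to-text input
# generates almost no mouse or keyboard telemetry and gets flagged anyway.
human_with_accessibility_needs = AccountSignals(
    posts_per_hour=3,
    mouse_events_per_post=0,
    keystroke_variance_ms=0,
    ip_is_known_exit_node=True,
)
print(looks_like_bot(human_with_accessibility_needs))  # True: a false positive
```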
This is not far-fetched: bots are getting increasingly good at sneaking their way through Turing tests. And particularly given the short turnaround time, platforms would have had little incentive to get it right—to ensure that a human reviewed and verified every decision their systems made to take down or label an account—when simply taking an account offline would have fulfilled any and all legal obligations.
What’s more, any such system—just like the DMCA—would be abused to censor speech. Those seeking to censor legitimate speech have become experts at figuring out precisely how to use platforms’ policies to silence or otherwise discredit their opponents on social media. The targets of this sort of abuse have often been exactly the voices the supporters of S.B. 1001 would likely want to protect—including Muslim civil rights leaders, pro-democracy activists in Vietnam, and Black Lives Matter activists whose posts were censored due to efforts by white supremacists. It is naive to think that online trolls wouldn't figure out how to game S.B. 1001’s system as well.
The takedown regime would also have been hard to enforce in practice without unmasking anonymous human speakers. While merely labeling an account as a bot does not pierce anonymity, platforms might have required identity verification before a human could challenge a decision to take down their account or label it as a bot.
Finally, as enacted, S.B. 1001 targets large platforms—those with 10 million or more unique monthly United States visitors. The problems this new law aims to solve are caused by bots deployed at scale on large platforms, and limiting the law to large platforms ensures that it will not unduly burden small businesses or community-run forums.
As with any legislation—and particularly legislation involving technology—it is important that policymakers take the time to think about the specific harms they seek to address and tailor legislation accordingly, so as to avoid unintended negative consequences. We thank the California legislature for hearing our concerns and doing just that with S.B. 1001.