California’s Online Transparency Law Hands Scammers a Blueprint to Flood Your Feed

Last September, California Governor Gavin Newsom signed AB 587, an editorial transparency law intended to combat “hate and disinformation” online. The law requires social media companies to report detailed data on how they identify offensive content, especially as related to “disinformation, harassment and extremism.” But rather than reducing such content online, AB 587 will make it impossible for online services to deal with bad actors—disinformation peddlers, harassers, spammers and scammers alike.

To comply with AB 587, online services must make detailed disclosures about their terms of service. The definition of “terms” is vague and broad and must include an explanation of “the user behavior and activities that are permitted…and the user behavior and activities that may subject the user…to being actioned.” This section also requires online services to publish any changes in the way they interpret their terms before implementing a change.

Further, online services must submit detailed quarterly reports to the California Attorney General about how they moderate content. (The first is due on Jan. 1, 2024.) Reports must include explanations of: 

  • “How automated content moderation systems enforce terms of service,” 
  • “How the social media company would remove individual pieces of content, users, or groups that violate the terms of service,” and
  • “The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned.” 

Taken together, these requirements make it impossible for online services to effectively respond to scammers and spammers. By making it illegal for online services to adapt to ever-changing threats without first telling the bad actors exactly how they plan to do it, AB 587 hands bad actors a blueprint to game the system and harm other users online.

Transparency laws like California’s are often overlooked in the debate over content moderation regulations. Disclosure requirements are shrugged off as harmless when compared to laws that directly force services to host offensive content. This misunderstanding is not unreasonable. After all, many familiar transparency requirements, like nutrition labels, have proved both uncontroversial and beneficial.

However, if we extend this analogy, AB 587’s requirements would force food manufacturers to allow vandals and miscreants to tamper with their product during the manufacturing process. This is because the detailed disclosures AB 587 mandates will enable bad actors to reverse engineer online services’ entire content moderation processes—inevitably leading to a proliferation of precisely the type of hate and disinformation California would like to limit. Whereas food labeling supports consumer safety, California’s transparency bill directly harms it.

Online services that host user-generated content of any kind face a constant battle against bad actors. In fact, the vast majority of content online services remove is fake accounts, spam and sexually explicit material. To stop the deluge of malicious content, online services invest heavily in content moderation tools that function as security systems.

Malicious actors will always try to evade detection. But for the same reason banks do not disclose details of how they detect fraud, online services do not disclose the detailed editorial data their security systems use to identify scams, spam and other violative content. Information that is routinely kept confidential includes the number of times a user must post to be designated as spam and the methods used to identify certain abusive content. Yet by forcing services to disclose “how automated content moderation systems enforce terms of service” and “the number of users that viewed the content before it was actioned,” AB 587 effectively hands spammers and scammers a blueprint to circumvent security.
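To see why this kind of disclosure is dangerous, consider a minimal sketch of a rate-threshold spam check, the sort of confidential rule the law would force into the open. The threshold value, names and logic here are hypothetical illustrations, not any real platform’s detection system:

```python
# A deliberately simplified sketch of a rate-based spam check.
# The threshold, names, and logic are hypothetical, not any
# real platform's detection system.
from collections import defaultdict

SPAM_POST_THRESHOLD = 20  # assumed: max posts per hour before a user is flagged

posts_this_hour: defaultdict[str, int] = defaultdict(int)

def record_post(user_id: str) -> bool:
    """Record a post and return True if the user should be flagged as spam."""
    posts_this_hour[user_id] += 1
    return posts_this_hour[user_id] > SPAM_POST_THRESHOLD
```

The moment a mandated disclosure reveals that the cutoff is 20 posts per hour, evasion is trivial: a spammer spreads activity across accounts that each post 19 times and never trips the check. The same reverse engineering applies to every detection rule a service is forced to publish in advance.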

There are valid reasons to support transparency practices, but AB 587 is a botched attempt at achieving them. Any successful effort to promote transparency in content moderation must recognize the unique security threats social media services face. States seeking to promote safety online should take care not to follow California’s example.