from the bad-idea,-worse-implementation dept
Following in the footsteps of misguided European lawmakers, California legislators have introduced a time-sensitive “remove speech or else” law targeting social media sites.
They’ve introduced a bill that would give online platforms such as Facebook and Twitter three days to investigate whether a given account is a bot, to disclose that it’s a bot if it is in fact auto-generated, or to remove the bot outright.
The bill would make it illegal for anyone to use an automated account to mislead the citizens of California or to interact with them without disclosing that they’re dealing with a bot. Once somebody reports an illegally undisclosed bot, the clock would start ticking for the social media platform on which it’s found. The platforms would also be required to submit a bimonthly report to the state’s Attorney General detailing bot activity and what corrective actions were taken.
This is ridiculous for a number of reasons. First, it assumes the purpose of most bots is to mislead, hence the “need” for upfront disclosure. The ridiculousness of this faulty premise is only further underscored by a bot created by the legislator behind the bill, Bob Hertzberg. His bot’s bio says [emphasis added]:
I am a bot. Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I’m transparent about being a bot! #SB1001 #BotHertzberg
Hertzberg’s bot must have been made to “misinform and exploit users,” at least according to its own Twitter bio. And yet the account’s tweets appear to disseminate actual, correct info, like subcommittee webcasts and community-oriented announcements. It’s good the bot is transparent. It’s terrible that the transparency immediately follows a blanket claim that automated accounts exist solely to misinform people.
Plenty of automated accounts never misinform or exploit users. Techdirt’s account automatically tweets each newly-published post. So do countless other bots tied into content-management systems. But the bill, and its author’s own words, paint bots as evil, even while deploying a bot in an abortive attempt to make a point.
From there, the bill demands sites create a portal for bot reporting and starts the removal clock when a report is made. User reporting may function better than algorithmic detection (which would put bots in charge of bot removal), but it still puts social media companies in the uncomfortable position of being arbiters of truth. And if they make the “wrong” decision and leave a bot up, the government is free to punish them for noncompliance.
The bill also provides no avenue for those targeted to challenge a bot report or removal. (And no option for sites to challenge the government’s determination that they’ve failed to remove bots.) This is a key omission, one that will lead to unchecked abuse.
Finally, there’s the motivation for the bill. Some of it stems from a desire to punish “fake news,” a term no government has ever clearly defined. Some of it comes from evidence of Russian interference in the last presidential election. But much of the bill’s impetus is tied to vague notions of “rightness.” Hertzberg himself exhumes a long-dead catchphrase to justify his bill’s existence.
“We need to know if we are having debates with real people or if we’re being manipulated,” said Democratic State Senator Bob Hertzberg, who introduced the bill. “Right now we have no law and it’s just the Wild West.”
So, summary executions of bots by social media posse members? Is that the “Wild West” you mean, one historically notorious for its lack of due process and violent overreactions?
Here’s the other excuse for bad lawmaking, via an advocate for terrible legislation.
“California feels a bit guilty about how our hometown companies have had a negative impact on society as a whole,” said Shum Preston, the national director of advocacy and communications at Common Sense Media, a major supporter of Hertzberg’s bill. “We are looking to regulate in the absence of the federal government. We don’t think anything is coming from Washington.”
So, secondhand guilt justifies the direct regulation of third-party service providers? That’s almost worse than no reason at all.
And this isn’t the only bad bot bill being considered. Assemblymember Marc Levine wants all bots to be tied to verified human beings. The same goes for any online advertising purchases. Levine believes his bill will help fight the bot problem, but that belief is predicated on a profound misunderstanding of human behavior.
By identifying bots, users will be better informed and able to identify whether or not the power of a group’s influence is legitimate. This will mitigate the promulgation of misinformation and influence of unauthentic social media campaigns.
Yes, telling people the stuff they think is legitimate isn’t legitimate always results in people ditching “illegitimate” news sources. Especially when that info is coming from a government they don’t like presiding over a state many wish would just fall into the ocean. Trying to fight a bot problem largely associated with alt-right groups with legislation from coastal elites is sure to win hearts and minds.
A bot-reporting portal with no recourse provisions, plus a possible “real name” requirement added into the mix, will become little more than a handy tool for harassers and hecklers. The cost of these efforts will be borne entirely by social media companies, which will also be held responsible for the mere existence of bots the Californian government feels might be misleading its residents. It’s bad lawmaking all around, propelled by misplaced guilt and overstated fears about the democratic process.