InfoEcon's Techdirt Profile

InfoEcon's Comments

  • Oct 31, 2021 @ 07:31pm

    Re: Re: Re: Why should triggering a panic be legal?

    Cathy, thanks for the interesting history, but, honestly, you overstate the case. David has a valid point. The fairest true statement about falsely shouting fire in a theater is that the question has never been adjudicated, not that such speech would (or should) be protected. If it were adjudicated, it would likely fail as protected speech. As proof, look no further than the very case you cite, Brandenburg v. Ohio. Elaborating on the majority decision, Justice Douglas addressed this specific carve-out in his concurrence (with Justice Black further concurring):

    “The line between what is permissible and not subject to control and what may be made impermissible and subject to regulation is the line between ideas and overt acts. The example usually given by those who would punish speech is the case of one who falsely shouts fire in a crowded theatre. This is, however, a classic case where speech is brigaded with action. … They are indeed inseparable and a prosecution can be launched for the overt acts actually caused.” [395 U.S. at 456-457; emphasis added]
    So, judging by the judges who judged it, this would not be protected speech. The reason is that it is both false and panic-inducing: imminent lawless action. Justices Douglas and Black were among the staunchest free-speech absolutists, so their carving out this exception in a landmark case makes the outcome a reasonably sure bet. Speaking personally, not legally, I think the right solution is that (1) government has no business regulating specific speech at this level, but (2) the families of anyone injured or killed by a false shout of fire in a theater have a cause of action proportional to the damage caused. No damage? Then no harm, no foul. Someone died? Then the speaker needs to answer for that. Injured citizens, rather than government censors, ought to have the say.

  • Aug 17, 2021 @ 09:27am

    A solution with a better critique

    Greetings Mike, I’m a fan of your writings, so when they include a critique, I pay attention. Thanks also for acknowledging our prior work, and even your earlier praise of it (https://www.techdirt.com/articles/20090219/0248373834.shtml).

    None of your criticisms, however, addresses the fundamental question: how do you hold a platform accountable for misinformation that it amplifies? The problem with S230 is that by providing (almost) absolute immunity for being an accessory to a crime, it “accessorizes” a lot more crime. The infodemic of anti-vaxx misinformation is a case in point. Platforms don’t produce this content, but they give it reach and influence, and they monetize the engagement it attracts.

    Paraphrasing your conclusion: you mostly assert that the downside of changing S230 outweighs the upside. Notably, you don’t assert that there’s no problem.

    As tech (or econ, or legal) designers, we should always ask the question “can we do better?” Is there a superior design that reconciles these conflicting goals?

    So let me poke a hole in one of your best arguments: that it’s “impossible to do this well at scale.” We agree that checking every single message just isn’t feasible. But that doesn’t mean no better design exists. Let me propose one:

    If we recognize the “infodemic” as a pollution problem, then we can take statistical samples, just as we sample factory air for sulfur dioxide or water for DDT. We don’t measure every cubic centimeter of effluent; that’s just not practical. A doctor doesn’t check your cholesterol by testing all of your blood; she or he takes a sample.
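    To see how little sampling this actually takes, here is a minimal sketch (Python, standard library only) of the textbook sample-size calculation for estimating a proportion. The confidence levels and the plus-or-minus one-point margin of error are illustrative assumptions, not figures from any real audit:

    ```python
    import math

    # Two-sided z-scores for common confidence levels (standard normal
    # quantiles; hard-coded because the stdlib has no inverse normal CDF).
    Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

    def required_sample_size(confidence: float, margin: float, p: float = 0.5) -> int:
        """Number of messages to sample so the estimated misinformation rate
        lands within `margin` of the true rate at the given confidence.
        Normal approximation: n = z^2 * p * (1 - p) / margin^2.
        p = 0.5 is the worst case, giving the largest required sample."""
        z = Z[confidence]
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    # Illustrative audit: estimate a platform's rate to within +/- 1 point.
    for conf in (0.90, 0.95, 0.99):
        print(f"{conf:.0%} confidence -> sample {required_sample_size(conf, 0.01):,} messages")
    # 90% -> 6,766; 95% -> 9,604; 99% -> 16,590 -- tiny next to billions of posts.
    ```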

    The beauty here is that, by the central limit theorem, we can estimate to any desired confidence level how much pollution afflicts a given platform. Do we want 90% confidence? 95%? 99%? We just take a bigger sample. Even if people disagree about the falseness or harm of a specific claim, they agree on average: one study found 95% agreement among fact-checking organizations (https://science.sciencemag.org/content/359/6380/1146). In fact, in computer science it’s possible to produce highly accurate aggregate assessments from deciders with far lower individual agreement than that.
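    That last point, that accurate aggregates can come from only modestly reliable individual judges, is the Condorcet jury theorem at work. A sketch, assuming hypothetical raters who vote independently and are individually right just 70% of the time:

    ```python
    import math

    def majority_vote_accuracy(n_raters: int, rater_accuracy: float) -> float:
        """Probability that a simple majority of independent raters labels a
        claim correctly, when each rater is individually correct with
        probability `rater_accuracy` (Condorcet jury theorem; n_raters odd)."""
        needed = n_raters // 2 + 1  # votes required for a majority
        return sum(
            math.comb(n_raters, k)
            * rater_accuracy**k
            * (1 - rater_accuracy) ** (n_raters - k)
            for k in range(needed, n_raters + 1)
        )

    # Raters who are individually right only 70% of the time:
    for n in (1, 5, 11, 21):
        print(f"{n:>2} raters -> {majority_vote_accuracy(n, 0.70):.3f}")
    # 1 -> 0.700, 5 -> 0.837, 11 -> 0.922, 21 -> 0.974
    ```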

    Under a modified S230 with a duty of care, we would simply hold platforms accountable for pollution levels above a reasonable threshold. Facebook, for example, already reports figures such as the incidence of cancer misinformation on its platform (https://www.facebook.com/AMJPublicHealth/posts/3316836535095688). Now we just hold them publicly accountable for those numbers. This isn’t impossible at all; we just need to connect the existing dots.
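    For completeness, here is a sketch of how an auditor might decide that accountability question statistically: a one-sided test of whether a platform’s sampled pollution rate exceeds the duty-of-care threshold. The 2% threshold, the 9,604-post sample, and the 230 flagged items are all hypothetical numbers chosen for illustration:

    ```python
    import math

    # One-sided z critical values for common confidence levels.
    Z_ONE_SIDED = {0.90: 1.282, 0.95: 1.645, 0.99: 2.326}

    def exceeds_threshold(flagged: int, sampled: int, threshold: float,
                          confidence: float = 0.95) -> bool:
        """One-sided test: does the sample show, at the given confidence,
        that the platform's true misinformation rate exceeds the regulatory
        threshold? Normal approximation to the binomial under the null
        hypothesis that the true rate equals the threshold."""
        p_hat = flagged / sampled
        se = math.sqrt(threshold * (1 - threshold) / sampled)
        return (p_hat - threshold) / se > Z_ONE_SIDED[confidence]

    # Hypothetical audit: 230 of 9,604 sampled posts flagged (~2.4%),
    # against a 2% duty-of-care threshold.
    print(exceeds_threshold(flagged=230, sampled=9604, threshold=0.02))  # True
    ```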

    We’ve tried to think carefully about these issues and avoid polemics. I have a partial working paper, “Platforms, Free Speech & The Problem of Fake News,” with more nuance (https://www.dropbox.com/s/ypphlhw43efnslj/Platforms%2C%20Free%20Speech%20%26%20the%20Problem%20of%20Fake%20News%20v0.3%20-%20dist.pdf?dl=0). Honestly, I have not shared it widely beyond friends and family, as there is much more to be done, but this hue and cry prompts me to disclose it earlier than I’d planned. Your further thoughts are welcome and invited.

    To succeed, a good critique needs to convince us that either (a) no problem exists or (b) no better design exists. Respectfully, the above critiques fall short on both counts.