HotHead 's Techdirt Comments

Latest Comments (214)

  • Intelligence Assessment Shows Trump Admin’s Venezuelan Gang War Claims Are Lies

    HotHead ( profile ), 23 Apr, 2025 @ 06:03pm

    I neglected to cite my source

    I copy pasted the formal identifiers of the Sedition Act of 1918 from the top of the Wikipedia article. (I also accidentally excluded a closing parenthesis.)

    The Sedition Act of 1918 (Pub. L. 65–150, 40 Stat. 553, enacted May 16, 1918) was an Act of the United States Congress that extended the Espionage Act of 1917 to cover a broader range of offenses, notably speech and the expression of opinion that cast the government or the war effort in a negative light or interfered with the sale of government bonds.[1]
    The Debate and Enactment section ends with:
    The U.S. Supreme Court upheld the Sedition Act in Abrams v. United States (1919),[29] as applied to people urging curtailment of production of essential war materiel. Oliver Wendell Holmes used his dissenting opinion to make a commentary on what has come to be known as "the marketplace of ideas". Subsequent Supreme Court decisions, such as Brandenburg v. Ohio (1969), make it unlikely that similar legislation would be considered constitutional today.
    And the Repeal section says:
    As part of a sweeping repeal of wartime laws, Congress repealed the Sedition Act on December 13, 1920.[4][30][31] In 1921, president Woodrow Wilson offered clemency to most of those convicted under the Sedition Act.[32]

  • Intelligence Assessment Shows Trump Admin’s Venezuelan Gang War Claims Are Lies

    HotHead ( profile ), 23 Apr, 2025 @ 05:30pm

    Section 4, Act of April 20, 1918, 40 Stat. 533
    Also known as The Sedition Act of 1918 (Pub. L. 65–150, 40 Stat. 553, enacted May 16, 1918. Is this Koby's idea of good law? Congress repealed it in 1920. I think I know what's going on. When Koby calls one of his claims a fact check, it's actually the remaining shred of Koby's sanity calling out for help. We hear you, Koby. Thanks for letting us know that Trump's executive order is double illegal.

  • Donald Trump Thinks He Can End The Fentanyl Problem By… Hitting Drug Smugglers With Tariffs

    HotHead ( profile ), 10 Apr, 2025 @ 11:02am

    Trump routinely gives "reasons" that make less sense than "ice cream causes drowning", but there's no shortage of people who sanewash and craft parallel constructions for every terrible decision and pretense Trump and Elon Musk make. That's before you even consider the psychopaths who knowingly cheer on human trafficking, respond to simple facts with long-debunked "nuh-uh"s, and make their whole personality about asking (also debunked) "do you still beat your wife"-type loaded questions.

  • Mississippi Judge Goes Full Prior Restraint, Allows City To Demand Removal Of Op-Ed Criticizing It

    HotHead ( profile ), 24 Feb, 2025 @ 05:42pm

    For sure. Knight First Amendment Institute v. Trump established an even stronger rule regarding government speech on a personal account, so an official government account that blocks comments is an unequivocal First Amendment violation.

  • Back Our Kickstarter For One Billion Users, The Social Media Card Game

    HotHead ( profile ), 24 Nov, 2024 @ 07:05am

    The real-world inspirations for the in-game social media networks seem to be as follows. TapTap is inspired by TikTok. HireMe, LinkedIn. Friendlink, TheFacebook— I mean, Facebook. Skyline, BlueSky. The Hellsite, Twitter (sometimes known as X). The Hellsite is very on the nose, and is more reminiscent of 4chan or Kiwi Farms. Considering that the name and logo of X reminds people of porn sites, perhaps The Hotsite would be a better name. Tangent: Hank Green has a video explaining why he believes that Twitter is the accurate name for the social network Twitter.

  • The American Privacy Rights Act’s Hidden AI Ban

    HotHead ( profile ), 29 Oct, 2024 @ 04:38pm

    This piece is just the same vague gesturing toward “innovation”, from an industry firm that doesn’t actually care about consumers or users or anything outside of profit. Just last year, they wrote a piece called “The Case For Right To Repair Has Not Been Made“.
    Thanks for the context. That article is a huge red flag. It praises section 1201 of the DMCA, which chills not only independent repairs, but also security research, accessibility, and creative remixing of videos.

  • Ctrl-Alt-Speech: Is This The Real Life? Is This Just Fakery?

    HotHead ( profile ), 28 Sep, 2024 @ 03:48pm

    Not related, but here's an AI liability article I found interesting

    Eugene Volokh put out an article about what liability the First Amendment does and doesn't protect AI companies from. I was wondering what Mike Masnick would think about it, but in the meantime I have some half-baked opinions (mostly disagreements) of my own to share. Regarding defamation occurring when, e.g., a user shares a false statement of fact generated by an LLM:

    Naturally, everyone understands that AI programs aren’t perfect. But everyone understands that newspapers aren’t perfect either, and some are less perfect than others—yet that can’t be enough to give newspapers immunity from defamation liability; likewise for AI programs. And that’s especially so when the output is framed in quite definite language, often with purported quotes from respected publications. To be sure, people who are keenly aware of the “large libel models” problem might be so skeptical of anything AI programs output that they wouldn’t perceive any of the programs’ statements as factual. But libel law looks at the “natural and probable effect” of assertions on the “average lay reader,” not at how something is perceived by a technical expert.
    ...
    To be sure, there are some narrow and specific privileges that defamation law has developed to free people to repeat possibly erroneous content without risk of liability, in particular contexts where such repetition is seen as especially necessary. For instance, some courts recognize the “neutral reportage” privilege, which immunizes “accurate and disinterested” reporting of “serious charges” made by “a responsible, prominent organization” “against a public figure,” even when the reporter has serious doubts about the accuracy of the charges. But other courts reject the privilege. And even those that accept it apply it only to narrow situations: Reporting false allegations remains actionable—even though the report makes clear that the allegations may be mistaken—when the allegations relate to matters of private concern, or are made by people or entities who aren’t “responsible” and “prominent.” Such reporting certainly remains actionable when the allegations themselves are erroneously recalled or reported by the speaker.
    I feel as if Volokh's interpretation would allow the following kind of site to be liable for defamation: Keep in mind that people can detect lies about as reliably as a coin flip can. Regarding Section 230:
    A lawsuit against an AI company would thus aim to treat it as a publisher or speaker of information provided by itself. And the AI company would thus itself be a potentially liable “information content provider.” Under Section 230, such providers—defined to cover “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service” (emphasis added)—can be legally responsible for the information they help create or develop.
    ...
    For instance, courts have read § 230 as protecting even individual human decisions to copy and paste particular material that they got online into their own posts: If I post to my blog some third-party-written text that was intended for use on the internet (for instance, because it’s already been posted online), I’m immune from liability. But if instead I myself write a new defamatory post about you, I lack § 230 immunity even if I copied each word from a different web page and then assembled them together: I’m responsible in part (or even in whole) for creating the defamatory information. Likewise for AI programs.
    Volokh thinks that courts should regard an LLM's output as being created in part by the AI company because the AI company made the LLM, and that the AI company can therefore be held at least partially liable if a user shares the LLM's output with other people. Volokh doesn't claim that an AI company categorically loses Section 230 protection due to training or adjusting an LLM, but I feel as if Volokh should've written a specific example of when being the party that trains/adjusts an LLM would be sufficient, rather than necessary but not sufficient, to take on liability for otherwise unprotected LLM outputs that a user chose to share with other people. Regarding prevention of generation of unprotected speech:
    Libel law famously requires “actual malice”—knowledge or recklessness as to falsehood—for lawsuits brought by public officials, public figures, and some others. But while that element wouldn’t be satisfied for many AI hallucinations, it might be satisfied once the person about whom the falsehoods are being generated alerts the AI company to the error. If the AI company doesn’t take reasonable steps to prevent that particular falsehood from being regenerated, then it might well be held liable when the particular hallucination is conveyed again in the future: At that point, the company would indeed know that its software is spreading a particular falsehood. Such knowledge plus failure to act to prevent such repeated spread of the falsehood would likely suffice to show actual malice.
    And regarding prevention of generation of pornographic deepfakes:
    In any event, if the images are indeed constitutionally unprotected, then the developers might potentially be held liable for such output. But any such liability would, I think, require a showing that the AI product developers know their products are being used to create such images and fail to institute reasonable measures that would prevent that result without unduly interfering with the creation of constitutionally protected material.
    I'm hoping that courts will be cautious about finding "failure to act to prevent repeated spread". Preventing an LLM from repeating a bad output means dealing with the general impossibility of content moderation. If an LLM generates millions of outputs a day and 100 outputs get reported for defamation or deepfakes every day, what will courts and legislators expect the AI company to do to handle the large volume? And what will courts think when AI companies encounter the same problems that YouTube does with Content ID, especially the impossibility of detecting slight variations of previous bad outputs? I don't want a repeat of the DMCA safe harbor, which is implemented in a way that incentivizes websites hosting user-generated content to treat the content referred to in an infringement notification as presumptive infringement.

  • Sample Library Company Copyright Strikes YouTuber Over Showing Their ToS

    HotHead ( profile ), 19 Jul, 2024 @ 10:08am

    The wrong basis for copyright

    I can KIND OF see the justification of things like ASTM standards being protected by copyright, since they’re a work product that’s being sold in book and electronic form.
    The "sweat of the brow" doctrine was rejected by the US Supreme Court. The fact that something is a sellable work product does not qualify it for copyright protections.
    Feist Publications, Inc., v. Rural Telephone Service Co., 499 U.S. 340 (1991), was a landmark decision by the Supreme Court of the United States establishing that information alone without a minimum of original creativity cannot be protected by copyright.
    Copyright arises from creative expression. The text of an ASTM standard might contain sufficient creative expression for copyright protections. Additionally, the way in which a collection of such standards is presented (the images, the annotations, the appearances of the covers, etc.) may also have sufficient creative expression. However, for standards that become law (standards incorporated by reference), the public interest in public access to the standards can weigh in favor of fair use in some cases.

  • Utah Locals Are Getting Cheap 10 Gbps Fiber Thanks To Local Governments

    HotHead ( profile ), 16 May, 2024 @ 11:06am

    You're right, but you're also missing the point

    If you don't need 10 Gbps (which is most people right now), then you can get less bandwidth for cheaper. SenaWave, one of the ISPs listed on the UTOPIA site, offers 250 Mbps for $65 and 1Gbps for $70 in supported locations (after factoring in the site's higher-end estimate of additional monthly fees).

  • TikTok Users Challenge Court To Save Their Favorite App

    HotHead ( profile ), 15 May, 2024 @ 12:23pm

    TikTok ban could be made Constitutional: rather than explicitly forcing divestment, the law could use as an enforcement mechanism an exemption from Section 230. So TikTok could be nominally allowed to continue operating in the United States, but it would be counted as a publisher of user content.
    Are you sure you aren't begging the question? What makes your proposed idea not a First Amendment violation? TikTok's moderation decisions about user speech are protected by the First Amendment, and Section 230 makes the First Amendment's protections of moderation a practical reality by providing an early dismissal opportunity for lawsuits that would already violate the First Amendment with respect to editorial actions on third-party speech. If you remove Section 230 protections, then TikTok would go bankrupt defending against those lawsuits. The US TikTok users would lose their speech platform of choice, thereby turning a violation of TikTok's speech rights into a violation of users' speech rights.
    (Obviously the better alternative would be regulating large social media platforms as “Common Carriers” under the Telecommunications Act, but Google and Meta would never allow that to happen.)
    The First Amendment, too, would not allow it to happen. (In theory. Unconstitutional legislation tends to be faster than corrective court cases.) https://www.techdirt.com/2022/02/25/why-it-makes-no-sense-to-call-websites-common-carriers/

  • Italian Government Says Carmaker Can’t Make Its “Milano” Vehicle Outside Of Italy

    HotHead ( profile ), 25 Apr, 2024 @ 07:47am

    If US-made parmesan is sold in the US and not Italy
    At the moment, your premise is wrong. A US-made parmesan sold in the US as a "parmesan" is okay, but selling the same product as a "parmesan" in the European Union is illegal.
    Within the European Union, the term Parmesan may only be used, by law, to refer to Parmigiano Reggiano itself, which must be made in a restricted geographic area, using stringently defined methods. In many areas outside Europe the name Parmesan has become genericised and may denote any of a number of hard Italian-style grating cheeses.[33][34] These cheeses, chiefly from the US and Argentina, are often commercialised under names intended to evoke the original, such as Parmesan, Parmigiana, Parmesana, Parmabon, Real Parma, Parmezan, or Parmezano.[2] After the European ruling that "parmesan" could not be used as a generic name, Kraft renamed its grated cheese "Pamesello" in Europe.[35]

  • Italian Government Says Carmaker Can’t Make Its “Milano” Vehicle Outside Of Italy

    HotHead ( profile ), 25 Apr, 2024 @ 07:36am

    No, Italy does not have fair dealing

    That’s because they don’t have fair use in Italy (that’s a US thing), but rather fair dealing
    No. Italy doesn't have fair dealing either.
    Italian copyright law does not have an equivalent to fair use or fair dealing provisions. Limitations and exceptions are set out individually and are interpreted restrictively by the courts, as one would expect in an author's rights regime.[1]

  • Anti-Porn Clusterfucks: Pornhub Blocks Texas, Indiana Adopts Age Verification

    HotHead ( profile ), 17 Mar, 2024 @ 08:04am

    There is also the question “what is porn?”, as Justice Potter Stewart put it in Jacobellis v Ohio:
    In a concurrence in Jacobellis v. Ohio (1964), the question Justice Potter Stewart was answering with "I know it when I see it" was "What is hard-core pornography?" as compared to "What is non-hard-core (softcore?) pornography?", not "What is porn?" as compared to "What is not porn?" Justice Stewart considered porn to be obscenity, hardcore porn to be unprotected obscenity, and non-hardcore porn to be protected obscenity. In Miller v. California (1973), the majority made a test; anything which failed the test would be obscenity and unprotected speech, while everything else would be not obscenity (and would be protected speech, excepting other ways for speech to be unprotected). In other words, obscenity is unprotected speech by definition and porn in general is not obscenity (regardless of the colloquial definitions of obscenity).
    It all comes down to “states could not ban the sale, advertisement, or distribution of obscene materials to consenting adults”
    No, a US state absolutely can ban distribution of obscenity (obscenity = any speech which fails the Miller test):
    The Miller ruling, and particularly the resulting Miller test, was the Supreme Court's first comprehensive explication of obscene material that does not qualify for First Amendment protection and thus can be banned by governmental authorities. Furthermore, due to the three-part test's stringent requirements, very few types of content can now be completely banned, and material that is appropriate for consenting adults can only be partially restricted per delivery method.[13]

  • MSCHF Asks The Supreme Court To Say Its Parody Of Vans Shoes Is Free Speech

    HotHead ( profile ), 13 Mar, 2024 @ 04:04pm

    Please, anything but that!

    Mr. Congress, can we pass a law banning the prisoners from eating each other consensually?

  • Error Message Exposes Vending Machine’s Use Of Facial Recognition Tech

    HotHead ( profile ), 28 Feb, 2024 @ 10:30am

    But given that there are no vending machines in the bathroom with a camera, what is the concrete articulable “injury” caused by the vending machine camera other than hurt feelings?
    You keep going on about feelings. At least acknowledge this part of my previous comment:
    The mere presence of a capability more privacy-invasive than necessary is the problem. Just as a bathroom doesn’t need a photo-capturing-capable camera inside for the building’s security, a vending machine doesn’t need a face-detection-capable camera for the machine’s security, never mind making sales transactions.
    In a strong privacy legal framework, the burden of justification should be on the advocate of observation to demonstrate that an additional invasive capability is necessary (photo cameras vs. motion detectors in bathrooms, face-detection cameras vs. regular cameras on vending machines), not on the subject of observation to demonstrate that the additional invasive capability is harmful. Privacy is primarily about being able to consent, withdraw consent, and withhold consent to sharing personal data with other people. Hiding personal data from people who would abuse it is secondary but also important. Would you bar people from suing over privacy violations until data brokers have already distributed personal data collected without consent to someone who will actually try to blackmail, impersonate, dox, threaten, rob, etc. the data subjects? That's a bad model. The burden of proof should be on the data collectors and data users to demonstrate that they acted with unambiguous consent or needed to use the data in a specific way to fulfill contractual or legal obligations to the respective data subjects, not on the data subjects to demonstrate that concrete harm will definitely happen to them.

    But here's a generalized concrete harm: people behave differently when they notice or believe that they are being observed. It's the Hawthorne effect, a contributor to chilling effects. If you use personal data that you have no consent to collect or use in such a way, and have no strictly necessary contractual or legal obligation to do so, then you have an unjust power: a power to change the way other people behave, even if the degree to which you can control their behavior is limited.

  • Error Message Exposes Vending Machine’s Use Of Facial Recognition Tech

    HotHead ( profile ), 27 Feb, 2024 @ 04:52pm

    Supposedly the tech acts like a motion sensor, doing nothing more than informing the machine that someone intends to make a purchase. But a motion sensor is way different than a camera with facial recognition tech attached.
    Someone who might be the same Benjamin J. Barber who distributed revenge porn took issue with the above excerpt. In response to Barber's coincidentally-revenge-porn-enabling rant, another commenter mentioned a great example of the problem:
    So I can put a camera in your bathroom is what you are telling me?
    Using a security camera as a motion sensor in a bathroom absolutely should be treated differently from using a mere motion sensor in a bathroom. The mere presence of a capability more privacy-invasive than necessary is the problem. Just as a bathroom doesn't need a photo-capturing-capable camera inside for the building's security, a vending machine doesn't need a face-detection-capable camera for the machine's security, never mind making sales transactions. Anyway, I highly doubt that Invenda's vending machines with face detection cameras are GDPR-compliant. Gender, age, and race easily fall under the GDPR's definition of personal data:
    ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
    Here's the GDPR's definition of processing data (including recording, adaptation, alteration, and use of data):
    ‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction;
    The GDPR applies to processing (including local production of aggregate statistics, the same action or less invasive version of what every analytics service does):
    Furthermore, the GDPR only applies to personal data processed in one of two ways: Personal data processed wholly or partly by automated means (or, information in electronic form); and Personal data processed in a non-automated manner which forms part of, or is intended to form part of, a ‘filing system’ (or, written records in a manual filing system).
    Even if the machines delete direct recordings, the machines still made recordings in the first place, and also extracted data from those recordings. Invenda's vending machines don't ask for consent from each user to record the respective user. Sounds like a GDPR violation to me, but IANAL.

  • Sir, This Is A Supreme Court (Not A Wendy’s)

    HotHead ( profile ), 27 Feb, 2024 @ 03:29pm

    You have a right to speak. You do not have a right to force social media sites to help you speak.

    That’s not — I mean, that is Orwell, right? So, for me, the answer is, for these kind of things like telephones or telegraphs or voluntary communications on the next big telephone/telegraph machine, those kind of private communications have to be able to exist somewhere. You know, the expression like, you know, sir, this is a Wendy’s. There has to be some sort of way where we can allow people to communicate —
    It’s possible he was using it as an example to say that people want places to sound off and to express themselves, as epitomized by that meme. That’s the most generous version of it I can come up with.
    Texas Solicitor General Aaron Nielson, it is not Orwellian for a website owner to control what users can and cannot do on the website. It's not the users' website. Stop spouting fascism. Stop compelling others to let you use their property to distribute your speech. Stop compelling speech. The size of the website does not allow the government to give control of the website to someone other than the website owners. You want a place where people won't tell you "this is a Wendy's"? You don't need a social media site. You can make your own simple website or use a social media website that accepts your speech (such as TWITTER or a Mastodon fork, like TRUTH SOCIAL). Wordpress exists. Neocities exists. Hugo exists. You don't need to learn HTML: Markdown exists. I don't recommend using a proprietary website service like Wix, but I will mention that I had peers who made a website with Wix (no HTML knowledge needed) in middle school. Wix, as the host of the website, had a right to terminate the website at any time. The internet is the public square. No US state has a right to turn a specific website into a public square.

  • Panda Express Opposes Trademark For ‘Trash Panda Vegan’ Food Truck

    HotHead ( profile ), 09 Feb, 2024 @ 09:38pm

    “Panda Restaurant Group owns the trademark for the word ‘Panda’ for use in any restaurant service and have engaged in standard industry practice
    When I read insane statements like that I wish I could declassify- oops, I mean, disbar lawyers with my mind. I certainly hope law schools aren't purposefully teaching students to think that trademark law is "I own words in the dictionary". If your trademark uses regular words then it's kinda your fault if someone else happens to use some of the same words. And trademark confusion over regular words doesn't happen very often anyway.

  • South Korean Man Sentenced For Refusing Military Service, In Part Because He Plays PUBG

    HotHead ( profile ), 07 Feb, 2024 @ 08:10am

    Does the law have common sense, judges?

    A. I play a war game in which my side is not my country. Do I hate my country, judges? B. I play a sci-fi game in which I perform unethical experiments on other characters. Do I support unethical experiments, judges? C. I play a fantasy game in which I own humanoid slaves. Do I support slavery, judges? D. I play a dystopia game in which the premise is that my character willingly accepts a brain parasite or a brain chip. Do I want a chip in my brain, judges? E. I play a religious game in which I exorcise demons with crosses. Am I Christian, judges?

  • Politicians Are Using Kids As Props To Pass Terrible, Harmful Legislation. Don’t Let Them Get Away With It

    HotHead ( profile ), 05 Feb, 2024 @ 05:07pm

    And, apparently got reporters to try to ruin boyd’s reputation:
    I learned this lesson hardcore fifteen years ago when I naively provided a literature review on the risks young people faced to the then-attorney general of Connecticut. He didn’t like what the summation of hundreds of studies showed; he barked at me to find different data. A few months later, I learned that a Frontline reporter was tasked with “proving” that I was falsifying data. After investigating me, she warned me that I had pissed off a lot of powerful people. Le sigh.
    What’s left unsaid in this paragraph is that the Attorney General in question was… Richard Blumenthal. Who is now a Senator from Connecticut and the author and lead sponsor of KOSA.
    Blumenthal wants to find different data, huh? That's as bad as Trump's "find 11,780 votes".
