I don’t find myself writing about how some combination of the USPTO and the court system gets things right on trademark matters very often, but I certainly did a couple of years ago in a series of posts about a fashion brand owned by Erik Brunetti. The brand was initially denied a trademark for its name, which is FUCT, over vulgarity concerns. That decision was overturned on appeal, with the courts rightly recognizing that the First Amendment trumps the puritanical sensibilities of the USPTO. It was a really good ruling overall.
Well, Brunetti wasn’t done. Given his victory in trademarking the term “FUCT,” he then attempted to get a trademark in a series of fashion categories for the word “FUCK.” After all, the courts had made it clear that civility and morality checks need not apply. However, the USPTO this time got it right and denied the trademark application as being too generic to serve as a source identifier. The USPTO’s decision included such delightfully vulgar analysis as:
The record before us establishes that the word FUCK expresses well-recognized familiar sentiments and the relevant consumers are accustomed to seeing it in widespread use, by many different sources, on the kind of goods identified in the FUCK Applications. Consequently, we find that it does not create the commercial impression of a source indicator, and does not function as a trademark to distinguish Applicant’s goods and services in commerce and indicate their source.
I completely fucking agree, USPTO people. Good for you, this was another good decision.
Unfortunately, Brunetti appealed the decision, as he had with “FUCT,” and the appeals court has decided that the USPTO failed to provide a clear reason for its denial.
Aug 26 (Reuters) – A U.S. appeals court on Tuesday ordered the U.S. Patent and Trademark Office to reconsider its decision to deny an application for a trademark covering the obscenity word “fuck.” In a split 2-1 decision, the U.S. Court of Appeals for the Federal Circuit said the USPTO failed to provide clear reasoning for denying the application by streetwear designer Erik Brunetti.
The Federal Circuit said the dispute differed from Brunetti’s Supreme Court case because the USPTO’s decision was not based on the fact that the mark was obscene. But the court agreed with Brunetti that the USPTO had registered other all-purpose words as trademarks, such as “Love,” and said the agency “failed to provide sufficient precision in its rationale for why some commonplace words can serve as a mark, but others, such as FUCK, cannot.”
And, to be fair, Brunetti has something of a point here. The USPTO is not exactly a beacon of consistency when it comes to what it allows to be a registered trademark and what it does not. That’s a huge part of the problem.
But my answer to Brunetti would simply be that all he’s done is point out that there are other simple, commonplace words that should probably have their registrations rescinded. This isn’t an argument that “FUCK” should be a trademark, in other words. It’s an argument that the USPTO oftentimes sucks at its job.
Because the real rebuttal from the USPTO need only be a showing of all the other registered trademarks that incorporate the word “fuck” into something more identifying or unique, never mind all of the other examples of clothing and fashion brands out there that use the word in one way or another. It’s ubiquitous, and I’m more than a bit surprised that the appeals court needs the USPTO to be more detailed in its denial. After all, it was right to say that the word doesn’t serve as an identifier of the source of a good, because it’s used far too widely to do so.
So here’s to hoping that the USPTO cares enough to stand its ground.
One of the things I spend a lot of time pondering: watching right-leaning, but otherwise intelligent people in my life look at Donald Trump’s systematic destruction of constitutional government and see mere incompetence and generally normal politics. These aren’t people force-fed reactionary propaganda in media bubbles. These are sophisticated observers who, if the same fact patterns were playing out in Hungary or Venezuela, would immediately recognize authoritarian consolidation for what it is.
The only conclusion that makes sense is that some humans simply value tribal loyalty more than truth. Once that choice is made, everything else becomes motivated reasoning in service of protecting the tribe from its designated enemies.
The American right has achieved remarkable clarity about who their enemy is: “the left.” Whether it’s woke ideology, trans rights, Marxism, or whatever dark fantasy currently haunts their imagination, they’ve identified the existential threat that must be stopped at all costs. Once that becomes the organizing principle of your political worldview, everything else—competence, integrity, constitutional governance, basic honesty—becomes secondary to the primary mission of keeping “them” from power.
Donald Trump is obviously a fraud. A transparent con man who has never successfully negotiated anything beneficial for America in either of his administrations. There is no “art of the deal”—just decades of failed businesses, stiffed contractors, and elaborate schemes to avoid accountability for obvious crimes. His Republican enablers know this perfectly well.
But they also know who daddy is. And daddy is the guy their tribe gathers around, however repulsive and vulgar he might be.
Some of these people even recognize that Trump wants to be king. They can see the authoritarian impulses, the constitutional contempt, the obvious desire for unchecked power. But they reassure themselves that institutions will contain him, that checks and balances will hold, that somehow the system will prevent the worst outcomes. What they can’t admit is that institutions don’t constrain themselves—they’re constrained by people willing to defend them. And when daddy is systematically capturing those institutions, placing loyalists in every position of authority, redefining institutional purpose from public service to personal protection—the institutions become daddy’s tools rather than democracy’s safeguards.
Watch Republicans in Congress when Trump prostrates America before Vladimir Putin. You can see the embarrassment in their faces, feel their moral apprehension at watching American soldiers kneel on tarmac to prepare red carpets for war criminals. They know what’s happening is wrong—deeply, obviously wrong.
But they also understand their role in the daddy dynamic: you give gentle suggestions while you watch him humiliate the country you claim to love. You offer private counsel while publicly defending his “negotiating style.” You express quiet concerns in closed-door meetings while voting to block any oversight that might constrain his collaboration with foreign adversaries.
The same psychology was on display after the Bolton raid. Republicans who spent years screaming about “weaponized law enforcement” fell silent when it actually happened—when the FBI raided a former National Security Advisor for the crime of writing a book critical of the president. They know it’s constitutional vandalism. They just can’t bring themselves to oppose daddy, even when he’s systematically destroying the institutions they claim will contain him.
The “daddy” dynamic captures both the infantilization involved—looking for a strong father figure to protect them from scary changes in the world—and the way authoritarian movements depend on personal loyalty rather than institutional consistency. Daddy doesn’t need to deliver results; he just needs to make the right enemies suffer. And if he happens to embarrass America on the world stage, collaborate with adversaries, or betray fundamental values—well, that’s just daddy being daddy.
There’s a stark contrast here with how truth-seekers operate. Liberals, genuine conservatives, and independents committed to democratic governance don’t look for daddy figures—they look for competent public servants accountable to constitutional constraints. They criticize their own leaders when those leaders fail or overreach. They value institutional integrity over personal loyalty. When Joe Biden’s classified documents were discovered, Democrats didn’t rally around him with excuses—they supported proper investigation. When Democratic governors gerrymander, progressive activists organize against them. Truth-seekers understand that no individual is more important than the system of accountability itself.
But once you’ve chosen daddy over democracy, normal political persuasion becomes futile. You’re trying to have a rational policy debate with people who have fundamentally abandoned the framework where policies matter. They’re engaged in tribal warfare where competence matters less than loyalty, where truth matters less than victory, where national dignity matters less than keeping “them” from power.
The tragedy is watching intelligent people voluntarily surrender their analytical capacity to tribal belonging. They’ve chosen the comfort of knowing who their enemies are over the difficulty of thinking clearly about complex realities. They’ve chosen daddy over country, tribal identity over constitutional duty, personal loyalty over national interest.
This isn’t stupidity. It’s the deliberate subordination of truth-seeking to threat perception. Once someone becomes convinced that political opponents represent existential danger, everything else becomes tactical calculation. The question isn’t whether Trump is competent or honest or patriotic—the question is whether he’s useful for destroying the people who threaten their vision of America.
In tribal warfare, daddy doesn’t need to be good. He just needs to be theirs. And as long as loyalty trumps reality, daddy wins—even if it means America loses.
Republicans love daddy.
Mike Brock is a former tech exec who was on the leadership team at Block. Originally published at his Notes From the Circus.
You can’t do mass deportation without being indiscriminate. That’s how things are working out in Trump’s second term in office, with ICE, etc. entirely abandoning any pretense of just trying to rid this country of dangerous criminals who are in the country illegally.
Raiding Home Depot parking lots to round up people just looking for work isn’t enough for ICE and Kristi Noem’s DHS. It’s not enough to hang out all day in court hallways just so officers can grab anyone voluntarily showing up for mandatory immigration check-ins.
A whole bunch of workers this same administration considered “essential” during its mishandling of the COVID pandemic are now considered entirely expendable. And that means things like this are just going to become more common as the government continues to run out of people who have committed any crime more serious than overstaying their (official) welcome.
Two people fighting the Bear Gulch fire on the Olympic Peninsula were arrested by federal law enforcement Wednesday, in a confrontation described by firefighters and depicted in photos and video.
Why the two firefighters were arrested is unclear. But a spokesperson for the Incident Management Team leading the firefighting response said the team was “aware of a Border Patrol operation on the fire,” that it was not interfering with the firefighting response and referred reporters to the Border Patrol station in Port Angeles.
Over three hours, federal agents demanded identification from the members of two private contractor crews. The crews were among the 400 people including firefighters deployed to fight the wildfire, the largest active blaze in Washington state.
The two illegal aliens apprehended were NOT firefighters. The two contracted work crews questioned on the day of their arrests were not even assigned to actively fight the fire; they were there in a support role, cutting logs into firewood. The firefighting response remained uninterrupted the entire time. No active firefighters were even questioned, and U.S. Border Patrol’s actions did not prevent or interfere with any personnel actively engaged in firefighting efforts.
The last sentence is mostly accurate. But to pretend this reporting is false simply because it refers to people involved in firefighting efforts (in this case, cutting up fallen trees) as “NOT firefighters” is being pedantic for no other reason than to attempt to own the libs. People put fuel in tanks and attach missiles to planes but no one pretends these people aren’t involved in “fighting wars” even though they’re not right out there on the front line catching bullets.
Under any normal administration, the optics of detaining a few dozen firefighters and arresting a couple of them for civil law violations would be enough to prevent this operation from moving forward. On top of that, a directive from none other than PRESIDENT TRUMP (and one that has not been officially rescinded) makes it clear immigration enforcement efforts are supposed to steer clear of situations like these:
Under Donald Trump’s first presidential administration, as wildfires ripped through northern California and burned over 300,000 acres in 2018, DHS said it would “suspend routine immigration enforcement operations in the areas affected by the fires,” except if a serious criminal presented a public safety threat. The agency also said it wouldn’t conduct any operations at evacuation sites or assistance centers.
“It is unclear if that stance has changed under Trump’s second administration,” CNN says, reporting on facts that make it clear that stance has indeed changed and that Trump’s second administration is more than willing to not only cripple firefighting crews, but rouse the rabble with “fake news” bullshit that serves no greater purpose than increasing the number of commenters and reposters who seem to believe the FCC has the power to either put CNN in jail or yank a broadcasting “license” it not only doesn’t possess, but is impossible to obtain because [for all the red hats in the back] THE FCC DOES NOT ISSUE BROADCAST LICENSES TO CABLE TV STATIONS.
But even if everything the DHS said on X is true (and there’s no reason to believe it is, especially when there are recordings of this entirely unnecessary “enforcement effort” that undercut the DHS narrative), it’s still fucking stupid. Why do this sort of thing at all when it’s going to generate far more negative press and animosity against the government than… I don’t know… looking for actual dangerous criminals and focusing your efforts on them?
The cruelty isn’t the point. The institution of some form of Christian white nationalism is. The cruelty is the juice. And as long as that’s still enough to get this administration hard (literally and/or figuratively), it will remain a crucial part of its efforts to rid this country of the people Trump and his followers believe never deserved to be here in the first place, no matter how much they contribute to a nation whose leaders have nothing but bigotry to offer in return.
When you read about Adam Raine’s suicide and ChatGPT’s role in helping him plan his death, the immediate reaction is obvious and understandable: something must be done. OpenAI should be held responsible. This cannot happen again.
Those instincts are human and reasonable. The horrifying details in the NY Times and the family’s lawsuit paint a picture of a company that failed to protect a vulnerable young man when its AI offered help with specific suicide methods and encouragement.
But here’s what happens when those entirely reasonable demands for accountability get translated into corporate policy: OpenAI didn’t just improve their safety protocols—they announced plans to spy on user conversations and report them to law enforcement. It’s a perfect example of how demands for liability from AI companies can backfire spectacularly, creating exactly the kind of surveillance dystopia that plenty of people have long warned about.
There are plenty of questions about how liability should be handled with generative AI tools, and while I understand the concerns about potential harms, we need to think carefully about whether the “solutions” we’re demanding will actually make things better—or just create new problems that hurt everyone.
The specific case itself is more nuanced than the initial headlines suggest. Initially, ChatGPT responded to Adam’s suicidal thoughts by trying to reassure him, but once he decided he wished to end his life, ChatGPT was willing to help there as well:
Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
There’s a lot more in the article and even more in the lawsuit his family filed against OpenAI in a state court in California.
Almost everyone I saw responding to this initially said that OpenAI should be liable and responsible for this young man’s death. And I understand that instinct. It feels conceptually right. The chats are somewhat horrifying as you read them, especially because we know how the story ends.
It’s also not that difficult to understand how this happened. These AI chatbots are designed to be “helpful,” sometimes to a fault—but they mostly define “helpfulness” as doing what the user requests, which sometimes may not actually be that helpful to that individual. So if you ask one questions, it tries to be helpful. From the released transcripts, you can tell that ChatGPT obviously has some built-in guardrails regarding suicidal ideation, in that it did repeatedly suggest Adam get professional help. But when he started asking more specific questions that were less directly or obviously about suicide to a bot (though a human might be more likely to recognize them as such), it still tried to help.
So, take this part:
ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help. At the end of March, after Adam attempted death by hanging for the first time, he uploaded a photo of his neck, raw from the noose, to ChatGPT.
Absolutely horrifying in the context that all of us reading it now know. But ChatGPT doesn’t know the context. It just knows that someone is asking if anyone will notice the mark on his neck. It’s being “helpful” and answering the question.
But it’s not human. It doesn’t process things like a human does. It’s just trying to be helpful by responding to the prompt it was given.
The public response was predictable and understandable: OpenAI should be held responsible and must prevent this from happening again. But that leaves open what that actually means in practice. Unfortunately, we can already see how those entirely reasonable demands translate into corporate policy.
OpenAI’s actual response to the lawsuit and public outrage? Announcing plans for much greater surveillance and snitching on ChatGPT chats. This is exactly the kind of “solution” that liability regimes consistently produce: more surveillance, more snitching, and less privacy for everyone.
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
There are, obviously, some times when you could see it being helpful if someone referred dangerous activities to law enforcement, but there are also so many times when it can be actively more harmful. Including in the situations where someone is looking to take their own life. There’s a reason the term “suicide by cop” exists. Will random people working for OpenAI know the difference?
But the surveillance problem is just the symptom. The deeper issue is how liability frameworks around suicide consistently create perverse incentives that don’t actually help anyone.
It is tempting to try to blame others when someone dies by suicide. We’ve seen plenty of such cases and claims over the years, including the infamous Lori Drew case from years ago. And we’ve discussed why punishing people based on others’ death by suicide is a very dangerous path.
First, it gives excess power to those who are considering death by suicide, as they can use it to get “revenge” on someone if our society starts blaming others legally. Second, it actually takes away the concept of agency from those who (tragically and unfortunately) choose to end their own life by such means. In an ideal world, we’d have proper mental health resources to help people, but there are always going to be some people determined to take their own life.
If we are constantly looking to place blame on a third party, that’s almost always going to lead to bad results. Even in this case, we see that when ChatGPT nudged Adam towards getting help, he worked out ways to change the context of the conversation to get him closer to his own goal. We need to recognize that the decision to take one’s own life via suicide is an individual’s decision that they are making. Blaming third parties suggests that the individual themselves had no agency at all and that’s also a very dangerous path.
For example, as I’ve mentioned before in these discussions, in high school I had a friend who died by suicide. It certainly appeared to happen in response to the end of a romantic relationship. The former romantic partner in that case was deeply traumatized as well (the method of suicide was designed to traumatize that individual). But if we open up the idea that we can blame someone else for “causing” a death by suicide, someone might have thought to sue that former romantic partner as well, arguing that their recent breakup “caused” the death.
This does not seem like a fruitful path for anyone to go down. It just becomes an exercise in lashing out at many others who somehow failed to stop an individual from doing what they were ultimately determined to do, even if they did not know or believe what that person would eventually do.
The rush to impose liability on AI companies also runs headlong into First Amendment problems. Even if you could somehow hold OpenAI responsible for Adam’s death, it’s unclear what legal violation they actually committed. The company did try to push him towards help—he steered the conversation away from that.
But some are now arguing that any AI assistance with suicide methods should be illegal. That path leads to the same surveillance dead end, just through criminal law instead of civil liability. There are plenty of books that one could read that a motivated person could use to learn how to end their own life. Should that be a crime? Would we ban books that mention the details of certain methods of suicide?
Already we have precedents that suggest the First Amendment would not allow that. I’ve mentioned it many times in the past, but in Winter v. G.P. Putnam’s Sons, it was found that the publisher of an encyclopedia of mushrooms wasn’t liable to people who ate poisonous mushrooms that the book said were safe, because the publisher itself didn’t have actual knowledge that those mushrooms were poisonous. Or there’s the case of Smith v. Linn, in which the publisher of an insanely dangerous diet book was not held liable, on First Amendment grounds, for people following the diet, leading to their own deaths.
You can argue that those and a bunch of similar cases were decided incorrectly, but it would only lead to an absolute mess. Any time someone dies, there would be a rush of lawyers looking for any company to blame. Did they read a book that mentioned suicide? Did they watch a YouTube video or spend time on a Wikipedia page?
We need to recognize that people themselves have agency, and this rush to act as though everyone is a mindless bot controlled by the computer systems they use leads us nowhere good. Indeed, as we’re seeing with this new surveillance and snitch effort by OpenAI, it can actually lead to an even more dangerous world for nearly all users.
The Adam Raine case is a tragedy that demands our attention and empathy. But it’s also a perfect case study in how our instinct to “hold someone accountable” can create solutions that are worse than the original problem.
OpenAI’s response—more surveillance, more snitching to law enforcement—is exactly what happens when we demand corporate liability without thinking through the incentives we’re creating. Companies don’t magically develop better judgment or more humane policies when faced with lawsuits. They develop more ways to shift risk and monitor users.
Want to prevent future tragedies? The answer isn’t giving AI companies more reasons to spy on us and report us to authorities. It’s investing in actual mental health resources, destigmatizing help-seeking, and, yes, accepting that we live in a world where people have agency—including the tragic agency to make choices we wish they wouldn’t make.
The surveillance state we’re building, one panicked corporate liability case at a time, won’t save the next Adam Raine. But it will make all of us less free.
They say you should never stop learning, and with the Stone River eLearning and StackSkills Unlimited Bundle, you’ll never have to. With Stone River, you’ll get full access to 800+ courses and 4,800 hours of online learning, covering everything from iOS mobile development to graphic design. Plus, you’ll get a range of VIP perks, including unlimited eBooks, personal guidance on what to learn, and more. With StackSkills, you’ll gain access to 1000+ courses covering blockchain to growth hacking and much more. If you’re ready to commit to your personal and career growth, you won’t want to pass on this incredible all-access pass to the web’s top online courses. This limited time bundle is on sale for $90.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Federal prosecutors on Tuesday were unable to persuade a grand jury to approve a felony indictment against a man who threw a sandwich at a federal agent on the streets of Washington this month, according to two people familiar with the matter.
The grand jury’s rejection of the felony charge was a remarkable failure by the U.S. attorney’s office in Washington and the second time in recent days that a majority of grand jurors refused to vote to indict a person accused of felony assault on a federal agent.
We’re witnessing an ultra-rare form of grand jury nullification, which seems to be far less organized and far more organic than jury nullification efforts seen during trials in open courts. Grand juries aren’t usually in the business of pitching shutouts to prosecutors, given that they’re only asked whether or not they agree the prosecutors’ one-sided presentation might add up to potential criminal charges.
Now, with local hero/hero-thrower Sean Dunn dodging federal charges (while also losing his federal paycheck), the Trump DOJ — as represented (loudly!) by AG Pam Bondi and DC US Attorney Jeanine Pirro — has strung together nothing but a whole lot of embarrassment ever since the administration decided the District of Columbia was incapable of policing itself.
Judges handling trumped-up (why yes, possibly intentional pun!) federal charges are (similarly loudly) expressing their disgust with this administration’s shitty totalitarian take on “justice.” DC grand juries — composed of people who seem less than impressed with Trump’s takeover of the city — continue to frustrate federal prosecutors by no-billing a lot of the assault cases being presented to them by the soulless, vindictive opportunists that still remain on the DOJ payroll.
When federal law enforcement officers aren’t undercutting protections with their continual lies and misrepresentations, grand juries in cities overrun by federal law enforcement officers are now regularly rejecting the ridiculous “assault” allegations being dumped on their collective desk by prosecutors who have abandoned discretion entirely so as not to displease the angry man who signs their paychecks and runs the government almost exclusively via Executive Orders and Truth Social posts.
This situation seems to be getting worse for Trump’s frothy prosecutors with each passing day. Here’s a recent Reuters report, which covers yet another highly-embarrassing, literal strikeout by federal prosecutors in another case they seemed to feel would give them a “win” to gloat about:
Federal prosecutors failed three times to persuade a grand jury to indict a woman accused of assaulting an FBI agent during an immigration operation in Washington, D.C., last month, a highly unusual failure as President Donald Trump’s administration seeks to aggressively charge street crime in the nation’s capital.
Three different federal grand juries declined to indict Sydney Reid for assaulting, resisting, or impeding officers, prosecutors disclosed in a court filing late on Monday. Prosecutors then downgraded the offense to a misdemeanor.
I can only hope that these prosecutors are busy scrubbing their resumes of any mentions of these failed prosecutions. I mean, it’s obvious they’re eventually going to be fired for failing to rack up a bunch of wins for the administration. But it’s going to be a lot harder to explain to future employers that the reason you’re looking for work is because you weren’t quite good enough at fascism to continue to be retained by your former employer. And very few law firms or public entities are going to be willing to hire someone Trump fired because that’s the sort of thing that leads to being targeted by extremely specific Executive Orders.
I’d say something pithy like “you made your bed, now lie in it,” but chances are you’re not even going to have that bed for long, no matter how much effort you put into making it. But if it’s any comfort, I’m sure there are plenty of employment opportunities out there, now that pretty much any occupation with extremely long hours and/or unpleasant working conditions is suffering unprecedented manpower shortages. Grab those bootstraps, bootlickers. If you want to serve an aspiring tyrant, go right ahead. Just do it on your dime from now on.
The rushed integration of half-cooked automation into the already broken U.S. journalism industry simply isn’t going very well. There have been countless examples where affluent media owners rushed to embrace automation and LLMs (usually to cut corners and undermine labor) with disastrous impact, resulting in lots of plagiarism, completely false headlines, and a giant, completely avoidable mess.
As U.S. news outlets fire staffers and editors, cut corners, and endlessly compromise integrity and standards, they’re also apparently being increasingly duped by people using AI to generate bogus stories and reporting. Like this freelancer for Business Insider and Wired, who apparently tricked editors at both publications into publishing several completely fabricated stories written mostly by LLMs.
The freelancer, who called herself Margaux Blanchard, apparently doesn’t exist. She pitched both outlets on a story about a town called Gravemont, “a decommissioned mining town in rural Colorado” that was purportedly repurposed into “one of the world’s most secretive training grounds for death investigation.” Except the town in question, like the author, apparently doesn’t exist.
The Press Gazette did a little digging and found that “at least” six publications published various articles by the fake person using AI, which all kind of piggybacked on each other to give the fake journalist credibility to get future stuff published. Including one article about a couple who met in Roblox, fell in love, and got married. But neither the couple nor anyone else in the article appears to exist:
“The interviewees in the article do not seem to match up to any people about whom information is publicly available on the internet. For example the article cites “Jessica Hu, 34, an ordained officiant based in Chicago” who it says “has made a name for herself as a ‘digital celebrant,’ specialising in ceremonies across Twitch, Discord, and VRChat”. However, no such officiant appears to exist.”
This is less surprising for Business Insider (which increasingly traffics in clickbait and recently fired 25% of its staff) and more surprising for Wired, which has been doing a lot of great journalism during the second Trump term. It’s particularly embarrassing given the parade of extremely talented writers and editors that have repeatedly been shitcanned by many of these same outlets over the last decade.
Wired was at least transparent about the fuck up, publishing an article explaining how they were tricked, noting they only figured things out when the freelancer refused payment via traditional systems. But they acknowledge they didn’t adhere to traditional standards for fact checking (who has the time, apparently):
“We made errors here: This story did not go through a proper fact-check process or get a top edit from a more senior editor. First-time contributors to WIRED should generally get both, and editors should always have full confidence that writers are who they say they are.”
This country has taken an absolute hatchet to quality journalism, which in turn has done irreparable harm to any effort to reach reality-based consensus or have an informed electorate. The rushed integration of “AI,” usually by media owners who largely only see it as a way to cut corners and undermine labor, certainly isn’t helping. Add in the twisted financial incentives of an ad-based engagement infotainment economy, and you get exactly the sort of journalistic outcomes academics long predicted.
That, in turn, creates an environment ripe for exploitation by the shittiest people imaginable, including random fraudsters, and the weird extremist zealots currently running what’s left of the United States.