I would be interested to see what a decentralized Stone Temple Pilots looks like.

You might like Velvet Revolver, Talk Show or Army of Anyone.
It's not our rights in the e pluribus unum sense. The Red Tribe believes in rights only for people who deserve to have them.
Techies and journalists aren't real?
Ever since the Peter Thiel Memorial Bridge opened in 2009, the libertarians of the pre-internet era and the dot-com boom have slouched over to neoreaction. Not a hard pivot, but a slouch. Hans-Hermann Hoppe and Curtis (Mencius Moldbug) Yarvin resolved the contradictions that kept such a low ceiling on libertarianism as a political force.
That particular comic strip is hands down the best explanation of free speech.
Reminder: Most appeals to free speech are moral hostage-taking.
Not related to this specific thread unless one of them was involved, but Karl Bode's famous word led someone to generate AI art of brunchlords.
What do tech and communism have in common? Both operate on five-year plans. Tech is a hype cycle with a lifespan of about five years. AI is the tech at the center of the current hype cycle, picking up after cryptocurrency/NFTs. Before that, it was the gig economy/"Uber for [thing]", and so on. AI is the magic word of the moment to cadge money out of investors.
"Tech press" is a misnomer because very, very few tech journalists can gather news independently of the sources they cover. Most of the tech press exists as a service industry to further the tech hype cycle or the interests of its newsmakers.
Reminder: When you are for absolute free speech, this is what you are supporting.
It's bad form to mix in my comments along with Mick's without attribution. I will reply to the one about my original comment by Anonymous Coward @ May 1, 2024 5:10 a.m.
What I described has happened several times in real life, to the ridicule of the companies caught serving up AI-generated content. From a Cory Doctorow post in August 2023:

1. An Ottawa travel listicle recommended tourists try the food bank ("Go on an empty stomach!").

2 and 3. Other travel listicles recommended some of the most basic food items everyone is familiar with, like hamburgers and seafood, then explained to readers the dictionary definitions of what hamburgers and seafood are. This was Microsoft's AI, too.

I've also seen examples shared on social media, like an article about a football game that exposed the AI as having been trained on the box score, because it recapped the game chronologically. What does a human sportswriter do? Organize the article by reporting the outcome of the game and naming the players and the plays that led to the outcome. An AI article by the Columbus Dispatch just named the two teams and wrote in the style of a book report by a kid who didn't read the book and had to finish the assignment 15 minutes before class began. The AI just mentioned the two teams playing, saucing the copy with intensifying adverbs, but since it couldn't watch the game itself, it made no mention of the players or the plays.

It's not just journalism, where news institutions are foundering and management will desperately grasp at "journalism without journalists" to keep the lights on. According to a CIO article:

1. Air Canada was taken to court by a customer who asked a chatbot about bereavement fare discounts, was given erroneous information, and was then denied the discount by the airline, which was ordered to pay the customer.

2. A lawyer used ChatGPT to cite precedents to make his case, but the LLM made up at least a half-dozen nonexistent cases.

3. AI-enabled decision-making tools displayed discrimination against Black people, older applicants, and women.
Two weak-tea rebuttals are "But it will get better in the future" (and it's always five years away) and the rank relativism of "human decision makers fail just as much". Neither bolsters the merits of AI.

"Fears that AI will serve as a wholesale replacement of labor (eliminate entire categories of workers) lead to ill-informed decision makers (read: bosses) buying AI for that dubious promise."

If AI is truly overhyped but actually bad, then what should happen is this: the AI underperforms (because it is bad and not intelligent enough), the company fails to get its job done and is outperformed by more efficient and competent rivals, forcing it to "fire" the AI and hire back everyone who was let go, lest it lose clients and go bankrupt.
There's no such thing as an unbiased assumption. All assumptions are colored by the biases of the observer.
There's a bigger, less surmountable issue beyond to paywall or not to paywall: once the zone has been flooded with shit, can the zone be unflooded? And there's a second problem besides the feculent flood: the context collapse brought on by large internet communities and fueled by algorithms. Fitzwishing (phonetic of FTZWS) convinced the world that objective truth is merely a value judgment, no better or worse than falsehoods or bullshit. Nothing has to be true or false anymore. Nothing is true and everything is possible, and anything is true if you want it to be true.
What's the UK process for pulling a charity's registration number? Is there something similar to the IRS process for challenging a 501(c)(3) for misrepresentation?
It could also be that "effective altruism" as a notion is a misdirection tactic to sow FUD (fear, uncertainty and doubt). Like the Coffee Talk lady on "Saturday Night Live" would say, "Effective altruism is neither effective nor altruism. Talk amongst yourselves." Like what happens when effectiveness and altruism are at odds? That's kind of the point. Effectiveness can be an excuse to withhold altruism. And altruism can be used to quash debates over effectiveness if the motives are pure.
Putting the full quote in doesn't change the context. The original point still stands. (Though "Of course, if you take the arguments about x-risk seriously" does double duty: "of course" intensifies the argument, and "if you take the arguments about x-risk seriously" poisons the well.) The quote asserts that alleviating global poverty is folly, which is the unserious position. Then there's the matter of conflating something concrete, like global poverty, with something abstract. With global poverty, we have material conditions that can be observed and evaluated. We also have material choices and policy options to reduce poverty, which can themselves be observed and evaluated. Existential risk mitigation is abstract: it must be formulated, argued, debated and evaluated before it can be reified into an actual condition that can be observed and evaluated. (That debate has physical consequences, as the time and resources devoted to fully forming a theory compete with claims for reducing global poverty that have already passed through that process. Given those constraints of time and resources, the debate functions as a stalling tactic and can and should be recognized as bad faith.)
That's where criti-hype comes in. It's useful to cool the temperature of the conversation. Recognize that "AGI is going to take over the world" is a sales pitch, not an argument. Bosses are smart enough to understand but dumb enough to fall for it. Incentives govern boss behavior. The reward-punishment structure compels them to go all-in on labor-devouring AI-as-a-solution.
Mainstream debates around AI frame the two poles of thought as effective acceleration (e/acc) versus effective altruism (EA). To play in the AI sandbox means choosing one of the extremes or staking out a middle position. Both poles and the middle positions all serve to inflate the hype cycle around AI. Lee Vinsel calls it criti-hype, where criticism of a technology has the effect of bolstering its hype or escalating its street credibility. Fears that AI will serve as a wholesale replacement of labor (eliminate entire categories of workers) lead to ill-informed decision makers (read: bosses) buying AI for that dubious promise. Boss brain: You're telling me this AI will mean I never have to deal with payroll or HR again? I'll take 10! These debates serve as a sales pitch for AI because they hype AI beyond its capabilities and leave organizations unprepared to clean up the consequences. Cory Doctorow has a great explanation of criti-hype.
The strategic ambiguity Mollie Gleiberman describes is something I noticed in broader rightwing rhetoric. I've called it space roaching. It's to take a word with a commonly understood meaning, like say "freedom" or "family", hollow it out and substitute a completely different meaning, then use the coded word and pretend like a substitution never occurred in the first place. Space roaching was inspired by the first "Men in Black" movie. The archvillain was a roachlike being that crash lands on Earth, and his first interaction was with Vincent D'Onofrio's misanthropic farmer character. The roach devours the farmer from the inside out, but wears the farmer's skin as a disguise throughout the movie.
Mastodon and Bluesky might be doing well because they are devolutionary. The dinosaurs went extinct but birds and reptiles survive. Internet communities are re-fragmenting. There's a graphic going around showing that young adults (18-29) are disengaging from social media in general. Twitter engagement is plummeting hard, and even young-trending networks like TikTok, Snap and Instagram are seeing declines. Facebook's demographics trend old and are closer to cable news viewership than the population as a whole. Facebook's audience is shrinking in developed countries too.