Back in July, Reuters released a bombshell report documenting how Tesla not only spent a decade falsely inflating the range of their EVs, but created teams dedicated to bullshitting Tesla customers who called in to complain about it. If you recall, Reuters noted how these teams would have a little, adorable party every time they got a pissed off user to cancel a scheduled service call. Usually by lying to them:
“Inside the Nevada team’s office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.”
The story managed to stay in the headlines for all of a day or two, quickly supplanted by gossip surrounding a non-existent Elon Musk vs. Mark Zuckerberg fist fight.
But here in reality, Tesla’s routine misrepresentation of their product (and almost joyous gaslighting of their paying customers) has caught the eye of federal regulators, who are now investigating the company for fraudulent behavior:
“federal prosecutors have opened a probe into Tesla’s alleged range-exaggerating scheme, which involved rigging its cars’ software to show an inflated range projection that would then abruptly switch to an accurate projection once the battery dipped below 50% charged. Tesla also reportedly created an entire secret “diversion team” to dissuade customers who had noticed the problem from scheduling service center appointments.”
This pretty clearly meets the threshold definition of “unfair and deceptive” under the FTC Act, so this shouldn’t be that hard of a case. Of course, whether it results in any sort of meaningful penalties or fines is another matter entirely. It’s very clear Musk historically hasn’t been very worried about what’s left of the U.S. regulatory and consumer protection apparatus holding him accountable for… anything.
Still, it’s yet another problem for a company that’s facing a flood of new competitors with an aging product line. And it’s another case thrown in Tesla’s lap on top of the glacially moving inquiry into the growing pile of corpses caused by obvious misrepresentation of undercooked “self driving” technology, and an investigation into Musk covertly using Tesla funds to build himself a glass mansion.
So, how is it that cops still think the “odor of marijuana” allows them to search vehicles? At best, it means they’re tossing cars because they observed a misdemeanor offense. At worst, they’re tossing cars because someone possessed an entirely legal substance.
Sure, cops can still go after impaired drivers. But the odor of marijuana is rarely indicative of anything, especially when it’s pretty much legal pretty much everywhere.
The state of Minnesota recently legalized recreational marijuana use/possession. Though the state won’t be moving forward with recreational weed sales until 2025, medical marijuana has been legal since 2014. This legalization of marijuana — even on a limited scale — means the “odor of marijuana” is no more indicative of illegal activity than, say, the “odor of gasoline” when it comes to traffic stops. (h/t West Central Tribune)
The ruling [PDF] issued by the state’s top court (one that affirms the rulings handed down by the previous two courts handling this case) says what everyone but cops is thinking: the odor of a legal substance does not allow cops to engage in warrantless searches of vehicles.
A Litchfield (MN) police officer pulled over Adam Torgerson because he believed a light bar mounted on the grille of Torgerson’s car was “too bright.” Already we’re wading deep into subjective waters. And, of course, it wasn’t the real reason for the stop. The real reason was to try to find some reason to search the car.
The officer stated that he smelled marijuana and asked Torgerson if there was any reason for the odor. Torgerson answered no, stated he did not have marijuana on him, and denied ever having marijuana in the vehicle.
The officer and Torgerson spoke briefly about the vehicle’s light bar before the officer returned to his squad car with Torgerson’s license and registration. While the officer verified Torgerson’s license and registration, a second officer arrived on the scene. The first officer explained to the second officer that he thought he smelled marijuana coming from the vehicle and that Torgerson denied possessing marijuana. The second officer approached the vehicle and spoke briefly with Torgerson and his wife before asking if there was marijuana in the vehicle, noting that he and his partner could both smell marijuana coming from inside the vehicle. The couple, again, denied possessing marijuana, but Torgerson admitted to smoking marijuana in the distant past. The second officer stated that the marijuana odor gave them probable cause to search the vehicle and directed everyone to exit the vehicle.
Despite both officers claiming they smelled marijuana (and swearing in court they smelled marijuana), no marijuana was found during the warrantless search.
The first officer searched the vehicle and found a film cannister, three pipes, and a small plastic bag in the center console. The plastic bag contained a powdery, white substance, and the film cannister contained a brown crystal-like substance. A field test of the brown crystal-like substance tested positive for methamphetamine. The officers arrested Torgerson for possession of a controlled substance after he admitted ownership of the contraband.
To be clear, meth (whether smoked or not) does not smell like marijuana. These officers simply claimed they smelled marijuana and claimed that was all the permission they needed (either from Torgerson or the US Constitution) to search the car.
Both cops testified (vaguely) they could smell weed when they approached the car that contained no weed. Neither officer was able to recall whether they noted any signs of impairment in Torgerson.
The trial court suppressed the evidence from the warrantless car search and dismissed the charges. The state appealed. The state appellate court took a look at the case and arrived at the same conclusion. The state appealed again.
Three strikes. You’re out.
It is undisputed that the only indication that evidence of a crime or contraband may be found in Torgerson’s vehicle was the odor of marijuana emanating from the vehicle. The first officer testified that he “could smell a strong odor of burnt marijuana” emanating from the vehicle, that he could not smell the odor before approaching the vehicle, and that the odor’s strength ranked as a five on a scale of one to ten. The second officer testified that he “could immediately [smell] the odor of marijuana coming from inside the vehicle,” the odor “was strong enough that [he] immediately recognized it when [he] got to the window,” and that the odor “definitely wasn’t the faintest” marijuana odor he had ever smelled, but “it definitely wasn’t the strongest.” Neither officer articulated any other circumstance contributing to their probable cause analysis.
There was nothing in Torgerson’s actions to give suspicion that he was under the influence while driving, no drug paraphernalia or other evidence to indicate that the marijuana was being used in a manner, or was of such a quantity, so as to be criminally illegal, and no evidence showing that any use was not for legal medicinal purposes. In the absence of any other evidence as part of the totality of the circumstances analysis, the evidence of the medium-strength odor of marijuana, on its own, is insufficient to establish a fair probability that the search would yield evidence of criminally illegal drug-related contraband or conduct.
The evidence that was rejected two decisions ago is rejected yet again. And now it appears the only valid criminal act is these officers’ violations of Torgerson’s Fourth Amendment rights.
Where weed is legal, it’s ridiculous to assume the odor of a legal substance — without any other evidence of criminal activity — is probable cause. At some point, no cop will be able to use this convenient excuse to engage in constitutional violations. Let’s hope that point is sooner, rather than later.
We’ve spilled a great deal of ink discussing the GDPR and its failures and unintended consequences. The European data privacy law was ostensibly built to protect the data of private citizens, and was also expected to result in heavy fines for primarily American internet companies; it has mostly failed to do either. While the larger American internet players have the money and resources to navigate the GDPR just fine, smaller companies and innovative startups can’t. The end result has been to harm competition, harm innovation, and build a scenario rife with harmful unintended consequences. A bang-up job all around, in other words.
And now we have yet another unintended consequence: hacking groups are beginning to use the GDPR as a weapon to extort ransom money from private companies. You may have heard that a hacking group calling itself Ransomed.vc is claiming to have compromised all of Sony. We don’t yet have proof that the hack is that widespread, but hacking groups generally don’t lie about that sort of thing, since doing so would ruin their “business” plan. Ransomed.vc has also claimed that if a buyer isn’t found for Sony’s data, it will simply release that data on September 28th. So, as to what they have, I guess we’ll just have to wait and see.
The hack was reported by Cyber Security Connect, which said that a group calling itself Ransomed.vc claimed to have breached Sony’s systems and accessed an unknown quantity of data. “We have successfully compromissed [sic] all of Sony systems,” Ransomed.vc wrote on its leak sites. “We won’t ransom them! we will sell the data. due to sony not wanting to pay. DATA IS FOR SALE … WE ARE SELLING IT.”
The site said the hackers posted some “proof-of-hack data” but described it as “not particularly compelling,” and also said that the file tree for the alleged hack looks small, given the group’s claim that it had compromised “all of Sony’s systems.” A price for the hacked data isn’t posted, but Ransomed.vc did list a “post date” of September 28, which is presumably when it will release the data publicly if no buyers are found.
But what really caught my attention was the description of how this particular group goes about issuing threats to its victims in order to collect ransoms. Part of the group’s reputation is that it compromises its victims, hunts for GDPR violations, and then sets ransom demands that cost less than the GDPR violation fines would.
While the hackers say they’re not going to ransom the data, Ransomed.vc apparently does have a history of doing so, with a unique twist: Cybersecurity site Flashpoint said in August that Ransomed takes “a novel approach to extortion” by using the threat of the European Union’s General Data Protection Regulation (GDPR) rules to convince companies to pony up. By threatening to release data that exposes companies to potentially massive GDPR fines, the group may hope to convince them that paying a little now is better than paying a whole lot later.
“The group has disclosed ransom demands for its victims, which span from €50,000 EUR to €200,000 EUR,” Flashpoint explained. “For comparison, GDPR fines can climb into the millions and beyond—the highest ever was over €1 billion EUR. It is likely that Ransomed’s strategy is to set ransom amounts lower than the price of a fine for a data security violation, which may allow them to exploit this discrepancy in order to increase the chance of payment.”
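The economics Flashpoint describes are easy to sanity-check. A quick sketch, using the figures quoted above plus GDPR’s statutory ceiling (Article 83(5) caps fines at €20 million or 4% of global annual turnover, whichever is higher):

```python
# All figures in EUR. Ransom range and record fine are from Flashpoint's
# report; the €20M floor-of-the-ceiling is GDPR Article 83(5).
ransom_low, ransom_high = 50_000, 200_000
statutory_ceiling = 20_000_000       # minimum "serious violation" ceiling
record_fine = 1_000_000_000          # "the highest ever was over €1 billion"

# Even the top ransom demand is only 1% of the €20M ceiling...
print(ransom_high / statutory_ceiling)   # 0.01
# ...and a rounding error next to the record fine.
print(record_fine / ransom_high)         # 5000.0
```

Which is exactly the discrepancy Ransomed.vc is betting on: the ransom is priced to look cheap next to the regulatory downside.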
And so because of the mess that the GDPR is, combined with its remarkable level of fines, the end result is that in some respects the EU has empowered rogue hacking groups to act as its enforcement wing for GDPR. And that both sucks and certainly isn’t what the EU had in mind when it came up with this legislative plate of spaghetti.
Frankly, this has some parallels to other unintended boondoggles we’ve seen. What is making the hacking industry such a rich endeavor? Well, in part it’s the cyber-insurance industry and its habit of paying out the bad actors because it’s cheaper than helping their customers recover from ransomware and other attacks. All of which encourages more hacking groups to compromise more people and companies. GDPR appears to now operate in the same way for bad actors.
Well-meaning or otherwise, when legislation purporting to protect private data and interests instead proves to be a weapon in the hands of the very people most interested in compromising those private data and interests, it’s time to scrap the thing and send it back to the shop to be rebuilt, or discarded.
As to what this Sony hack actually is, for that we’ll have to wait and see.
Accessing consensually created and distributed online pornography is a human right. Do you know why? The consensual production and viewing of porn online is a protected form of sexual expression between two or more adults.
Laugh your asses off, sure. But there is a point to my ludicrous statement. It’s not about porn per se. Accessing content freely on the internet, including porn, is in many respects a human right.
The freedom of the internet and access to its content is at risk, especially in the United States.
Religious conservatives and far-right populists wish to suspend First Amendment protections for all online content dealing with LGBTQ+ subject matter.
They view adult entertainment content that’s consensually produced, distributed, and viewed as pure obscenity. And they wish to block a platform’s right to moderate content and stand by its user base. By no means is that freedom. And by no means is this content technically “porn.”
Countries like China, Russia, Iran, and North Korea block or heavily restrict access to the web.
These governments demonize content they view as counter-revolutionary, haram, or a product of Western supremacy over a nation’s culture.
With the passage of the Online Safety Bill, the United Kingdom chose to undermine encryption and to moderate content through a controversial “legal but harmful” doctrine.
State legislatures and executive branches controlled predominantly by some of the hardest right flanks of the Republican Party across the United States are headed down similar paths.
Florida, Missouri, Texas, and Utah, among other states, have chosen to levy content restrictions on public libraries and schools.
Advocacy groups, journalists, researchers, and I have pointed out how book bans, content restrictions, and fights against teachers have added a new dimension to the “everything is porn” belief in modern culture that several high-profile, far-right conservatives openly maintain.
Utah and Arkansas implemented age verification mandates to access mainstream social media platforms. Utah, Arkansas, Louisiana, Texas, Virginia, Mississippi, and Montana all have age verification requirements for adult content on the books, too. And it doesn’t stop with just hard-right Republicans. Hard-left flanks of the Democratic Party are not much better.
A lawmaker in Maine wants to implement age verification requirements after adopting the Nordic model of partial decriminalization of sex work that is overwhelmingly proven to make the profession much more dangerous. Plus, 16 states in the union still characterize pornography as a “public health” crisis while now lumping LGBTQ+ rights and content into a singular target of what these states erroneously view as “pornographic material.” This is the environment.
Whether it’s having equitable access to social media or a middle school student’s right to read a young adult novel by a queer author, the ability to access and consume information some consider to be pornographic, including legally produced porn itself, is genuinely something all human beings should have. Attempts to restrict this online material harm people of all ages, not just adults.
Michael McGrady covers the tech side of the online porn business. He is a contributing editor at AVN.com.
What is it with real-life stories matching satirical TV shows lately? We just had a story match one from The Office, and now we’ve got one (that’s much dumber) copied from a Futurama episode about how dating robots will lead to the downfall of civilization:
First of all, most of the article is just warmed over stale leftovers from previous moral panics about how porn was causing dudes to not want real girlfriends, or how video games were removing men’s interest in sex. None of that was ever true, and it’s not true now. I dare you to find anyone who says that they wouldn’t prefer a real human relationship to an AI girlfriend.
But Vittert, who apparently is a professor of data science, is sure it’s happening. The article spends four paragraphs saying that an AI girlfriend “seems so ridiculous,” then explaining why it “might sound enticing,” because you can set your own preferences. But, um, that’s… not why people actually date. Or likely why they are interested in an AI companion.
It seems likely that the reason many are interested in AI companions is loneliness. But, there’s little evidence anyone is using it as a substitute for a human companion. It’s there for those who are lonely and need someone to talk to, and a virtual AI one is better than nothing.
Yet, Vittert (again, who apparently teaches data science) takes this to mean that men are going for AI girlfriends instead of real girlfriends, and therefore, they’re not making babies. And without babies, there will be no one to pay for Medicare or Social Security.
While the concept of an AI girlfriend may seem like a joke, it really isn’t that funny. It is enabling a generation of lonely men to stay lonely and childless, which will have devastating effects on the U.S. economy in less than a decade.
They are choosing AI girlfriends over real women, meaning they don’t have relationships with real women, don’t marry them and then don’t have and raise babies with them. America desperately needs people to have more babies, but all the signs are pointing toward fewer relationships, fewer marriages and fewer babies.
I know that there are some people (hi Elon!) who keep insisting we need more babies, but… there is basically nothing scientific that supports this argument. The population of the earth continues to grow. We are not at risk of running out of people.
And, if the argument is just that, say, the US needs more people, there’s an easy way to do that: lessen our ridiculous restrictions on immigration.
Either way, there is no way that “AI girlfriends” are leading people to have fewer babies. I guarantee you that sex with an actual human being is way, way, way better than sexting with an imaginary companion. No one is “remaining childless” because they think they’d prefer a bot on a phone to a real human being.
Also, for a “data scientist,” you’d think this argument would be supported with actual data. Except the data that is included is the kind you use to obfuscate a point, rather than strengthen one.
She points to Pew’s regular study of how many young people say they’re single, noting that way more young men say they are than young women:
Let’s look at the hard numbers. More than 60 percent of young men (ages 18-30) are single, compared to only 30 percent of women the same age. One in five men report not having a single close friend, a number that has quadrupled in the last 30 years. The amount of social engagement with friends dropped by 20 hours per month over the pandemic and is still decreasing.
So, I hate that I have to explain this to an actual professor of data science, but, um, if the women aren’t single, then there’s less of a problem on the baby front, because they’re the ones who make the babies. But really, the discrepancy in the numbers (you can see the actual data) has a much more logical explanation than “AI girlfriends,” and it is… that men and women view relationships differently. A far more reasonable reading of why 63% of 18- to 29-year-old men, but only 34% of 18- to 29-year-old women, say they are single is that some of the women in that age group consider themselves in a relationship with men who think they’re actually casually dating around.
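That explanation doesn’t require any AI at all, just a definitional mismatch. A toy model (all numbers hypothetical, chosen only to reproduce the Pew percentages) shows how differing views of the same relationships can produce the gap:

```python
# Toy cohort: 100 men and 100 women (hypothetical, for illustration only).
women_partnered = 66  # women who say they're in a relationship -> 34% single
men_partnered = 37    # men who say they're in a relationship   -> 63% single

# The gap can exist without any surplus of unattached men: these are men
# whom some woman counts as a partner but who call it "casually dating."
definitional_gap = women_partnered - men_partnered

print(100 - women_partnered)  # 34 (% of women reporting "single")
print(100 - men_partnered)    # 63 (% of men reporting "single")
print(definitional_gap)       # 29 relationships counted by only one side
```

No AI girlfriends needed to make the arithmetic work out.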
And, yes, that could be considered kinda sad. But it has nothing to do with AI girlfriends, and it’s hard to see how it has any impact on the likelihood of babies, let alone on Social Security and Medicare, as the article suggests. I mean, basic logic suggests the final sentence below has fuck all to do with everything that precedes it:
Put another way, we don’t have enough people to work, and therefore we won’t be able to pay our bills, not just to other countries, but to ourselves. We spent more than $1.6 trillion in 2021 on Medicare and Medicaid, with the number of Americans on Medicare expected to increase by 50 percent by 2030, to more than 80 million people. But over the same period, we will have only 10 million more Americans joining the workforce.
And that is just health care. In 1940, there were 42 workers per beneficiary of Social Security. Today, there are only 2.8 workers per beneficiary, and that number is getting smaller. We are going broke, and the young men who will play a huge role in determining our nation’s future are going there with AI girlfriends in their pockets.
Again, if our concern is not enough people to work, and not enough people contributing to social safety nets, immigration is right there. AI has nothing to do with it. Also, if this is such a concern now, why are you sharing trends from decades ago when AI companions really only became a thing in the past year?
And, must we even get into how wrong this article is about how AI works?
By definition, the AI learns from your reactions and is capable of giving you exactly what you want to hear or see, every single time.
AI might learn from you, but, um, it is not capable of “giving you exactly what you want to hear or see, every single time.” Especially not as an alternative to an actual live human being.
Look, I get it, there are all sorts of moral panics we’re hearing about AI these days, but can we at least keep them in the realm of possibility, and not in the form of “Futurama, but real”?
Two years ago, the Government Accountability Office (GAO) released its initial review of federal use of facial recognition tech. That report found that at least half of the 20 agencies examined were using Clearview’s controversial facial recognition tech.
A follow-up released two months later found even more bad news. In addition to widespread use of Clearview’s still-unvetted tech, multiple DHS components were bypassing internal restrictions by asking state and local agencies to perform facial recognition searches for them.
On top of that, there was very little oversight of this use at any level. Some agencies, which first claimed they did not use the tech, updated their answer to “more than 1,000 searches” when asked again during the GAO’s follow-up.
While more guidelines have been put in place since this first review, it’s not clear those policies are being followed. What’s more, it appears some federal agencies aren’t ensuring investigators are properly trained before setting them loose on, say, Clearview’s 30+ billion image database.
For instance, here’s the FBI’s lack of responsibility, which gets highlighted on the opening page of the GAO report.
FBI officials told key internal stakeholders that certain staff must take training to use one facial recognition service. However, in practice, FBI has only recommended it as a best practice. GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service.
The FBI told the GAO it “intends” to implement a training requirement. But that’s pretty much what it said it would do more than a year ago. Right now, it apparently has a training program. But that doesn’t mean much when hardly anyone is obligated to go through it.
This audit may not have found much in the way of policies or requirements, but it did find that the agencies it surveyed prefer to use the service offered by an industry pariah rather than spend taxpayers’ money on services less likely to make them throw up in their mouths.
Yep. Six out of seven federal agencies prefer Clearview. The only outlier is Customs and Border Protection, although that doesn’t necessarily mean this DHS component isn’t considering adding itself to a list that already includes (but is not limited to) the FBI, ATF, DEA, US Marshals Service, Homeland Security Investigations, and the US Secret Service.
We also don’t know how often this tech is used. And we don’t know this because these federal agencies don’t know this.
Six agencies with available data reported conducting approximately 63,000 searches using facial recognition services from October 2019 through March 2022 in aggregate—an average of 69 searches per day. We refer to the number of searches as approximately 63,000 because the aggregate number of searches that the six agencies reported is an undercount. Specifically, the FBI could not fully account for searches it conducted using two services, Marinus Analytics and Thorn. Additionally, the seventh agency (CBP) did not have available data on the number of searches it performed using either of two services staff used.
In most cases, neither the agency nor the tech provider tabulated searches. Thorn only tracked the last time a source photo was searched against, not every time that photo had been searched. And, as the GAO notes, its 2021 report found some agencies couldn’t even be bothered to track which facial recognition tech services were being used by employees, much less how often they were accessed.
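For what it’s worth, the GAO’s per-day figure falls straight out of the reporting window it cites, which you can verify in a couple of lines:

```python
from datetime import date

# GAO reporting window: October 2019 through March 2022, inclusive
start = date(2019, 10, 1)
end = date(2022, 3, 31)
days = (end - start).days + 1  # count both endpoint days

total_searches = 63_000  # the GAO's (undercounted) six-agency aggregate
print(days)                          # 913
print(round(total_searches / days))  # 69 -- matches the report's average
```

Remember that 63,000 is an undercount, so the real daily average was higher.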
Most of the (undercounted) 63,000 searches ran through Clearview. Almost every one of these searches was performed without adequate training.
[W]e found that cumulatively, agencies with available data reported conducting about 60,000 searches—nearly all of the roughly 63,000 total searches—without requiring that staff take training on facial recognition technology to use these services.
All of the surveyed agencies have been using facial recognition tech since 2018. And here’s how they’re doing when it comes to handling things like mandated privacy impact assessments and other privacy-focused prerequisites that are supposed to be in place prior to the tech’s deployment. In this case, green means the agency addressed the requirement but not fully, baby blue means it was completed fully, and everything else means incomplete.
If there’s any good news to come out of this, it’s that the US Secret Service, DEA, and ATF have all halted use of Clearview. But just because Clearview is the most infamous and most ethically dubious provider of this tech doesn’t mean the other options are so pristine and trustworthy that these agencies should be allowed to continue blowing off their training and privacy impact mandates. These agencies have had two years to get better at this. But it appears they’ve spent most of that time treading water, rather than moving forward.
You may be young and modern in your thinking, but you are going to love this Olden Golden Retro Mini Gramophone Bluetooth Speaker. This vintage-style Bluetooth speaker is fun to have on your desk while you work, or as part of the decor in your den while you enjoy Sunday brunch and lounge around the house. You can link two speakers together for greater sound, and it has a mic so you can pick up calls for hands-free talking. It comes in 4 different colors and is on sale for $40.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
There is no doubt that it’s not always easy to figure out what social media websites should do about election disinformation. There are those who believe that websites need to very actively remove such content, but there’s little evidence that straight removal does very much productive, which is why it wasn’t that surprising that YouTube (for example) stopped policing lies about the 2020 election, the last presidential election (which doesn’t mean it won’t pay attention to upcoming elections).
That said, even if you think that removing content (even election disinformation) is counterproductive, that doesn’t mean there isn’t a clear role for election integrity teams at various platforms. Remember, the main way that old Twitter handled election misinformation in 2020 was to focus on providing more information (that is, adding to the marketplace of ideas) which some people, very wrongly, called “censorship.” Adding more information is not censorship, it’s enabling the so-called marketplace of ideas to function better.
Indeed, reports show that the only times Twitter was actually removing information regarding elections was in the most extreme circumstances, such as cases where you had people impersonating election officials on Twitter in an attempt to mislead voters into not voting (such as by telling them the election was on a different day), or where there were out and out frauds.
That kind of thing is still really important.
But apparently not to Elon Musk. Earlier this week it was reported that exTwitter had disabled the feature that let users “report” election misinformation as part of its reporting tools. That already got some people worried about how a Musk-run exTwitter would handle many upcoming elections.
As if to confirm this was absolutely intentional, that same day, The Information revealed that Elon fired half of the remaining “Election Integrity Team” at exTwitter. This is despite him recently promising to expand that effort. Rolling Stone has way more info on all of this, including details about what likely happened here, and it’s dumber than you could have imagined.
It began, as so much nonsense does these days, with gullible Elon falling for complete and utter nonsense peddlers on his own site. A month ago, Aaron Rodericks, who worked on the “threat disruption” team, and is a holdover from pre-Elon Twitter, announced that he was hiring 8 new people for civic integrity and elections work:
Again, this work is not about “censoring,” but about actually understanding various threats to actual elections (not just garden variety political misinfo) and figuring out ways to counter them.
But the nonsense peddlers on exTwitter that have Elon’s ear convinced him that it was a sneaky plot behind Elon’s back… supported with “evidence” that Aaron had, at times, liked some tweets that mocked Elon and Linda Yaccarino. From the Rolling Stone piece:
In a quote tweet, Benz replied to Raichik’s sarcastic question: “No, it’s being run by Yoel Roth’s former colleague, who still somehow works at X despite appearing to think Musk is a ‘f*cking dipshit’ — His name is Aaron Rodericks.” In the post, Benz shared screenshots of his many recent criticisms of Rodericks on the platform. In those tweets, Benz called Rodericks one of CEO Linda Yaccarino’s “censorship shills” and noted that Rodericks had apparently liked another user’s tweet using the aforementioned epithet to describe Musk.
Again, remember that for all of Elon’s talk about “free speech” on his platform, and a promise to fund any lawsuit for someone fired for their activities on exTwitter, he has a thing where he is very quick to fire any employee who even hints at not wanting to lick his boots. And thus he fired Rodericks and many of his team, despite having approved expanding that team, almost certainly because someone called his attention to Rodericks once liking a tweet that called Musk a fucking dipshit.
Aaron Rodericks, who is the co-lead of Threat Disruption at X, the social media platform formerly known as Twitter, secured the order against his employer.
He claims that he is being subjected to a process that is “a complete sham” over allegations that he “demonstrated hostility” to the company for allegedly liking tweets by third parties that are critical of X, Mr Musk and the firm’s CEO Linda Yaccarino.
That RTE article notes, correctly, that Rodericks feels unfairly targeted for using the platform that he’s been told stands for free speech:
Shortly afterwards he claims he was the subject of meetings and a disciplinary process that has seen him suspended from his job for allegedly liking disparaging posts about X, Mr Musk, and Ms Yaccarino.
He said that he was very surprised by the allegations, as the company had adopted a strong position on freedom of speech on the platform, and he is not aware of any requirement that precludes employees from liking material posted on X.
Rodericks should, you know, share this tweet, and ask Elon to pay for his lawyers.
Anyhoo, Elon can’t now publicly admit that he fired Rodericks for liking a tweet calling him a fucking dipshit, so he’s now trying to retcon in some fucking nonsense that Rodericks was undermining election integrity. That claim is based on nothing but the say-so of a known political operative with a long history of peddling exactly this kind of nonsense.
It seems pretty clear that Musk was embarrassed by Rodericks’ liking of a tweet and fired him, and then likely fired a few remaining employees in the department who were there from pre-Elon days.
Hours later, when asked about all of this at the Code Conference, CEO-in-name-only Linda Yaccarino lied through her teeth and said that the election integrity team was “growing.” On stage she claimed that “It’s an issue we take very seriously. And contrary to the comments that were made, there is a robust and growing team at X that is wrapping their arms around election integrity.”
It’s not growing when you fire half the team for recognizing the emperor has no clothes.
When it comes to the early implementation of “AI,” it’s generally been the human beings that are the real problem.
Case in point: the fail-upward incompetents that run the U.S. media and journalism industries have rushed to use large language models (LLMs) to cut corners and attack labor. They’ve made it very clear they’re not at all concerned that these new systems are mistake- and plagiarism-prone, resulting in angry employees, a lower-quality product, and (further) eroded consumer trust.
While AI certainly has many genuine uses for productivity, many VC hustlebros see AI as a way to create an automated ad engagement machine that effectively shits money and undermines already underpaid labor. The actual underlying technology is often presented as akin to science fiction or magic; the ballooning server costs, environmental impact, and $2 an hour developing world labor powering it are obscured from public view whenever possible.
But however much AI hype-men would like to pretend AI makes human beings irrelevant, humans remain essential for both the underlying illusion and the reality to function. As such, a growing number of Silicon Valley companies are hiring poets, English PhDs, and other writers to write short stories for LLMs to train on in a bid to improve the quality of their electro-mimics:
“A string of job postings from high-profile training data companies, such as Scale AI and Appen, are recruiting poets, novelists, playwrights, or writers with a PhD or master’s degree. Dozens more seek general annotators with humanities degrees, or years of work experience in literary fields. The listings aren’t limited to English: Some are looking specifically for poets and fiction writers in Hindi and Japanese, as well as writers in languages less represented on the internet.”
LLMs like ChatGPT have struggled to accurately replicate poetry. One study found that after being presented with 17 example poems, the technology still couldn’t accurately write a poem in the style of Walt Whitman. While Whitman’s poems are often less structured, ChatGPT kept trying to produce poems in traditional stanzas, even when explicitly told not to. The problem got notably worse in languages other than English, driving up the value, for now, of non-English writers.
So it’s clear we still have a long way to go before these technologies get anywhere close to matching either the hype or the employment apocalypse many predicted. LLMs are effectively mimics that create from what already exists. Since they’re not real artificial intelligence, they’re still not actually capable of true creativity:
“They are trained to reproduce. They are not designed to be great, they try to be as close as possible to what exists,” Fabricio Goes, who teaches informatics at the University of Leicester, told Rest of World, explaining a popular stance among AI researchers. “So, by design, many people argue that those systems are not creative.”
That, for now, creates additional value for the employment of actual human beings with actual expertise. You need to hire humans to produce the material models train on, and you need editors to fix the numerous problems undercooked AI creates. The homogenized blandness of the resulting simulacrum also, for now, likely puts a premium on thinkers and writers who actually have something original to say.
The problem remains that while the underlying technology will continuously improve, the folks rushing to implement it without thinking likely won’t. Most seem dead set on using AI primarily as a bludgeon against labor, in the hopes the public won’t notice the drop in quality, and that professional writers, editors, and creatives won’t mind lower pay and an increasingly tenuous position in the food chain.
As you may recall, starting a little over three years ago we discussed Stone Brewing’s transformation from one-time icon of the craft brewing scene into a trademark bully. What kicked this whole thing off was Stone’s win in a trademark lawsuit against macro-brewer Molson Coors (then MillerCoors, but I will be using the company’s current name throughout the rest of this post). That suit was filed over Molson Coors changing the branding for its Keystone line of beers such that the word “STONE” became the focal point of the branding by way of font size and its prominence on the packaging. Stone Brewing argued this amounted to trademark infringement, a claim with which I absolutely disagree, but one for which a jury nonetheless found in Stone Brewing’s favor to the tune of $56 million. As someone who is quite familiar with both products and beer in general, the idea that anyone was mistakenly buying Keystone thinking it was a Stone Brewing product, well, just no. But that, I suppose, is why juries are made up of 12 peers and not 1 Timothy Geigner.
In any case, that wasn’t quite the end of the story. Both companies petitioned the court to keep this going. Stone Brewing amazingly wanted a new trial because that jury found that Molson Coors’ infringement was not willful, limiting the damages. You really would have thought $56 million and Keystone having to nix the new branding would have been enough, but I suppose once your company is bought by the much, much larger Sapporo Breweries out of Japan, all that matters now is getting those sweet returns for investors in any way you can.
Meanwhile, Molson Coors asked the court either to simply nix the jury result and rule from the bench, or to order a new trial of its own, on the grounds that the jury found for Stone Brewing in error and/or that the award was simply out of line with the jury’s findings. That request is more reasonable, I believe, but was fairly unlikely to succeed, as courts are typically hesitant to simply throw out the result of a jury trial. Given that both parties requested a new trial, however, I had thought there might be some chance of one occurring. Instead, the court denied everything:
U.S. District Judge Roger Benitez said that Molson Coors was not entitled to a new trial or a court ruling in its favor, rejecting its arguments that the evidence did not support the verdict.
Benitez also denied Stone Brewing’s motion for a new bench trial on its allegations that Molson Coors used the “Stone” name in bad faith, which could have justified additional damages.
Molson Coors spokesperson Rachel Dickens said the company disagreed with the decision and is evaluating its options, including a potential appeal. Representatives for Stone Brewing did not immediately respond to a request for comment on the decision.
There are some interesting and silly tidbits in the ruling document itself (embedded below), as well as some very frustrating redactions.
For instance, Stone Brewing complained that it was not allowed to perform discovery on a late-added witness called by Molson Coors as one of its reasons for wanting a new trial. The court in turn points out that, uh, actually an offer for discovery of that witness was made and Stone declined it. Oops. There’s more like that, as this thing is 28 pages.
On Molson Coors’ end, it gets more interesting. The court, in an earlier ruling, included some language that certainly called into question whether the jury reached the correct conclusion. Molson Coors pointed this out as a reason it should get a new trial or a bench ruling. The court not only disagrees, but repeats this language again, pointing out that the jury is tasked with fact-finding and that the court’s own opinion on the matter is immaterial.
While Benitez previously said in an order that he would have ruled for Molson Coors on the question had he “been in the position of fact-finder,” he upheld the verdict on Monday because it was not “unreasonable or against the ‘great weight’ of the evidence.”
And, as the ruling lays out, that is 9th Circuit precedent. But then there’s some really juicy sounding bits in which Molson Coors suggests that, and the court sort of agrees, Stone Brewing withheld… something.
MillerCoors argues Stone’s failure to disclose [REDACTED] was prejudicial discovery misconduct that warrants a new trial. The [REDACTED] was not disclosed until [REDACTED]. The Court’s questioning revealed [REDACTED]. MillerCoors requested an instruction that the jury disregard the evidence, and the Court granted MillerCoors’ request.
MillerCoors argues that evidence of [REDACTED] would have undermined key aspects of Stone’s case, including that Stone’s damages are based on consumer’s negative associations between Stone and a “big beer company” like MillerCoors. Because [REDACTED] MillerCoors argues it could have used the [REDACTED] to undermine Stone’s arguments. Stone argues in turn that MillerCoors received exactly the remedy it asked for during trial (a limiting instruction), and there is a presumption that “curative instructions…[are] followed by the jury.”
Ultimately, the Court concludes that non-disclosure of [REDACTED] does not warrant a new trial. Although the Court feels Stone engaged in a certain level of gamesmanship through this concealment, the Court also finds the offer does not directly affect the issues presented at trial in the way MillerCoors argues. Mr. Koch testified [REDACTED].
Other than maybe knowing for sure who shot JFK, I don’t know that there is another thing in my life I have wanted to know more than what is in those redacted sections.
Ultimately it appears not to matter a great deal, however. The court declined the motions from both sides. I suppose Molson Coors could make good on its claim to want to appeal the decision; I certainly think it should have won the original case. Still, the company is starting to pile up losses on this whole thing, and it might be time to simply put the matter to bed.