For years, law enforcement agencies converted themselves into quasi-military outfits with the assistance of the Defense Department. Whatever the military no longer needed, cops could have for cheap or free, as long as they remembered to say things about “national security” when filling out their 1033 program requisitions.
Unsurprisingly, the acquisition of warrior gear (camouflage uniforms, assault rifles, mine-resistant vehicles) made cops feel like warriors rather than protectors. Violence increased as cops began to look less and less like cops. Ever since law enforcement rolled into protests over the killing of Michael Brown looking for all the world like an occupying force, the 1033 program has faced increased scrutiny.
But scrutiny does not always result in action. Lasting reform of the program hasn’t really happened. Presidential administrations have enacted changes just to see them rolled back by the next guy in the Big Chair. State and local level efforts have stayed in place long enough to avoid generating headlines, but once the news cycle moves on, it’s back to business as usual.
This somewhat depressing introduction leads up to the latest effort by lawmakers to curb abuse of this easily abused program. As the Tenth Amendment Center reports, New York legislators are hoping to achieve what so many before them have failed to accomplish.
Sen. Nathalia Fernandez (D) introduced Senate Bill 3527 (S3527) on Jan. 31. The bill would prohibit New York state and local law enforcement agencies from receiving or purchasing certain property from a military equipment surplus program operated by the federal government. These items include:
drones that are armored, weaponized, or both
aircraft that are combat configured or combat coded
grenades or similar explosives and grenade launchers
silencers
militarized armored vehicles
camouflaged uniforms
bayonets
riot gear
firearms or ammunition
explosives or pyrotechnics
chemical incapacitants
The proposed law would also require law enforcement agencies to publish a notice on their publicly accessible website within 14 days of requesting allowable military equipment from a federal program.
This is a pretty solid proposal. It doesn’t just hit the expected targets (grenade launchers, armored vehicles) and things the military would never sell to cops anyway, like weaponized aircraft and drones. It also forbids the acquisition of military uniforms, something cops just love to wear when performing daily work like warrant service. That will force more of that state’s cops to dress like cops, making them at least bear some resemblance to the public servants they are, rather than the special forces operatives they perceive themselves as.
The other benefit of passing this bill? The portion of the US population that doesn’t reside in the state of New York will no longer be asked to foot the bill for arming and outfitting New York law enforcement officers. This removes the federal hookup, keeping spending local. And there’s been quite a bit of it. According to information obtained by the Marshall Project, New York agencies have acquired at least $26 million in military surplus since the program’s inception.
Expect New York agencies and (especially) their union reps to start getting angrier about this the further it moves through the legislative process. If there’s anything cops don’t like, it’s being told by elected leaders, the ones who actually make the occasional effort to respect the will of the people, that they can’t have things.
Karl just wrote about CNET, a once-vaunted resource for tech journalism, absolutely stepping on every rake it could find by publishing AI-generated content that was simply laughable: it tended to be inaccurate, plagiarized, or otherwise so full of mistakes that an army of editors had to rework it, largely wiping away any cost savings the site was hoping to achieve. Good times all around.
Now, while it’s difficult to pin this down completely, it sure looks like Sports Illustrated is going down the same path. At the same time that Arena Group, the parent company for SI and Men’s Journal, announced that it was going to embrace AI-created content, SI is also laying off more wetware-based journalists.
“After seven and a half years of writing about the NHL, NBA, NFL, MLB, LPGA, World Cup, Olympics and more, I, too, have been laid off by Sports Illustrated this morning,” wrote Alex Prewitt, a former senior writer.
According to an internal memo obtained by Awful Announcing, Arena Group has laid off a sizable 17 employees and created 12 openings to “reflect the new needs of the SI business.” (Something tells us those “new needs” might involve accommodating the generative AI the parent company has been brandishing at Men’s Journal.)
The state of American journalism is nothing short of a travesty. The complete lack of value media companies and, to some extent, the public have placed on having real, professional, human journalists is mind-boggling. There is less local journalism per capita now than there has been for a long, long time. And now national journalism outfits are seeking to outsource journalism to SkyNET? C’mon.
And once again, the output of this AI journalism leaves much to be desired. On the accuracy front, Arena Group’s AI-guided dreck isn’t doing any better than CNET’s. Futurism, with the help of a medical expert, found that its very first AI article for Men’s Journal, titled “What All Men Should Know About Low Testosterone,” contained at least 18 factual errors, despite the authoritative tone of its synthesized prose. Not what you’d want out of something that’s supposed to be giving health advice to the site’s vast readership.
In response, the article was hastily and extensively rewritten to account for the inaccuracies. Some still slipped through the cracks.
That didn’t seem to bother Arena, though. A spokesperson for the group said in a statement provided to Futurism that the company was “confident in the articles.”
Sure, express confidence in the error-riddled word salads you call journalism. Why not? It’s only your reputation with readers, otherwise known as the entire reason you have a business, that we’re talking about here.
To be clear, SI has not yet used AI-created content, as far as it has admitted publicly. But these layoffs create a vacuum that has to be filled by someone… or something. Given the route Arena Group is going with its other properties, the bet that AI will be employed here too is, at worst, an educated guess.
And here I was thinking that the last few months of Twitter shenanigans with Elon Musk at the helm had done something nearly impossible: made Mark Zuckerberg’s leadership of Meta (Facebook/Instagram) look thoughtful and balanced in comparison. But then, on Sunday, Zuckerberg announced that Meta is following Musk down the dubious road of making “verification” an upsell product people can buy. This is a mistake for many reasons, just as it was a mistake when Musk did it.
To be clear, as with Twitter Blue, I have no issue with social media companies creating subscription services in which they provide users with more benefits / features etc. Indeed, I’ve been surprised at how little most social media companies have experimented with such subscription programs. Hell, even here at Techdirt, we’ve long had some cool perks and extra features for people willing to subscribe (if you don’t yet subscribe, check it out).
But, any such upsell / premium subscription offering has to be about actually providing real value to the end users. And, it should never involve undermining trust & safety for users. But, really, that’s what this is doing. As we wrote when Musk first floated the idea of charging for verification, it’s important to understand the history and the reasons social media companies embraced verification in the first place.
It wasn’t about providing value to that individual user, but rather about increasing the trust and safety of the entire platform, so that users wouldn’t be confused or fooled by impostors or inauthentic users. The goal, then, is to benefit everyone else using the platform to interact with the verified users, more than it is to benefit the verified users themselves.
But, as we’ve seen with Twitter, shifting verification to a subscription service does plenty to undermine the trust and safety other users have in the platform, making them feel less comfortable treating verified users as legitimate.
Meta’s more detailed announcement, following Zuck’s posting it to an Instagram group, only serves to show how backwards this is, and how similar it is to Twitter Blue’s disastrous adaptations.
With Meta Verified, creators get:
A verified badge, confirming you’re the real you and that your account has been authenticated with a government ID.
More protection from impersonation with proactive account monitoring for impersonators who might target people with growing online audiences.
Help when you need it with access to a real person for common account issues.
Increased visibility and reach with prominence in some areas of the platform, like search, comments, and recommendations.
Exclusive features to express yourself in unique ways.
We can walk through each one of these to show why it looks like Meta is just running out of ideas, and desperate to squeeze users.
Those first two items should never be paid premium services. As explained, verification is not so much for the user’s benefit but for the wider platform’s. Making it so only those with the means to do so get verified actually takes away much of the value of being verified. As for “more protection from impersonation,” it feels like… maybe that isn’t the kind of product you should be selling, but rather is kind of an indictment of a platform’s inability to protect its users.
“We failed to stop people from pretending to be you, so pay us to now protect you” is not exactly a strong sales pitch, Mark.
And, sure, there are services that let you pay for more urgent access to customer support, but again, this mostly just highlights how terrible Meta customer support has been for years.
But, the last two points deserve special attention. Increased visibility in search, comments, and recommendations based on paying up is also something Musk has done with Twitter Blue. It seems like a terrible idea: it just encourages spammers and other bad actors to use this as a cheap way to get more prominent attention for their spam, scams, and the like. It also calls into serious question all the promises we’ve been hearing from Zuck for years now about the company’s increasing focus on relevance in its feeds. If they’re moving away from that to encourage paying up to reach people, it seems like we’re only moving further into the enshittification death spiral.
As for “exclusive features to express yourself in unique ways,” at first glance that sounds like maybe something that could be a useful thing as an upsell or premium offering, but the details (in a footnote) make it pretty clear this was a rushed afterthought.
We’ll offer exclusive stickers on Facebook and Instagram Stories and Facebook Reels, and 100 free stars a month on Facebook so you can show your support for other creators.
How… utterly unexciting.
Anyway, this definitely fits the pattern of Cory Doctorow’s enshittification cycle. Remember how it works:
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
Nothing in this announcement really benefits users. It just squeezes more money out of them. Yes, Meta is presenting it as if there are real benefits for users, but users aren’t that dumb.
I’m sure that a decent number of people will sign up for this. And it’s certainly likely that the rollout won’t be as chaotic and embarrassing as Twitter’s paid verification program. But it seems quite likely to me that Meta is going to find the end result of this underwhelming, just as Twitter did.
To fend off a ban in the U.S., TikTok lobbyists have attempted to put on a doomed charm offensive in DC, spending a record $5.4 million on U.S. lawmaker influence last year. The effort has even involved opening “transparency centers” in DC designed to “educate” lawmakers on content moderation and the steps TikTok is apparently taking to assuage privacy and security concerns.
This week, TikTok CEO Shou Zi Chew met with at least two lawmakers in Washington, Sens. Michael Bennet (D-Colo.) and Roger Wicker (R-Miss.), who have voiced concern about Americans’ exposure to the popular video-sharing platform. Neither walked away swayed.
The influence campaign has had a particularly hard time penetrating the GOP. Not because the GOP cares so much about privacy and security (as we’ve well documented, the party actively created the oversight-optional regulatory landscape TikTok — and everyone else — exploits) but because it’s pretty clear the GOP goal has always been to force a sale of TikTok and its fat ad revenues to a U.S. Republican ally.
As we’ve noted a few times, a ban doesn’t actually fix the problem. Because the problem isn’t just TikTok. The problem is our comical, corrupt failure to implement privacy legislation or competent regulatory oversight of numerous data-hoovering companies (including the dodgy data brokers that cavalierly sell access to everything from your daily location data to your mental health issues).
That’s not to say TikTok is some innocent daisy undeserving of scrutiny. The company, like so many modern Internet companies, has routinely abused consumer privacy, even going so far as to spy on journalists and violate kids’ privacy laws. It’s a fairly typical, greedy, giant corporation which views U.S. consumer privacy as a distant afterthought.
That said, TikTok’s charm offensive can never sway the GOP, because the GOP isn’t actually interested in fixing the problems it’s pretending to be upset about (unchecked online political propaganda, consumer privacy).
The GOP supports everything it accuses TikTok of doing (online political propaganda campaigns, reckless collection and monetization of consumer data), but only if it, or a U.S. company, is the one doing it. That fusion of patriotism, hypocrisy, and corruption makes it hard to craft any serious, meaningful policy to address the potential harms the party is hyperventilating over.
The result is a big, dumb performance designed to appear as if the primary interests are privacy, national security, and countering Chinese influence. But like most GOP agendas, the real goal is the accumulation of wealth: namely the transfer of TikTok to a Republican-allied U.S. company like Walmart or Oracle, which would then allow the GOP to exploit the app in all the ways the party currently accuses China of doing.
Democrats haven’t been much better. Countless Democrats hyperventilating about TikTok have also opposed meaningful privacy legislation and meaningful FTC regulatory oversight of data brokers. And the Biden advisors’ big “fix” for TikTok is to tether the company tightly to Oracle, a Republican-allied tech giant with its own long history of privacy violations and dodgy political choices.
U.S. politicians can’t fix the TikTok problem because they can’t (or won’t) even identify the actual problem (their support for largely nonexistent oversight of numerous, interconnected data monetization markets). That leaves TikTok lobbyists flailing about in a xenophobic soup trying to strike deals with folks whose only real goal is something TikTok won’t support (a full transfer of all assets to U.S. ownership):
“I don’t think there’s anything they can say. It’s all about what they do, and what they do is pretty alarming,” said Sen. Brian Schatz (D-Hawaii), who sits on the Commerce Committee and has been a key negotiator in discussions around data privacy legislation on Capitol Hill.
Sen. Josh Hawley (R-Mo.), one of TikTok’s most outspoken and long-standing critics, said the company’s engagement shows it’s “scared” of looming regulation. Hawley last year spearheaded a successful campaign to prohibit federal employees from downloading the app on government devices, and has proposed legislation to ban it for consumers nationwide.
It’s unclear if this mess ever results in anything productive. I could see it resulting in a nationwide ban should relations between the U.S. and China deteriorate further. But as noted countless times, a ban of TikTok won’t stop a universe of other barely-regulated apps, data brokers, and telecoms from abusing consumer privacy. And it certainly won’t stop Chinese intelligence from obtaining all of this data.
TikTok is a problem we created via lax privacy policies we have no serious interest in fixing. It’s a result of our own greed, and conscious choice to prioritize making money over national security, market health or consumer welfare. Mix in general bigotry and oodles of corruption, and you’ve got a ridiculous policy soup that’s more romper room than serious adult policymaking.
A lot of people freaked out on Friday after the news came out that Twitter was going to make SMS two-factor authentication (2FA) only available to paid Twitter Blue subscribers. The news was first broken, like so much Twitter news these days, by Platformer reporter Zoe Schiffer.
It’s understandable that people were up in arms over this, as one read of it is that keeping your account secure is now a luxury item you have to pay extra for. But the details matter here, and I actually think many people are overreacting. There are fundamentally good reasons to move away from SMS-based 2FA: mainly that it’s woefully insecure, and it runs the risk of making people think they’re far more secure than they actually are.

If you follow cybersecurity news, there are tons of articles explaining why SMS 2FA is not a good idea and why you should ditch it if you can. Some have argued it’s actually worse than just having a good password, though I think that very much depends on your threat model, and for most users it’s not true (i.e., it is probably true for targeted individuals, and probably not true if there’s more of a brute force hacking effort).

Years back, Microsoft told everyone to move away from SMS-based 2FA. Google started transitioning people off of SMS-based 2FA all the way back in 2017, slightly after NIST deprecated it from its recommended multi-factor authentication list. But, at least there was a clear transition plan.
Soon after Schiffer’s tweet, Twitter released a blog post explaining the decision (though, bizarrely, despite coming out on Friday afternoon, the blog post was backdated to Wednesday?!?):
While historically a popular form of 2FA, unfortunately we have seen phone-number based 2FA be used – and abused – by bad actors. So starting today, we will no longer allow accounts to enroll in the text message/SMS method of 2FA unless they are Twitter Blue subscribers. The availability of text message 2FA for Twitter Blue may vary by country and carrier.
Non-Twitter Blue subscribers that are already enrolled will have 30 days to disable this method and enroll in another. After 20 March 2023, we will no longer permit non-Twitter Blue subscribers to use text messages as a 2FA method. At that time, accounts with text message 2FA still enabled will have it disabled. Disabling text message 2FA does not automatically disassociate your phone number from your Twitter account. If you would like to do so, instructions to update your account phone number are available on our Help Center.
We encourage non-Twitter Blue subscribers to consider using an authentication app or security key method instead. These methods require you to have physical possession of the authentication method and are a great way to ensure your account is secure.
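For context on why an authenticator app is the sturdier option: TOTP codes (RFC 6238, the scheme most authenticator apps implement) are computed locally from a shared secret and the current time, so there is no text message to intercept and no SIM to swap. A minimal sketch using only the Python standard library (an illustration of the algorithm, not anything from Twitter's codebase):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of 30-second steps since the Unix epoch
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server runs the same computation against its copy of the secret and typically accepts a small window of adjacent time steps to absorb clock drift; nothing ever crosses the carrier's network.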
It also helps to understand a bit of the background here. First, Twitter was (like in so many other areas) somewhat late to the 2FA game. When it added SMS-based 2FA in 2013, there were headlines about how it had “finally” done so. And, it was only in 2019 that the company let you turn on non-SMS 2FA without a phone number, again leading to headlines that included the word “finally.” And, the lack of security with SMS 2FA was pretty damn clear when someone hacked Jack Dorsey’s own Twitter account using SIM swapping, the easiest way to get around SMS 2FA.
On top of that, I’ve spoken with former Twitter employees who say that the blog post above is not wrong when it says that SMS 2FA is often abused by bad actors in a manner that generates a ton of SMS messages, and is actually extremely costly for Twitter. Even if Elon is no longer paying any of Twitter’s bills, there may be legitimate business reasons for ending support for SMS 2FA (also if, hypothetically, Musk had stopped paying the bills for their SMS 2FA provider, it’s possible that vendor was threatening to cut Twitter off entirely, which might also explain the short timeline here).
So, I think many of the headlines and tweets decrying this as making security a “luxury” reserved for paying subscribers are neither fair nor accurate. There are lots of things (obviously) that I criticize Musk about, but there are perfectly legitimate reasons to end support for SMS 2FA, and at least some of the freakout was an overreaction.
That said… I do still have many concerns with how this was rolled out, and it wouldn’t surprise me if the FTC has some concerns as well. While it’s a bit out of date, Twitter’s last transparency report on security (covering the second half of 2021) shows that only 2.6% of Twitter users even have 2FA enabled, which is really not great. And of those who have it enabled, nearly 75% are using SMS-based authentication.
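Putting those two transparency-report figures together (my back-of-the-envelope arithmetic, not a number Twitter published) shows how this shakes out across the whole user base:

```python
# Rough share of all Twitter users relying on SMS 2FA, combining the
# two percentages from the H2 2021 transparency report.
any_2fa = 0.026    # 2.6% of users have some form of 2FA enabled
sms_share = 0.745  # ~74.5% of those use SMS delivery
sms_2fa_users = any_2fa * sms_share
print(f"{sms_2fa_users:.1%} of all users depend on SMS 2FA")  # ~1.9%
```

A couple of percent sounds small, but on a platform of hundreds of millions of users it's millions of accounts, and they're disproportionately the security-conscious ones.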
So, there’s a legitimate fear that in simply killing off SMS 2FA, without providing a very clear and straightforward transition to an authenticator app (or security key), the percentage of people using any 2FA at all may go down quite a bit, potentially putting more people at risk. If Twitter and Elon Musk weren’t just cost-cutting and were actually looking to make Twitter more secure for its users, they would create a plan that did a lot more to transition users over to an authenticator app.
I mean, the fact that they’re still leaving SMS 2FA for Twitter Blue subscribers pretty much gives away the game that this is solely about cost-cutting and not about transitioning users to better security. Indeed, it seemed like after spending a day talking about the expenses, it was only then that Musk realized that SMS 2FA also wasn’t good for security and started making those claims as well (a day late to be convincing that this has anything to do with the decision).
All that said, I am wondering if this might trigger yet another FTC investigation. The last consent decree with the FTC (remember, this was less than a year ago) was mostly about SMS 2FA, and how Twitter had abused the phone numbers it had on file, provided for 2FA, as a tool for marketing. That’s obnoxious and wrong and the FTC was correct to slam Twitter for it. Part of the consent decree was that Twitter had to provide 2FA options “that don’t require people to provide a phone number” (such as an authenticator app or security key, which the company does). But, also, it says that “Twitter must implement an enhanced privacy program and a beefed-up information security program.”
The details of that program include regular security assessments any time that the company “modifies” security practices. I’m curious if Twitter did such an assessment before making this change? The requirements of the program also include things like the following:
Identify and describe any changes in how privacy and security-related options will be presented to Users, and describe the means and results of any testing Respondent performed in considering such changes, including but not limited to A/B testing, engagement optimization, or other testing to evaluate a User’s movement through a privacy or security-related pathway;

Include any other safeguards or other procedures that would mitigate the identified risks to the privacy, security, confidentiality, and integrity of Covered Information that were not implemented, and each reason that such alternatives were not implemented; and
Was any of that done? Or was it just Musk getting upset after seeing a bill for SMS messaging and declaring that they were cutting off SMS 2FA? We may find out eventually…
In the end, I do think Twitter is right to move away from SMS 2FA (and, as users, you should do so yourself wherever you use it). Multi-factor authentication is a very important security practice, and one that more people should use, but the SMS variety is not nearly as safe as other methods. But there is little indication here that Musk is doing it for any reason other than to cut costs, and the haphazard way in which this has been rolled out suggests that it may increase security risks for a noticeable percentage of Twitter users.
Court transparency and equitable access to court documents are ongoing struggles. The federal court system’s malicious compliance with congressional directives has given us exorbitant fees and a clunky, counterintuitive platform for online access to court documents.
Part of the federal court system doesn’t even give us that much. Despite being subject to a 2016 law mandating access to military court documents, the US military’s court system has continued to do its own thing. For seven years, it has pretty much ignored the law ordering it to perform “timely” releases of court documents “at all stages of the military justice system.”
This hasn’t happened. A recent Pentagon directive finally addresses the seven-year-old law. But the directive merely tells military branches it’s still business as usual, no matter what the law says. Megan Rose has the details for ProPublica.
Caroline Krass, general counsel for the Defense Department, told officials from the Army, Navy, Air Force, Marines, Coast Guard and Space Force in a memorandum last month that they could mostly continue doing what they have been for years: keep many court records secret from the public.
[…]
The guidance tells the services they do not have to make any records public until after a trial ends. It gives the military the discretion to suppress key trial information. And in cases where the defendant is found not guilty, the directive appears to be even more sweeping: The military services will be allowed to keep the entire record secret permanently.
The memo [PDF] appears to instruct the military’s court system to act more like the rest of the federal court system.
Public access to military justice docket information, filings, trial-level court documents, and appellate documents should follow the best practices of Federal and State courts, to the extent practicable.
Then the discretionary part kicks in. “To the extent practicable” aren’t words that inspire efforts meant to surmount obstacles. They’re words that encourage lackadaisical efforts, something that doesn’t even rise to the level of trying. They encourage failure due to a lack of effort, so long as actual success can still be portrayed as impracticable.
These aren’t the best practices of federal and state courts, which generally make most documents available almost immediately.
Absent extraordinary circumstances, filings, trial-level court documents, and appellate documents will be publicly accessible no later than 45 calendar days after the certification of the record of trial (at the trial court level) or after the Court of Criminal Appeals decision (at the appellate level).
“Extraordinary circumstances.” Just a little more discretionary leeway. And while the memo notes courts are free to make documents available earlier, they won’t be considered in violation of a directive that is pretty much in direct violation of federal law.
A 45-day delay means most court records will be of limited public interest and of almost no use to news organizations, which rely on the newsworthiness of their reporting to attract readers and viewers. And what will be made public won’t be everything that’s made public by other courts.
The services do not have to provide transcripts or recordings of court sessions or any evidence entered as exhibits, according to the Pentagon guidance. And the Pentagon does not consider any preliminary hearing documents to be part of the trial record.
In the military, there is a proceeding called an Article 32 hearing to decide whether there is enough evidence for a trial. Under the new guidance, the military won’t have to put these hearings on the docket, so the public won’t even know they are happening.
If there’s any upside, it’s this: the guidance does not allow the military to continue to abuse Freedom of Information Act exemptions to redact or withhold court documents. That kind of thing doesn’t fly in the US federal court system and it definitely has no place in the military court system.
The rest is all downside. A law is only as effective as its enforcement. Unless Congress is willing to step in and force the Defense Department to issue new guidance that actually complies with the 2016 law, the military will continue to play keep-away from taxpayers.