Earlier this year, I was a part of a CNN documentary, Twitter: Breaking the Bird, which gave me much pause for reflection about the state of social media and how we got here. This year alone we’ve witnessed an unprecedented wave of disruption across these platforms.
Government workers, locked out of their jobs, struggled to organize securely. Protestors, seeking to plan No Kings marches, wondered which app could be the most trusted. Inbound international travelers have been deleting their social apps for fear that immigration officers will search their phones. And during major disasters, like the tragic Texas floods and the LA fires, emergency responders and volunteers found their critical updates buried by algorithms that prioritize engagement over urgency. On a daily basis, countless online communities face arbitrary deplatforming, surveillance, and loss of their digital spaces without recourse or explanation.
These aren’t isolated incidents: they’re symptoms of a fundamental crisis in how we’ve allowed our digital communities to be governed. We’ve unwittingly accepted a system where massive corporations control the public sphere, algorithms optimize for advertising revenue rather than human connection, and we the people have no real agency over our digital existence.
We’ve Lost Our Way
I’ve spent decades building social technologies, including working at Odeo, the company that ultimately pivoted to become Twitter. There I was the social app’s first employee and de facto CTO until late 2006, and I have since built numerous other community organizing platforms. I’ve watched with growing concern as our digital spaces have become increasingly toxic and hostile to genuine community needs. The promise of social media as we defined it in the early days—to connect and empower communities of people—has been subverted by a business model that treats human connection as a commodity to be monetized.
Today, if you run a Facebook Group with thousands of members, you have no real authority – your community exists at the whim of corporate policies you cannot influence. This is fundamentally at odds with how real-world communities have always operated. Your local gardening club, bowling league, or neighborhood association has democratic processes for leadership and decision-making. Why should our digital communities be any different?
It’s Time For a New Social Media Bill Of Digital Rights
I believe that the time has come for a new Social Media Bill of Digital Rights. Just as the original Bill of Rights protected individual freedoms from government overreach, we need fundamental protections for our digital communities from corporate control and surveillance capitalism.
So what could such a Social Media Bill of Rights include?
The right to privacy & security: The ability to communicate and organize without fear of surveillance or exploitation.
The right to own and control your identity: People and their communities must own their digital identities, connections and data. And, as the owner of an account, you can exercise the right to be forgotten.
The right to choose and understand algorithms (transparency): Choosing the algorithms that shape your interactions: no more black box systems optimizing for engagement at the expense of community well-being.
The right to community self-governance: Crucially, communities of users need the right to self-govern, setting their own rules for behavior that are contextually relevant to their community. (Note: this does not preclude developer governance.)
The right to full portability – the right to exit: The freedom to port your community, in its entirety, to another app without losing your connections and content.
To determine whether these are the appropriate “Rights,” I’ve just launched a new podcast, Revolution.Social, where I invite my guests, including the likes of Jack Dorsey, Cory Doctorow, Yoel Roth, Kara Swisher, and Renee DiResta, to share their feedback and debate where we need to head next.
Architecting For A Better Future
The good news is that the technical foundations for a better future already exist through open protocols that work like the web itself – interconnected and controlled by no single entity.
The Fediverse, powered by ActivityPub, enables platforms like Mastodon to create interconnected communities free from corporate control.
Nostr provides a foundation for decentralized, encrypted communication that no one can shut down.
Bluesky is pioneering user choice in algorithms.
Signal demonstrates that private, secure communication is possible at scale.
Unlike the walled gardens of Meta, TikTok, and Twitter (now X), these open protocols allow communities to connect across platforms while maintaining control of their spaces. When you use email or browse the web, you don’t worry about which email provider or browser your friends use – it just works. Our social spaces should function the same way.
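To make the interoperability point concrete, here is a small Python sketch (standard library only) of the kind of lookup that makes this possible: any server can discover any user on any other server via WebFinger, the standard discovery protocol the Fediverse relies on. The handle below is hypothetical; substitute any real one.

```python
# A small illustration of the "it just works" property of open protocols:
# any server can look up any user on any other server with a standard
# WebFinger request (RFC 7033), no platform permission required.
import json
from urllib.request import urlopen

handle = "someone@mastodon.social"  # hypothetical user@server handle
user, server = handle.split("@")
url = f"https://{server}/.well-known/webfinger?resource=acct:{handle}"

with urlopen(url) as resp:
    data = json.load(resp)

# The response lists links to the user's ActivityPub actor document,
# which any compatible platform can consume.
for link in data.get("links", []):
    print(link.get("rel"), link.get("href"))
```

No walled-garden equivalent of that request exists: you cannot ask Facebook’s servers about a TikTok user.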
What’s missing is the bridge between these technical capabilities and the tools communities actually need to thrive. We need to move from closed, corporate platforms to open protocols that communities can shape and control. This isn’t just a technical challenge – it needs to become a social movement. We need to build systems that are co-designed with communities, that respect their autonomy, and that enable their authentic purposes.
Evan Henshaw-Plath, known as “rabble,” is an activist and technologist passionate about building commons-based social media apps that prioritize equity and sustainability.
Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.
Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.
You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.
Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.
For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system. Now we’ve concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you’re a police chief or an independent researcher, because Axon designed it that way.
Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information, or which can be quickly deleted. Officers are supposed to edit Draft One’s report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they’re done, officers are prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed it and made the edits necessary to ensure it is consistent with their recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.
Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI.
One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.
But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used.
So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won’t indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk.
The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon’s first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports.
“We love having new toys until the public gets wind of them,” the administrator wrote.
No Record of Who Wrote What
The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like:
Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible?
How often are officers finding and correcting errors made by the AI, and are there patterns to these errors?
If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer?
Is the AI overstepping in its interpretation of the audio? If a report says, “the subject made a threatening gesture,” was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says “yeah” throughout a conversation as a verbal acknowledgement that they’re listening to what the officer says, is that interpreted as an agreement or a confession?
Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer’s own recollection. If an officer generates a Draft One report multiple times, there’s no way to tell whether the AI interprets the audio differently each time.
Axon is open about not maintaining these records, at least when it markets directly to law enforcement.
In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether it’s possible to see, after the fact, which parts of the report were suggested by the AI and which were edited by the officer. His response (definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”
To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because “the last thing” they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).
Following up on the same question, Axon’s Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn’t be required to save every draft of a police report as they’re re-writing it. This is, of course, misdirection and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One involves two processes from two parties, Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.
Word processors may no longer hold unexpected consequences for police report-writing, but Draft One is still unproven. After all, every AI evangelist, including Axon, claims this technology is a game-changer. So why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?
It also appears that Draft One isn’t simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department’s Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It’s more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon’s engineers had yet to finalize the feature at the time it was rolled out.
One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI did? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. And as Axon disclosed to the Frederick Police Department, engineers have already discovered a bug that allowed officers, on at least three occasions, to circumvent the “guardrails” that supposedly deter officers from submitting AI-generated reports without reading them first.
To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it’s used. But Axon has intentionally made this difficult.
What the Audit Trail Actually Looks Like
You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So we dug through the user manuals to figure out what exactly that means.
The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we’ll get to that in a minute).
This is disappointing because, without this information, it’s nearly impossible to do even the most basic statistical analysis: how many officers are using the technology and how often.
Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:
A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
A log of an individual officer/user’s basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings.
This means that, to do a comprehensive review, an evaluator may need to go through the records management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs.
An example of Draft One usage in an audit log.
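To get a feel for what that review entails, here is a hypothetical Python sketch of the tallying step, once those per-user logs have been exported. The CSV layout and column names are our assumptions; Axon’s actual export format isn’t public, so treat them as placeholders.

```python
# A hypothetical sketch of the per-user review described above, assuming the
# exported audit logs are CSV files with "user" and "action" columns.
import csv
from collections import Counter
from pathlib import Path

usage = Counter()
for log_file in Path("exported_audit_logs").glob("*.csv"):
    with log_file.open(newline="") as f:
        for row in csv.DictReader(f):
            # Per the manuals, only a few Draft One-related actions are logged:
            # running a request, signing the disclosure, changing settings.
            if "Draft One" in row.get("action", ""):
                usage[row.get("user", "unknown")] += 1

for officer, count in usage.most_common():
    print(f"{officer}: {count} Draft One-related log entries")
```

Even this trivial count presumes someone has first exported every individual officer’s log by hand, which is exactly the burden described above.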
An auditor could also go report by report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.
But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as “I acknowledge this report was generated from a digital recording using Draft One by Axon.” If so, then an administrator can use “Draft One” as a keyword search to find relevant reports.
Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, the Lafayette Police Department in Indiana, one of those agencies and one of Axon’s most promoted clients, responded:
“Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed.”
Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff’s Office, which does require a disclosure at the bottom of each report that it had been written by AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.
They told us: “We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe.”
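The Palm Beach approach is easy to sketch. Assuming an agency can export report narratives as plain text (formats will vary by RMS), something like this hypothetical Python snippet, with the disclosure wording borrowed from the example above, is all the “audit” amounts to:

```python
# A minimal sketch of the disclosure-based keyword search described above.
# The disclosure string mirrors the example quoted earlier; agencies may
# word theirs differently, and export formats vary.
from pathlib import Path

DISCLOSURE = "generated from a digital recording using draft one"

hits = [
    path for path in Path("exported_reports").glob("*.txt")
    if DISCLOSURE in path.read_text(errors="replace").lower()
]
print(f"{len(hits)} reports contain the Draft One disclosure")
```

Which is to say: whether AI-written reports are findable at all hinges entirely on whether the agency chose to require a text disclosure in the first place.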
We have requested further clarification from Axon, but they have yet to respond.
However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, the company tracks police use of the technology at a level that isn’t available to the police department itself.
In response to a request from Politico’s Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports.
An Axon representative responded: “Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy.”
But then, Axon followed up: “We track which reports use Draft One internally so I exported the data.” Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future.
What is Being Done About Draft One
The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill, and should it become law, any law enforcement usage of Draft One would be unlawful.
Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, that they spend too much time doing paperwork. The current research on whether Draft One remedies this issue shows mixed results, with some agencies reporting no real time savings and others extolling its virtues (although their data also shows that results vary even within a department).
In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It’s like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards.
Given how untested this technology is and how much of a hurry the company is in to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus sidestepping one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.
As one prosecutor’s office put it:
We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now… AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.
We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product.
Conclusion
Police should not be using AI to write police reports. There are just too many unanswered questions about how AI would translate the audio of situations and whether police will actually edit those drafts, while simultaneously, there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems or create new ones in an already unfair and untransparent criminal justice system.
EFF will continue to research and advocate against the use of this technology but for now, the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.
We’ve long written about One America News (OAN), the right wing propaganda mill pretending to be cable news journalism. The “news” outlet, originally funded and proposed by AT&T, traffics in no limit of dangerous conspiracy theories and authoritarian fan fiction, ranging from fake election conspiracies to the false claim that COVID was created in a North Carolina lab.
OAN “reporters” relentlessly kiss Donald Trump’s ass like dutiful stenographers. But OAN’s Gabrielle Cuccia, assigned by the network to pretend to seriously cover the Pentagon, has apparently paid the price for wandering a little too close to the truth.
Her Memorial Day Substack post spends a thousand words or so kissing Trump’s ass before eventually getting around to some light criticism of the Pentagon’s ongoing hostility to journalists, which has included banning reporters from large, non-secure parts of the Pentagon and assigning them goonish handlers:
“This marks a troubling shift in how the Department of Defense engages with the press and, by extension, the American public.
The Pentagon Press Association (although I am not officially part of the association —again hello I am MAGA) has raised valid concerns over the new restrictions on the movement of credentialed journalists within the Pentagon, even in non-secure, unclassified hallways.”
Hegseth, himself a former fake journalist who failed upward into his completely unqualified role as Secretary of Defense, has launched a new harmful assault on real journalism to cover up the fact that his short tenure has been potholed by a steady stream of staffer leaks and historically problematic security fuck ups. Screw ups which Cuccia’s broader post, of course, dutifully downplays.
Unfortunately for Cuccia, being too honest about Hegseth’s assault on the First Amendment was apparently enough to get her booted from the Pentagon and fired from OAN, according to the Associated Press:
“Three days after her Memorial Day Substack post, Cuccia said her Pentagon access badge was revoked. “By Friday,” she said, “I was out of a job.”
The AP frames this as an almost-serious news organization firing a reporter because she expressed a human opinion in her off hours:
“Traditionally, the legacy media does not want its journalists expressing opinions about people they cover, since it calls into doubt their ability to report without bias. But exceptions are often made in cases where media access is at issue, said Tom Rosenstiel, a journalism professor at the University of Maryland.”
That’s of course gibberish. One, because it implies OAN is real journalism. And two, because real outlets that enforce these types of restrictions will usually turn around and publish fifty stories in a row credulously parroting the strange claims of law enforcement, lobbyists, or CEOs without batting an eyelash. Or turn their websites into glorified blogspam affiliates for Amazon.com several times a year.
In this case, Cuccia was clearly fired for not toeing the authoritarian line, which is OAN’s entire purpose. The firing is particularly ironic coming from the same Republican party that insists that any cable or satellite TV company that refuses to carry OAN (it genuinely doesn’t have that many viewers) is engaged in an act of overt, unfair censorship.
When DirecTV refused to carry OAN because the network wasn’t profitable for them, you might recall it was such a crisis that six Republican AGs felt the need to whine publicly about unfair “censorship,” and make vague threats against DirecTV for the crime of… making their own choices. But an OAN employee gets shitcanned for stumbling accidentally into the truth in her free time, and it’s crickets.
Again because Trump Republicans, shockingly enough, don’t actually care about free speech. They care about parroting, protecting, and perpetuating authoritarian bullshit.
The Heritage Foundation describes its Oversight Project in these terms:
The Oversight Project works for government that is responsible and accountable to its citizens. We use Freedom of Information Act requests and other means to make government more transparent to the public and to allow Congress to use its oversight authorities with maximum effectiveness. The requests and analysis of information are informed by Heritage’s deep policy expertise. By its nature, the Oversight Project primarily engages in disseminating information to the public.
The site Forward.com has obtained a presentation put together by the Heritage Foundation as part of that Oversight Project, with the title “Wikipedia Editor Targeting” (pdf). According to the deck, its aim is to:
Identify and target Wikipedia editors abusing their position by analyzing text patterns, usernames, and technical data through data breach analysis, fingerprinting, HUMINT, and technical targeting.
The Heritage Foundation sent the pitch deck outlining the Wikipedia initiative to Jewish foundations and other prospective supporters of Project Esther, its roadmap for fighting antisemitism and anti-Zionism.
Among the doxxing techniques listed in the presentation are “Fingerprinting”:
Text Analysis: Use NLP to identify writing style, repeated phrases, and content patterns.
Cross-Article Comparison: Detect similarities in multiple articles, focusing on propagandist themes.
“Username Analysis and Dataset Correlation”:
Reuse in Breached Data: Search breached datasets for reused names, emails, and online identities.
Cross-Platform Analysis: Identify connections between usernames and other online activities.
Controlled Links: Use redirects to capture IP addresses, browser fingerprints, and device data through a combination of in browser fingerprinting scripts and HTML5 canvas techniques
Technical Data Collection: Track geolocation, ISP, and network details from clicked links
and “Online Human Intelligence (HUMINT)”:
Persona Engagement: Engage curated sock puppet accounts to reveal patterns and provoke reactions, information disclosure
Behavioral Manipulation: Push specific topics to expose more identity related details
Cross-Community Targeting: Interact across platforms to gather intelligence from other sources.
It’s an extremely comprehensive set of techniques that suggests this could be a major program, although there’s no indication yet whether it has managed to doxx anyone or has even begun to operate. Forward.com writes:
A Heritage Foundation spokesperson said she was not able to answer questions about the organization’s work related to Wikipedia, which editors it was seeking to identify or how it sought to “target” them. The Wikimedia Foundation, which provides the infrastructure for Wikipedia, declined to comment.
The methods outlined are potentially a serious threat to the freedom of speech of Wikipedia editors. Doxxing them would clearly open them up to the kind of online attacks that have become all-too common since Elon Musk bought Twitter. It would be quite understandable if doxxed editors stopped working on Wikipedia, for fear of real-world consequences for them and their families.
The Heritage Foundation claims that the aim of its Oversight Project is “to make government more transparent to the public and to allow Congress to use its oversight authorities with maximum effectiveness.” But doxxing and targeting volunteer editors on Wikipedia has nothing to do with government transparency, and just looks like bullying to stifle viewpoints the Heritage Foundation disagrees with. That’s bad enough, but the worry has to be that if editors are successfully doxxed and stop writing as a result, others will adopt the same methodology to chill freedom of speech on Wikipedia more widely.
California taxpayers are now on the hook for $345,576 in legal fees to… Elon Musk. Why? Because Governor Gavin Newsom and Attorney General Rob Bonta ignored warnings about the obvious Constitutional problems with AB 587, their social media “transparency” law. The law, which Google and Meta actually supported (knowing full well that they could comply while competitors would struggle), has now been partially struck down — exactly as we predicted back in 2022.
While positioned as a transparency bill (who could be against that?), the reality is that it would create a huge hassle for smaller companies, hand malicious actors a roadmap for gaming moderation systems, and make it harder for content moderation to work well. And it would effectively enable the California Governor/AG to demand certain types of content moderation.
Look, here’s the thing about content moderation: Companies make editorial decisions all the time about what content to allow, what to remove, what to promote, what to bury. (This is basically their job!) The government generally stays out of these decisions because, well, the First Amendment.
And yet California decided it would be fine to demand that social media companies explain exactly how they make these decisions. Not just in general terms, mind you, but with detailed data about how often they take down posts about “extremism” or “disinformation” or “hate speech.” And also revealing how many people saw that (very loosely defined!) content.
Think about how absurd this would be in any other context. Imagine California passing a law requiring the LA Times to file quarterly reports detailing every story they killed in editorial meetings, with specific statistics about how many articles about “misinformation” they chose not to run. Or demanding the San Francisco Chronicle explain exactly how many letters to the editor about “foreign political interference” they rejected. The First Amendment violation would be so obvious that newspapers’ lawyers would probably hurt themselves rushing to file the lawsuit.
But somehow, when it comes to social media, California convinced itself this was fine. (Narrator: It wasn’t fine.)
Now California has agreed to settle most of the case, conceding two crucial points: the core reporting requirements were unconstitutional, and California taxpayers need to cover Musk’s legal bills. The stipulated agreement makes clear just how thoroughly the state’s position collapsed:
IT IS HEREBY DECLARED that subdivisions (a)(3), (a)(4)(A), and (a)(5) of California Business and Professions Code section 22677 violate the First Amendment of the United States Constitution facially and as applied to Plaintiff.
IT IS HEREBY ORDERED that Defendant, as defined, shall be permanently enjoined from enforcing subdivisions (a)(3), (a)(4)(A), and (a)(5) of California Business and Professions Code section 22677. Defendant shall also be permanently enjoined from enforcing Section 22678 insofar as that section applies to violations of subdivisions (a)(3), (a)(4)(A), and (a)(5) of California Business and Professions Code section 22677.
[….]
It is ORDERED that Plaintiff shall recover from Defendant the amount of $345,576 in full compensation for the attorneys’ fees and costs incurred by Plaintiff in connection with this action and the related preliminary injunction appeal.
The invalidated sections of the law would have required social media companies to define nebulous terms like “hate speech,” “extremism,” and “disinformation,” then provide detailed reports about how they enforced these categories. Companies would have had to reveal not just their moderation practices, but specific data about content flagging, enforcement actions, and user exposure to this content.
Let’s be clear: this outcome was entirely predictable. California’s leadership wasted time and resources pushing through a law that was constitutionally dubious from the start. Now they’re spending taxpayer money to pay legal fees to the world’s wealthiest man — all because they wouldn’t listen to basic First Amendment concerns.
So here’s a modest proposal for Governor Newsom and AG Bonta: next time we warn you about constitutional problems with your tech regulation plans, maybe take those warnings seriously? It’ll save everyone time and money — and bonus, you won’t have to cut checks to Elon Musk.
If you want to write something on the U.S. government’s official DOGE website, apparently you can just… do that. Not in the usual way of submitting comments through a form, mind you, but by directly injecting content into their database. This seems suboptimal.
The story here is that DOGE — Elon Musk’s collection of supposed coding “geniuses” brought in to “disrupt” government inefficiency — finally launched their official website. And what they delivered is a masterclass in how not to build government infrastructure. One possibility is that they’re brilliant disruptors breaking all the rules to make things better. Another possibility is that they have no idea what they’re doing.
The latter seems a lot more likely.
Last week, it was reported that the proud racist 25-year-old Marko Elez had been given admin access and was pushing untested code to the US government’s $6 trillion/year payment system. While the Treasury Department initially claimed (including in court filings!) that Elez had “read-only” access, others reported he had write access. After those reports came out, the Treasury Dept. “corrected” itself and said Elez had been “accidentally” given write privileges for the payments database, but only for the data, not the code. Still, they admitted that while they had put in place some security protections, it’s possible that Elez did copy some private data which “may have occasionally included screenshots of payment systems data or records.”
Yikes?
Now, you might think that having a racist twenty-something with admin access to trillion-dollar payment systems would concern people. But Musk’s defenders had a compelling counterargument: he must be a genius! Because… well, because Musk hired him, and Musk only hires geniuses. Or so we’re told.
The DOGE team’s actual coding prowess is turning out to be quite something. First, they decided that government transparency meant hiding everything from FOIA requests. When questioned about this interesting interpretation of “transparency,” Musk explained that actually DOGE was being super transparent by putting everything on their website and ExTwitter account.
There was just one small problem with this explanation. At the time he said it, the DOGE website looked like this:
That was it. That was the whole website.
On Thursday, they finally launched a real website. Sort of. If by “real website” you mean “a collection of already-public information presented in misleading ways by people who don’t seem to understand what they’re looking at.” But that’s not even the interesting part.
These supposed technical geniuses managed to build what might be the least secure government website in history. Let’s start with something basic: where does the website actually live? According to WIRED, the source code actually tells search engines that ExTwitter, not DOGE.gov, is the real home of this government information:
A WIRED review of the page’s source code shows that the promotion of Musk’s own platform went deeper than replicating the posts on the homepage. The source code shows that the site’s canonical tags direct search engines to x.com rather than DOGE.gov.
A canonical tag is a snippet of code that tells search engines what the authoritative version of a website is. It is typically used by sites with multiple pages as a search engine optimization tactic, to avoid their search ranking being diluted.
In DOGE’s case, however, the code is informing search engines that when people search for content found on DOGE.gov, they should not show those pages in search results, but should instead display the posts on X.
“It is promoting the X account as the main source, with the website secondary,” Declan Chidlow, a web developer, tells WIRED. “This isn’t usually how things are handled, and it indicates that the X account is taking priority over the actual website itself.”
If you’re not a web developer, here’s what that means: When you build a website, you can tell search engines “hey, if you find copies of this content elsewhere, this version here is the real one.” It’s like telling Google “if someone copied my site, mine is the original.”
But DOGE did the opposite. They told search engines “actually, ExTwitter has the real version of this government information. Our government website is just a copy.” Which is… an interesting choice for a federal agency? It’s a bit like the Treasury Department saying “don’t look at our official reports, just check Elon’s tweets.”
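If you want to verify this sort of thing yourself, it takes a few lines of Python using only the standard library. A rough sketch (what doge.gov serves today may, of course, differ from what WIRED saw at the time):

```python
# A standard-library sketch for checking a page's canonical tag, the kind
# of source-code review WIRED describes. Results may differ from the report.
from html.parser import HTMLParser
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

page = urlopen("https://doge.gov/").read().decode("utf-8", errors="replace")
finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # per WIRED, this pointed at x.com, not doge.gov
```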
You might think that a government agency directing people away from its official website and toward the private company of its leader would raise some conflict-of-interest concerns. And you’d be right!
But wait, it gets better. Or worse. Actually, yeah, it’s worse.
Who built this government website? Thanks to some sloppy coding, security researcher Sam Curry figured out that it was DOGE employee Kyle Shutt. The same Kyle Shutt who, according to Drop Site News, has admin access to the FEMA payments system. The same Kyle Shutt who used the exact same Cloudflare ID to build Musk’s America PAC Trump campaign website. Because why maintain separate secure credentials for government systems and political campaigns when you can just… not do that?
But the real cherry on top came Thursday when people discovered something amazing about the DOGE site database: anyone can write to it. Not “anyone with proper credentials.” Not “anyone who passes security checks.” Just… anyone. As 404 Media reported, if you know basic database operations, you too can be a government website administrator:
The doge.gov website that was spun up to track Elon Musk’s cuts to the federal government is insecure and pulls from a database that can be edited by anyone, according to two separate people who found the vulnerability and shared it with 404 Media. One coder added at least two database entries that are visible on the live site and say “this is a joke of a .gov site” and “THESE ‘EXPERTS’ LEFT THEIR DATABASE OPEN -roro.”
While I imagine those will be taken down shortly, for now, the insertions are absolutely visible on the live site.
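To be clear about what “can be edited by anyone” means in practice: if a page’s own JavaScript writes to its database through an API with no server-side authentication, any visitor can send the same request themselves. Here is a purely hypothetical Python sketch; the endpoint and payload shape are invented, since 404 Media didn’t publish the actual API details:

```python
# A purely hypothetical illustration of an unauthenticated database write.
# The endpoint and payload below are invented for illustration only.
import json
from urllib.request import Request, urlopen

payload = {"entry": "this is a joke of a .gov site"}  # one of the real injected messages
request = Request(
    "https://db.example.dev/records",  # hypothetical unauthenticated endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urlopen(request)  # with no auth check server-side, the write simply succeeds
```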
Look, there’s a reason we called this whole thing a cyberattack. When someone takes over your computer systems and leaves them wide open to anyone who wants to mess with them, we usually don’t call that “disruption” or “innovation.” We call it a cybersecurity breach.
“Feels like it was completely slapped together,” they added. “Tons of errors and details leaked in the page source code.”
Both sources said that the way the site is set up suggests that it is not running on government servers.
“Basically, doge.gov has its codebase, probably through GitHub or something,” the other developer who noticed the insecurity said. “They’re deploying the website on Cloudflare Pages from their codebase, and doge.gov is a custom domain that their pages.dev URL is set to. So rather than having a physical server or even something like Amazon Web Services, they’re deploying using Cloudflare Pages which supports custom domains.”
Here’s the thing about government computer systems: They’re under constant attack from foreign adversaries. Yes, they can be inefficient. Yes, they can be bloated. But you know what else they usually are? Not completely exposed to the entire internet. It turns out that some of that inefficient “bureaucracy” involves basic things like “security” and “not letting random people write whatever they want in federal databases.”
This isn’t some startup where “move fast and break things” is a viable strategy. This is the United States government. And it’s been handed over to people whose main qualification appears to be “posts spicy memes on 4chan.” The implications go far beyond embarrassing database injections — this level of technical negligence in federal systems creates genuine national security concerns. When your “disruption” involves ignoring decades of hard-learned lessons about government systems security, you’re not innovating — you’re inviting disaster.
“I think that the strong bias with respect to government information should be to make it available to the public. Let’s be as transparent as possible. Fully transparent.”
When one of his fanboys tweeted that quote, Elon responded by making an even bigger claim, saying: “There should be no need for FOIA requests. All government data should be default public for maximum transparency.”
As big believers (and users) of the FOIA system, that actually sounded good to us, and I would have supported any actual effort to make more government information and documents public by default.
Right after the inauguration, Lauren Harper at the Freedom of the Press Foundation noted that this was an opportunity for Elon to put “his documents where his mouth is, and make DOGE’s records public.” But, she noted, the early indications didn’t look good, including the fact that one of their first orders of business was to shut down the OMB FOIA portal. It’s still down as I type this.
Of course, if Musk was living up to his words that we wouldn’t even need FOIA because he’d just make everything public, well, that would be one explanation.
But that’s not what is actually happening. Just as when he took over Twitter, we’re learning that Musk’s promises and Musk’s reality are wholly different things. When he promises to make things better for “the people,” he always means “make things better for Elon.”
He said those things two days before he was elected alongside Donald Trump to (apparently) rip out every bit of accountability from the government of the United States of America. Now that he has near total control over the systems that make the US work, he apparently wants them to be pretty damn secret.
We first heard about this last week when the always excellent 404 Media reported that the DOGE boys were told to stop using Slack, because someone realized the conversations were accessible by FOIA.
Employees working for the agency now known as DOGE have been ordered to stop using Slack while government lawyers attempt to transition the agency to one that is not subject to the Freedom of Information Act, 404 Media has learned.
“Good morning, everyone! As a reminder, please refrain from using Slack at the moment while our various general counsels figure out the best way to handle the records migration to our new EOP [Executive Office of the President] component,” a message seen by 404 Media reads. “Will update as soon as we have more information!”
Sounds like someone’s got something to hide, huh?
Given that not one, not two, but three of the DOGE boys have been outed as having terrible fucking judgment (either blatantly racist tweets or being involved with a fucked up cybercrime group built around Discord and Telegram chat channels) you have to imagine that some shit is going on in those Slack chats.
And thus, it was announced late last week that DOGE has been reorganized outside of OMB (subject to FOIA) and now under the Executive Office of the President, which is subject to the Presidential Records Act instead, allowing such records to be hidden for at least a decade.
The White House has designated Mr. Musk’s office, United States DOGE Service, as an entity insulated from public records requests or most judicial intervention until at least 2034, by declaring the documents it produces and receives presidential records.
And that, of course, is only if the Trump admin abides by the PRA, something Trump was famous for ignoring in his first administration, including when he took classified documents with him to Mar-A-Lago when he left office.
So, again, what is Elon hiding? After all, when he said everything should be public, he said the only exceptions should be things like “how to make a nuclear bomb.”
Seems like an admission that he’s doing some crazy shit.
Which is actually a problem if he’s claiming to be protected by the Presidential Records Act. After all, the reason there is secrecy like that under the PRA is because it’s supposed to cover advice to the President. The fear was if that advice would become public too quickly, advisors wouldn’t be able to be honest with the President. But the reason most of the rest of the executive branch is subject to FOIA is because they’re actually doing stuff, not just advising. And that information is required, under law, to be public.
I recognize, again, that the Trump administration sees laws only as things they get to use to punish those they hate, rather than anything that binds them, but I’m guessing that lawsuits are about to be filed (if they haven’t been already) challenging this designation.
So, maybe we’ll actually find out what kinds of messages Elon is trading with the guy who calls himself “Big Balls” and the guy who claimed he “was racist before it was cool.”
But only after a court gets involved. So much for “maximum transparency.”
Musk’s version of government efficiency appears to mean efficiently hiding what he and his crew are doing inside our government.
One of the many new executive orders signed by President Donald Trump on Monday was the long-hyped creation of the Department of Government Efficiency (DOGE). DOGE is portrayed as a sort of government efficiency and innovation office, but it’s primarily flimsy cover for the extraction class as they eliminate corporate oversight, consumer protection, labor rights, and the social safety net.
The program was supposed to be spearheaded by two of the country’s biggest bloviating weirdos, Elon Musk and Vivek Ramaswamy. Ramaswamy is already leaving the agency because he purportedly wants to take a shot at becoming the Governor of Ohio (though other reports suggest he somehow managed to annoy most of the people at a fake government agency already filled with annoying people).
DOGE has other issues already as well. While it’s not a real government agency, it does appear to qualify as a federal advisory committee under the Federal Advisory Committee Act (FACA). And such committees do have documentation, transparency, and other rules they have to follow, including producing meeting minutes, filing a charter with Congress, having “fairly balanced” ideological representation, and maintaining some semblance of public open access.
Not surprisingly, Musk’s fake government efficiency agency has allegedly done none of those things, resulting in several new lawsuits that may or may not result in any reform of note.
One of the lawsuits was filed by the American Public Health Association, the American Federation of Teachers, Minority Veterans of America, VoteVets Action Fund, Center for Auto Safety, and CREW. It calls DOGE a “shadow operation led by unelected billionaires who stand to reap huge financial rewards from this influence and access.”
“Plaintiffs and those they represent believe that the government should work for the American people and be transparent, efficient, and effective – and that the government can and should do better,” the complaint states.
Another lawsuit, filed by Public Citizen in conjunction with the American Federation of Government Employees, also alleges the fake government agency is playing fast and loose with government rules.
Yet another lawsuit, filed by National Security Counselors, also points out how the setup of DOGE seems wholly disconnected from how the government is supposed to work.
It’s clear DOGE supporters (including lots of corporate backed deregulatory “innovation” think tanks) want to have their cake and eat it too. They want DOGE to be respected as a serious thing, while simultaneously having to do none of the serious things adults have to do to be taken seriously in the world of government policy:
“Sam Hammond, senior economist at the Foundation for American Innovation, who has been supportive of DOGE’s efforts, said the initiative will primarily implement ideas within the executive branch and White House, which he said would exempt it from FACA requirements. If Trump does treat DOGE as a FACA, then it should follow the required reporting rules. But for now, he said, “DOGE isn’t a federal advisory committee because DOGE doesn’t really exist. DOGE is a branding exercise, a shorthand for Trump’s government reform efforts.”
When DOGE was announced, the press went out of its way to frame it as a very serious thing. Of course it’s mostly a vehicle for access (read: corruption). And a way to put a lazy shine on what will be a brutal and very harmful dismantling of federal consumer protection, labor rights, environmental law, and social safety programs, which will result in very real suffering at unprecedented scale.
Musk himself admits this suffering is coming, but hopes he can bedazzle a lazy press with enough bullshit that they soft-sell and downplay the broad, looming harms to the American public. Meanwhile, fake government official Musk is already walking back claims that his fake government efficiency agency would drive some two trillion dollars in overall government savings.
You’re supposed to ignore the fact that this is because the stuff usually most in need of cutting — fat and purposeless corporate subsidies (see: the Starlink kerfuffle) and the bottomless well of military and intelligence overbilling — is precisely the sort of stuff billionaire extraction class parasites enjoy glomming onto. The stuff deemed “inefficient” is the stuff that doesn’t benefit them personally.
Ohio residents pay for the cops. They pay for the cameras. Now, they’re expected to pay for the footage generated by cops and their cameras. Governor Mike DeWine, serving no one but cops and their desire for opacity, recently signed a bill into law that will make it much more expensive for residents to exercise their public records rights.
Ohio Gov. Mike DeWine has signed a controversial bill into law that could charge the public hundreds of dollars for footage from law enforcement agencies, including body cameras.
[…]
Around 2 a.m. during the 17-hour marathon lame duck session, lawmakers passed H.B. 315, a massive, roughly 450-page omnibus bill.
In it was a provision that could cost people money to get access to video from police and jails. Law enforcement could charge people for the “estimated cost” of processing the video — and you would have to pay before the footage is released. Governments could charge up to $75 an hour for work, with a fee cap of $750 per request.
[…]
The policy was not public, nor had a hearing, prior to being snuck into the legislation.
That’s pretty ugly. It’s also a clear indication those pushing this measure knew the public wouldn’t like it, hence the last-minute subterfuge tied to an apparently must-pass bill shoved through the legislature before its Christmas recess.
Reporter Morgan Trau had questions following the passage of this measure. Gov. DeWine had answers. But they’re completely unsatisfactory.
“These requests certainly should be honored, and we want them to be honored. We want them to be honored in a swift way that’s very, very important,” DeWine responded. “We also, though — if you have, for example, a small police department — very small police department — and they get a request like that, that could take one person a significant period of time.”
Sure, that’s part of the equation. Someone has to take time to review information requested via a public records request. But that’s part of the government’s job. It’s not an excuse to charge a premium just to fulfill the government’s obligations to the public.
DeWine had more of the same in his official statement on this line item — a statement he was presumably compelled to issue due to many people having these exact same questions about charging people a third time for something they’d already paid for twice.
No law enforcement agency should ever have to choose between diverting resources for officers on the street to move them to administrative tasks like lengthy video redaction reviews for which agencies receive no compensation–and this is especially so for when the requestor of the video is a private company seeking to make money off of these videos. The language in House Bill 315 is a workable compromise to balance the modern realities of preparing these public records and the cost it takes to prepare them.
Well, the biggest problem with this assertion is that no law enforcement agency ever has to choose between reviewing footage for release and keeping an eye on the streets. I realize some smaller agencies may not have a person dedicated to public records responses, but for the most part, I would prefer someone other than Officer Johnny Trafficstop handle public records releases. First, they’re not specifically trained to handle this job. Second, doing this makes it a fox-in-the-hen-house situation, where officers might be handling information involving themselves, which is a clear conflict of interest.
Mike Weinman with the Fraternal Order of Police said this new law would help smaller municipalities that already struggle with staffing.
“Whoever is in charge of their public records, that person might be pulled off the road to do these things,” Weinman said. “So that means there’s a person who’s not responding to calls, who’s not out there being proactive in the community.”
To be fair, this stupidity comes from a cop union rep, but these reps are almost always current or former cops. And it’s the same argument: without charging $75/hour, smaller agencies might have to pull officers off patrol to process video for records requests. It’s just as stupid as Gov. DeWine’s assertions, and just as (willfully) ignorant of the reality.
Again, no cop should be handling records requests, because of the conflict of interest, much less the lack of specific skills. Beyond that, the state could have increased funding for public records handling just as easily as it decided everyone should have to pay more to exercise their First Amendment right to access information. But legislators (and the cops who back them) don’t want more accountability or transparency. They want to erect barriers that limit their exposure. So, the end result is a law that allows law enforcement agencies to “recoup” the costs of processing, even if the cost of processing is actually much lower or already covered by their existing budgets.
This argument isn’t much better:
Marion Police Chief Jay McDonald, also the president of the Ohio FOP, showed me that he receives requests from people asking for drunk and disorderly conduct videos. Oftentimes, these people monetize the records on YouTube, he added.
Moving past the conflict of interest that is a police chief also being the head of a police union, the specific problem with this argument is that it suggests it’s OK to financially punish everyone just because a small minority of requesters are abusing the system for personal financial gain. Again, while it sounds like a plausible argument for charging processing fees, the real benefit isn’t in deterring YouTube opportunists, but in placing a tax on transparency that most legitimate requesters simply won’t be able to pay. And that’s the obvious goal here. If it wasn’t, this proposal would have gone up for discussion, rather than being tacked onto the end of a roughly 450-page omnibus bill at the last minute. This is nothing but what it looks like: people in the legislature doing a favor for cops… and screwing over their own constituents.
Among the key reasons Elon Musk insisted he had to buy Twitter were (1) that it was too political in how it was managed and how content moderation was done, (2) the company was not as transparent as it should be, and (3) it was too quick to censor.
Since taking over, Elon has been worse on all three of those things. He’s turned the site into a one-sided MAGA campaign platform, he’s been significantly less transparent than the old regime, and he’s been much faster to cave to government demands.
I guess, as a silver lining, he’s at least trying to be a bit more transparent than he has been (though it’s still way less than what old Twitter offered). Indeed, we can only confirm how much more willing to censor he is because he finally released a transparency report. Twitter had been among the first internet companies to regularly release transparency reports, talking about content moderation, copyright takedown demands, and (of course) government demands for both information and content/account removals. Every six months, like clockwork, Twitter would publish detailed, thorough transparency reports.
Indeed, old Twitter was so committed to transparency on those things that it fought the US government in court for the right to publish more details of the demands it received from the government, after the rest of the big internet companies caved.
The last of those six-month transparency reports was published in July 2022, covering the last six months of 2021. And then, until this week, silence. It took two years, but ExTwitter finally got its act together to publish a transparency report for the first half of 2024 and… it shows that for all of Elon’s bluster about standing up for free speech, he’s way, way, way more willing to pull down content when governments demand removals than the old regime was.
The site acted on 71 percent of the legal requests it received to remove content in the first half of this year, up 20 percent from the last time it reported the figure in 2021 and more than double the rate in preceding years.
I mean, we’ve pointed this out multiple times in the past two years. Elon keeps changing his definition of free speech. Sometimes he claims it’s following the laws of each country.
That definition allows him to justify removing content as soon as governments request it. And boy does he ever seem willing to remove content when governments he likes request removals. These tend to come from right-wing authoritarian regimes in places like Turkey, where the new report reveals they removed 68% of requested content.
But, then, of course, when there are countries that are more left-leaning, like Brazil or Australia, he’ll make a big show of how he’s “standing up for free speech” in fighting them. As I’ve said, in both those cases, I think it was good that he was willing to stand up to over-aggressive government demands. But it’s hard to see it as any strong commitment to free speech when he’s so quick to comply elsewhere. Indeed, he’s already backed down in Brazil, to much less fanfare.
Separately and importantly, Elon has been way more willing to hand over user data to governments upon request. This was another thing that old Twitter was aggressive in fighting back against, but Elon seems quite willing to roll over on.
X has also complied more frequently this year with government requests for users’ personal data than in the years immediately before Musk’s takeover, at 53 percent, according to the report. X received the most such requests in the U.S. and complied with 76 percent of them.
For all of Elon’s misleading talk about how the “old” Twitter was really an extension of the FBI, it seems notable that (1) old Twitter sued to reveal details of DOJ requests and (2) Elon’s way more willing to comply with them.
So, hey, it’s great that Elon is finally releasing (much simpler, less detailed) transparency reports (though we’ll see if they actually keep coming). But, it also underscores just how much Elon has done the opposite of what he’s promised. He’s made ExTwitter way more political in its moderation and focus, he’s made the site way less transparent, and he’s way more willing to cave to governments in takedown demands and requests for user info.