One of the reasons that copyright is so unbalanced in favor of companies, especially Big Content, is that the process of bringing in new copyright laws is hard for ordinary members of the public to engage with. Typically, new laws come about after government consultations. Although these are public in the sense that they are not secret, and anyone can take part, their questions and format are at best intimidating, and at worst incomprehensible for ordinary people.
As a result, digital rights organizations often try to help members of the public respond to a consultation by preparing explanations of what the questions mean, as well as sample answers that people can use as models when they respond. The problem with this approach is that many of the responses end up looking very similar, which leads to claims that they are “spam”, or to their being counted as a single response, disregarding the actual number of citizens who took the time to respond. This allows unscrupulous politicians to dismiss even massive responses from members of the public as being “fake”. As I discuss in Walled Culture the book, this is precisely what happened with the EU Copyright Directive, and it was one of the reasons such a bad law was rammed through despite public opposition.
However, help may be at hand. Back in October, Walled Culture wrote about how generative AI programs were producing images from text prompts. In the last few weeks, many people have been exploring the fascinating capabilities of a new generative AI system called ChatGPT from OpenAI:
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
There are lots of impressive and entertaining examples online of what people have asked ChatGPT to do, and how it responded. It’s worth emphasizing that however convincing the result may look, there is no guarantee that what it says is correct – ChatGPT doesn’t understand what it produces, which means it can happily produce utter nonsense.
That said, it is very good at turning prompts and rough ideas consisting of just a few words and phrases into polished prose. This means that it will be a boon for people who know what they want to say but find it difficult to write fluently.
In particular, it will be great for responding to copyright consultations. For example, ChatGPT and similar systems could explain what a particularly abstruse question might mean. They could suggest a range of answers as starting points for a person’s response. To fine-tune the output, such a system could be prompted to include personal elements that ensure the response differs from others generated with the same AI system, heading off charges that it is fake.
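As a rough sketch of how that personalization could work in practice, here is a short Python function that folds a respondent’s own details into a prompt before it is handed to a system like ChatGPT. Everything here (the function name, the prompt wording, the example question and details) is hypothetical illustration, not any official API or real consultation text:

```python
def build_consultation_prompt(question, stance, personal_details):
    """Assemble a prompt for a generative AI system.

    The idea is to weave the respondent's own circumstances into the
    prompt, so that each person's AI-assisted draft comes out different
    from everyone else's, even when they start from the same tool.
    """
    details = "; ".join(personal_details)
    return (
        "Explain the following copyright-consultation question in plain "
        f"language, then draft a response arguing that {stance}. "
        f"Include these personal details so the answer is unique: {details}.\n\n"
        f"Question: {question}"
    )

# Hypothetical example of a citizen preparing a response
prompt = build_consultation_prompt(
    question="Should proportionality safeguards for upload filters be strengthened?",
    stance="automated filters harm ordinary users",
    personal_details=[
        "I am a hobbyist musician",
        "my own remixes have been wrongly blocked",
    ],
)
print(prompt)
```

The resulting prompt could then be submitted to whatever conversational AI the person prefers; the personal details baked in at this stage are what would distinguish one citizen’s generated draft from another’s.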
It’s not a perfect solution to the problem of consultations – ideally, they would be conducted in a way that was designed for everyone, not just copyright specialists. But it might help people to get around at least some of the most glaring issues with today’s approach, and broaden participation in these important initiatives.
Free speech “heroes” can freely curb your speech. The government, however, may not. So, if you’re a government account operating on social media services, when you fuck around, you find out. This decision [PDF] — targeting St. Louis lawmakers — reminds everyone of these uncomfortable facts. (h/t Courthouse News Service)
Social media platforms are public squares… at least as far as public servants are concerned. You may not like what your constituents have to say, but you’re not allowed to silence them. That’s what a Missouri federal court has declared, following an absurd amount of precedent that should have made it clear to the city of St. Louis (as personified by Lewis Reed, the president of the city’s Board of Aldermen) that blocking a resident’s Twitter account from interacting with the city’s official account was unconstitutional.
As the order notes, the jury trial over the constitutional issues got off to a somewhat strange start… at least in terms of a civil lawsuit.
Reed appeared at trial with counsel and, when called to testify, invoked the Fifth Amendment.
To be sure, invoking the Fifth isn’t an admission of guilt. But considering the only thing at stake was a court-ordered unblocking of St. Louis resident Sarah Felts’ Twitter account, this move does seem a little strange. Given this turn of events, the court reached a compromise: Felts could submit a list of questions for the now-former Board of Aldermen president (Reed retired two years after this lawsuit was filed), to be answered after the trial concluded.
Everything at issue here went down fairly innocuously. And by that I mean it was rookie night on Doomscroll.com, where people said things and other people reacted terribly by not understanding how swiftly antagonistic flotsam is swept away by the tyranny of auto refresh. Read on and be amused by the give-and-take that ultimately became the equivalent of a palace coup by the Board of Aldermen president.
In March 2009, Reed created a public Twitter account (the “Account”) to “put out information for people to … let them know what I’m up to.” At times, Reed changed the Account’s handle to indicate his candidacy for office, but between March 2009 and June of 2020, the most frequently used handle was @PresReed.
On his Twitter page, Reed described himself as “Father of 4 great kids, husband, public servant, life long democrat, proud St. Louis City resident, President of the Board of Aldermen.”
Any member of the public could view Reed’s posts and either “like,” reply, or “retweet” his posts.
On January 26, 2019, a Twitter account with the handle @ActionSTL tweeted: “Reeds asked to clarify his position on @CLOSEWorkhouse. He says we need to rework out [sic] court system. Eventually says yes, he does support the demand to close the workhouse but we need to change the messaging around it.” Action St. Louis, a local, black-led advocacy organization, operates the @ActionSTL Account.
Plaintiff responded to Action St. Louis’ tweet stating: “What do you mean by ‘change the messaging around #CloseTheWorkhouse,’ @PresReed? #STLBOA #aldergeddon2019 #WokeVoterSTL.” The issue of closing the St. Louis Workhouse, a medium security institution and one of two jails in the City, was a subject of political debate in January 2019. Plaintiff was among those advocating for the Board of Aldermen to take action to close the Workhouse, as was Action St. Louis.
Plaintiff believed Reed’s statement, as reported by Action St. Louis, that “we need to change the messaging around closing the Workhouse” was an attempt to avoid dealing with the underlying issue. Plaintiff sent her tweet to ask Reed what he meant by “change the messaging” and signal to other Twitter users that they could reach Reed via Twitter.
Later in the evening of January 26, 2019, Plaintiff attempted to access Reed’s Twitter profile page and learned she had been blocked by Reed, meaning she could no longer view his tweets, or otherwise interact with his Account.
According to Reed, the board president blocked the plaintiff because he believed Felts’ question (and her instructions to contact Reed via Twitter) somehow “implied violence” against him and the Board of Aldermen. No evidence was presented that any threats — violent or otherwise — followed this interaction.
On top of that, the court notes that Reed intertwined his Twitter account with official business in 2019. The city’s website was altered to include a link to Reed’s Twitter account. This was followed by an embed of his Twitter feed. This feed remained live on the city’s website until Reed was sued by Sarah Felts, at which point it was removed, presumably by a city IT employee. Felts’ Twitter account remained blocked until after she filed the lawsuit in early 2021.
So, Reed made it clear his Twitter account was also the Board president’s account. And the victim of his careless blocking wasn’t freed from this incursion on her First Amendment rights until after she engaged in litigation. Given this series of events, it’s unsurprising (former) Board president Reed would invoke the Fifth when testifying in front of a jury of the people he was supposed to be serving.
The opinion recounts several times Reed’s Twitter account was used to engage in city business, citing several statements related to legislation, city policy changes, and Reed’s meetings with other local and federal politicians.
All of this indicates the account run by Reed was engaged in government business and used by Reed in his position as the president of the city’s Board of Aldermen. So, there’s really no question his blocking of Sarah Felts violated her rights.
At all relevant times, Reed was the final decisionmaker for communications, including the use of social media, for the Office of the President of the Board of Aldermen. At or near the time Plaintiff was initially blocked, Reed’s public Twitter account had evolved into a tool of governance. In any event, by the time the Account was embedded into the City’s website in April 2019, while Plaintiff remained blocked, the Account was being operated by Reed under color of law as an official governmental account. The continued blocking of Plaintiff based on the content of her tweet is impermissible viewpoint discrimination in violation of the First Amendment. Thus, Plaintiff is entitled to judgment in her favor on her remaining claim for declaratory relief.
That is how the First Amendment actually works. The government can’t block your Twitter account simply because it doesn’t like what you’re saying. That happened here. And, while the lawsuit concludes with only a $1.00 award in nominal damages, it does make things better for St. Louis residents, as well as those experiencing the same sort of government bullshit elsewhere in this federal circuit. It’s another ruling that clearly states government officials can’t engage in unwarranted blocking of people officials would rather not hear from. Elected officials represent and serve everyone in their jurisdictions. They can’t constitutionally pick and choose who they want to engage with.
The last few days on Twitter have been, well, chaotic, I guess? Beyond the blocking of the ElonJet account, followed by the blocking of the @JoinMastodon account, then the blocking of journalists asking about all this and the silly made-up defense of it, over the weekend, Twitter announced a new policy banning linking to or even displaying usernames on a whole host of other social media platforms:
The new “promotion of alternative social platforms policy,” which was quite obviously hastily crafted, said that “Twitter will no longer allow free promotion of specific social media platforms on Twitter.” It said that “at both the Tweet level and the account level, we will remove any free promotion of prohibited 3rd-party social media platforms, such as linking out … to any of the below platforms on Twitter, or providing your handle without a URL.”
The “prohibited platforms” list had some odd inclusions, and even odder exclusions:
Facebook, Instagram, Mastodon, Truth Social, Tribel, Post and Nostr
3rd-party social media link aggregators such as linktr.ee, lnk.bio
This is… desperate? Silly?
But it also raised questions. Where was TikTok? Or YouTube? Or Gab? Or Parler? Or a bunch of other small new wannabes? You could say they’re too small, but then again, he included Nostr, a social media protocol that is brand new and has basically zero features. I have personally been playing with it, but I think only about 500 people are currently using it. Maybe. Probably fewer.
Of course, as usual, Musk’s biggest fans immediately started crafting silly breathless defenses of how this was totally consistent with Musk’s claims of bringing his “free speech absolutism” to the platform. Most of these defenses were pathetic. Perhaps none more so than his mother’s.
That’s Elon’s mom saying that his new proposal “makes absolute sense” because “when I give a talk for a corporation, I don’t promote other corporations. If I did, I would be fired on the spot and never booked again? Is that hard to understand?”
I mean, that is not hard to understand, but it’s also not an accurate description of the scenario. The people using Twitter are not paid to give talks “for Twitter.” And, if that were the standard, then, um, that wouldn’t just justify Twitter’s old practices of banning accounts for lots of things that any company would fire you for saying during a “company talk,” but actually make you wonder why Twitter didn’t ban a hell of a lot more people.
But, of course, that’s not the standard. Or the scenario.
And then, of course, a few hours later, Musk (facing pretty loud criticism of this latest policy change) appeared to do an about-face, though you’d have to be following him closely to actually realize it. First he defended it, saying “Twitter should be easy to use, but no more relentless free advertising of competitors. No traditional publisher allows this and neither will Twitter.”
Except that’s also not true. First of all, every other social media platform absolutely allows accounts to link to alternative social media. Second, even “traditional publishers” frequently will link to accounts on alternative social media and they will also (not always, but increasingly) acknowledge competing media providers.
Then he made it more vague saying “casually sharing occasional links is fine, but no more relentless advertising of competitors for free, which is absurd in the extreme.”
Which is not a reasonable policy, because how does anyone know when they’ve crossed that line? Either way, as anyone who works in this space knows, if you have a vague policy like “casually sharing occasional links is fine” while the written policy says no links, you’re going to end up in ridiculous situations, such as when famed startup investor/Musk fan/pontificator Paul Graham pointed out that the policy was so dumb he was leaving for Mastodon… and promptly got banned, leading Musk to promise to have the account restored.
Eventually, in a reply to an account known for posting nonsense conspiracy theories, Musk said that the “policy will be adjusted to suspending accounts only when that account’s *primary* purpose is promotion of competitors, which essentially falls under the no spam rule.”
After that, he posted a poll asking whether he should step down as CEO of Twitter. He lost, 57.5% to 42.5% (though as I’m writing, he’s not said anything further on the results, but I fully expect that he’s going to shove someone else into the role while still owning and controlling the company).
The TwitterSafety account also ran a poll asking “should we have a policy preventing the creation of or use of existing accounts for the main purpose of advertising other social media platforms”, and while the poll still has a few hours left as I write this, it seems people are almost universally against it:
So, despite Elon arguing that not having such a policy is “absurd in the extreme” and his mother insisting that such a policy “makes absolute sense,” the “vox populi” on Twitter disagrees.
Why is he doing all this? What is going on?
It seems that I have a bit of experience understanding how new social media CEOs who come in on a wave of “bringing free speech back!” promises end up running the social media content moderation learning curve. Thus, I thought it might be useful to explain the basic thought process one normally goes through here, and that likely created each of these results. It’s basically the same as how Parler’s then-CEO John Matze went from “our content is moderated based off the FCC and the Supreme Court” to “posting pictures of your fecal matter in the comment section WILL NOT BE TOLERATED” in a matter of days.
Basically, it’s exactly what I wrote in my speed run article. These naive social media CEOs come in, thinking that the thing “missing” from social media is “free speech.” But they’re wrong. Even if you strongly believe in “free speech” (as I do), that doesn’t mean you want to allow crazy assholes screaming insults at guests in your house. You ask those people to leave, so that your guests can feel welcome. That doesn’t mean you’re against free speech, you’re just saying “go be a crazy asshole somewhere else.”
Every “free speech” CEO eventually realizes this in some form or another. In Musk’s somewhat selfish view of the world, he only seems to notice the concerns when it comes to himself. While he’s had no problem encouraging brigading and harassing of those he dislikes, when a random crazy person showed up near a car with his child in it, he insisted (falsely, as we now know) that it was an account on his website that put him in danger, and banned it.
But, of course, reporters are going to report on it, and in that frenzied state of “this is bad, must be stopped,” he immediately jumped to “well, anyone talking about that account must also be bad, and obviously should also be stopped.”
The “links to other social media” freakout was likely related to all of this as well. First people were linking to the ElonJet account on other social media (which Musk referred to — incorrectly — as “ban evasion”) and so he saw social media as a sneaky tool for getting around his paradise view of how Twitter should work. Also, while there’s no confirmation on this point from Twitter’s numbers, it sure feels like these other social media sites are getting a nice inflow of users giving up on (or at least decreasing their usage of) Twitter.
The biggest beneficiary (by far) seems to be Mastodon, so Musk could view this as a “kill two birds with one stone” move: trying to blunt Mastodon’s growth while also (in his mind) stopping people from visiting the “dangerous” ElonJet account on Mastodon. Except, of course, the opposite of that occurred, and he created a sort of Streisand Effect bump for Mastodon users:
See those bumps in new signups? Those are Elon bumps. Each time he does something crazy, more people sign up.
So, based on that, Elon quickly started banning reporters who he disliked and who were asking what he saw as sketchy questions, and then tried to retcon policies to justify those bans. First it was the nonsense about “assassination coordinates” and then it became about links to social media. Reporter Taylor Lorenz got accused of both. Elon first claimed that her account was suspended for doxing someone “previously” in her reporting (which is something Lorenz-haters have falsely insisted she did). But Twitter directly told Lorenz she was banned for a tweet showing her accounts on other sites:
This is how tyrants rule when they want to pretend they’re ruling by principles. Punish those who oppose you, and then retcon in some kind of policy later, which you insist is an “obviously” good policy, to justify the bans.
Of course, in the old days, when Twitter had a thoughtful trust & safety team, at least they’d make some effort to game out new policies. They’d discuss how those policies might lead to bad outcomes, or how they might be confusing, or how they might be abused. But Elon and friends have no time for that. They need to ban people who upset him, and come up with the policies to justify it later.
That’s how you end up with the stupidly broad “no doxing” policy and the even dumber “no other social media” policy — and only then do they discover the problems of the policies, and try to adjust them on the fly.
There are two other facts here worth noting, and both apply to a very typical pattern found in authoritarians taking over governments while preaching about how they’re “bringing freedom back.”
First, they often will lie about the oppression that they claim happened under the last regime. That’s absolutely been the case here. As the Twitter files actually showed, Twitter’s former regime was not a bunch of “woke radicals censoring conservatives.” They were a thoughtful group of people doing an impossible task with not nearly enough resources, time, or information. As such, sometimes they made mistakes. But on the whole they were trying to create reasonable policies. This is why all evidence, across multiple studies, showed that Twitter actually bent over backwards to not be biased against conservatives, but Trumpists still insisted it was “obvious” that they were moderating based on bias.
The usefulness for the people now in charge, though, is that they feel they have free rein to do what they (falsely) insisted the previous regime was doing. You see it among many Musk fans now (including some high profile ones who should know better *cough* Marc Andreessen *cough*), who are mocking anyone pointing out the nonsense justifications and hypocrisy of Musk’s new policies, which clearly violate his old stated plans for the site. The people justifying this say, mockingly, “oooooooh, look who’s suddenly supportive of free speech.” The more vile version of this is “oh, well how does it feel now that you’re on the other end?” The more direct version is just “well, you did it to us.”
Except all of that is bullshit. Because people talking about it aren’t screaming about “free speech,” so much as pointing out how Musk is going back on his word. A thoughtful commentator might realize that maybe there were good reasons for older decisions, and it wasn’t just “woke suppression of free speech.” But, instead, they justify their new actions as payback for the cruelty they falsely attribute to the previous regime.
Second, this is pretty common with “revolutionaries” promising freedom. When they discover that freedom also allows people to oppose the new leader, those “disloyal” to the new regime need to be put down and silenced. In their minds, they justify it, because the ends (“eventual freedom”) justify the means of getting there. So, yes, the king must kill the protestors, but it’s only because those protestors might ruin this finely planned journey to more freedom.
So, in the mind of the despot who wants to believe they’re bringing a “better world of freedom” to the public, it’s okay to deny that freedom to the agitators and troublemakers, because they’re the ones “standing in the way” of freedom to the wider populace.
It seems like some of both of those factors are showing up here.
Maybe if enough cases pile into the federal court system, the Supreme Court might decide to actually establish a First Amendment right to record public officials as they engage in their public duties. Until then, we’re stuck with a patchwork of precedent that recognizes this right only in certain parts of the nation.
Fortunately for the plaintiff, independent journalist Justin Pulliam, the Fifth Circuit established a right to record law enforcement officers back in 2017. Given the events being sued over occurred in 2021, the officers cannot hope to plausibly claim this right wasn’t “clearly” established.
Pulliam’s lawsuit [PDF], filed with the assistance of the Institute for Justice, alleges a Fort Bend (Texas) Sheriff’s deputy illegally arrested him for recording officers responding to a mental health call. But his history with the Fort Bend County Sheriff doesn’t begin with the arrest in December 2021.
On July 12, 2021, Pulliam arrived at the scene of a reported submerged vehicle and began recording. Soon after his arrival, he and other press members were asked to leave the park and wait for law enforcement to address them later. The Sheriff and another FBCSO officer, Dalia Simons, then went after Pulliam, pushing him away from the scene while allowing other press members to inch closer to the site of the crash.
Pulliam complied. Despite his compliance, the Sheriff ordered him to be removed from the press gathering that was awaiting the Sheriff’s Office press representative.
Nonetheless, as Justin approached, the Sheriff gestured toward him and appeared to be giving Hartfield instructions. The Sheriff appeared to say: “[If he] don’t do it, arrest him. Cause he’s not part of the local media, so [he has to] go back.”
Having unilaterally decided Pulliam wasn’t part of the regular press, the Sheriff sent deputies to move him away from the pending press conference. From his new vantage point — approximately 10 parking spaces away from the rest of the press — Pulliam was unable to hear or obtain a quality recording of the Sheriff’s statements.
Because of this, Pulliam began attending fewer Sheriff’s Office press conferences, reasonably fearing deputies would again force him to record from further away or prevent him from attending at all.
That led to the December 2021 incident where Pulliam was arrested for recording.
However, just before Christmas—on December 21, 2021—Justin saw an FBCSO vehicle (later discovered to be driven by FBCSO officer Ricky Rodriguez) pass at a high rate of speed. The vehicle ultimately began heading toward a remote area of Fort Bend County, and Justin knew that one of the only properties that direction was tied to a mentally ill man whose case Justin had followed for some time. Justin thus suspected that officers were heading to the property for a mental-health call on the man.
Justin had recorded previous FBCSO interactions with the mentally ill man and believed officers had a history of unnecessarily escalating their responses to him.
Pulliam began recording the officers from nearly 130 feet away, standing near the mentally ill man’s mother, who had given him permission to film the incident. This overhead photo (from the lawsuit) shows how far away from the deputies Pulliam was:
That apparently wasn’t far enough.
While Justin was filming the trailer, defendant FBCSO officer Taylor Rollins approached him from behind—the opposite direction from the house—and ordered him and the mental-health advocates near him to move across the street. Rollins did not appear to order the mentally ill man’s mother to leave.
Pulliam began moving towards the area he had been directed to go. He exchanged a few words with the deputy about his concerns the officers would hurt the man. Shortly after that, things escalated.
In the middle of his conversation, Rollins stopped and ordered Justin to leave. Justin responded that he had a right to remain there as long as the other bystanders were there, too.
Rollins continued to insist that Justin leave and began counting down on his fingers while moving toward Justin. Justin continuously moved back, away from the trailer and away from Rollins, as Rollins approached.
Rollins then arrested Justin.
At the time of his arrest, Justin was approximately 170 feet away from the trailer.
The other bystanders were still standing where Justin had been filming when Rollins first approached him. FBCSO officers did not arrest the other bystanders based on their proximity to the scene. While the bystanders later moved locations, to the front of the gas station’s main building on the property, officers did not force them to move across the street until approximately an hour or more later.
After the arrest, Deputy Rollins seized all of Pulliam’s recording equipment and his iPhone. Some of this property has been returned. The notable exceptions are the memory cards from Pulliam’s cameras and iPhone, which contain recordings of the mental health call along with his interactions with (and arrest by) Deputy Rollins.
Pulliam was booked, strip-searched, and obliquely threatened by deputies.
After Justin arrived at the jail, Rodriguez and another unidentified officer discussed Justin’s arrest in front of him. The unidentified officer, upon learning that Justin was an investigative journalist, replied in substance that Rodriguez should “teach [Justin] for fucking with us.”
Pulliam refused to speak to the Sheriff without a lawyer present. He spent several hours in jail before he was bailed out.
More bullshit followed. He was booked on one count of Interference with Public Duties, a Class B misdemeanor. Despite it only being a misdemeanor, prosecutors insisted on presenting this charge to a grand jury to secure an indictment. The indictment alleged Pulliam “interfered” with Deputy Rollins’ attempt to “set up a perimeter” — something Pulliam somehow managed to do from 130-170 feet away from the scene and while the deputy allowed other bystanders to remain inside the so-called “perimeter.”
So, there’s a whole bunch of obvious rights violations. Pulliam was engaged in protected First Amendment activity when he was arrested. The sheriff’s office may claim he was somehow “interfering” with the mental health call, but it’s undeniable he was singled out specifically because he was filming the officers. That ties in with the retaliation claims, with Pulliam being singled out for filming and (possibly) because he expressed his opinion that he believed deputies would harm the man they were supposed to be helping. That others were allowed to remain inside the “perimeter” established by the deputy brings with it selective enforcement claims under the 14th Amendment.
As was noted at the beginning of the post, the right to record is firmly established in this circuit. That alone should be enough to deny qualified immunity to the deputy. And once QI is denied, a settlement almost always follows. The deputy was on notice arresting someone for filming was unconstitutional, no matter how he chose to frame it after the fact. Hopefully, Pulliam is headed for a quick victory. And hopefully the Sheriff’s office will soon be re-training officers on established rights.
SMT Sling Bag’s rectangular style, measuring 6.3×3.7×13 inches, allows you the large capacity you want to comfortably store all your necessities. Keep your belongings close at hand but safe from thieving hands with SMT’s built-in combination lock and slash-resistant sling strap. Your comfort is assured with a breathable mesh backing and adjustable, padded shoulder strap that can be hooked to either side of the bag. The lightweight but durable oxford fabric is great-looking, water-resistant, and scratch-resistant, with reinforced stitching in high-touch areas. In SMT Sling Bag’s large main compartment, you will find 3 interior pockets, 1 zippered, and the USB port and cord to charge your phone with ease and access. Tucked away on the back is the fourth pocket with a zipper to secure those easy-to-grab items. SMT Sling Bag is a minimalist shoulder bag that is sleek and durable on the outside but spacious and safe on the inside. It’s on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Last week when Elon Musk banned the ElonJet account, then banned a bunch of reporters for talking about it, and then insisted that they had tweeted out his “assassination coordinates,” leading a crazed stalker to jump on a car with his child in it, some were… skeptical. I wasn’t sure it made sense to weigh in on the details of the “stalker” situation without more info, though it never made sense that a stalker would find the jet tweets particularly useful — especially when flying into an airport as massive as LAX.
That said, reporters Taylor Lorenz and Drew Harwell, both excellent tech reporters who were both suspended from Musk’s Twitter over the last few days, have a more complete story of the stalker, including talking to the guy. And it’s pretty damn clear that it has literally nothing to do with the ElonJet account, which did not dox him, nor help the crazed guy.
A confrontation between a member of Elon Musk’s security team and an alleged stalker that Musk blamed on a Twitter account that tracked his jet took place at a gas station 26 miles from Los Angeles International Airport and 23 hours after the @ElonJet account had last located the jet’s whereabouts.
The guy in question definitely seems troubled. But there’s little indication he was actually a “stalker.” It even sounds entirely possible that he, in a troubled state, just happened into the same gas station as Elon’s security. Lorenz and Harwell tracked down the car’s owner (probably because Musk, uh, doxed the license plate by revealing it on Twitter). The car had been rented out via the car-sharing service Turo, and the owner revealed who had rented it.
The car’s renter, Brandon Collado, confirmed in interviews with The Post that he was the person shown in the video. He also provided The Post with videos he shot of Musk’s security guard that matched the one Musk had posted to Twitter.
In his conversations with The Post, Collado acknowledged he has an interest in Musk and the mother of two of Musk’s children, the musician known as Grimes, whose real name is Claire Elise Boucher. Boucher lives in a house near the gas station.
In his communications with The Post, Collado, who said he was a driver for Uber Eats, also made several bizarre and unsupported claims, including that he believed Boucher was sending him coded messages through her Instagram posts; that Musk was monitoring his real-time location; and that Musk could control Uber Eats to block him from receiving delivery orders. He said he was in Boucher’s neighborhood to work for Uber Eats.
I’m not sure I’d take the “interest” in Musk and Grimes as particularly confirmed, given that he was making “several bizarre and unsupported claims” during the conversation. It really does seem like he’s just a guy who needs help who happened into the parking lot of the gas station at a coincidental time, not because of any stalking.
The incident took place at the gas station on Tuesday, Dec. 13, approximately 15 minutes before the station closed, according to its manager, Daniel Santiago, who was working that night. Santiago said he was surprised when the car Collado was driving pulled into the Arco station and into the space next to Santiago’s car, which is not a normal location for a customer to park.
He said the incident was caught on the gas station’s security camera and that footage had been turned over to the South Pasadena police on Thursday.
According to the video of the incident that Musk posted, the member of Musk’s security team confronted Collado sitting in the car wearing gloves and a hood. “Yeah, pretty sure. Got you,” the Musk security team member can be heard saying on the video.
Perhaps he is a stalker, but either way, there’s basically zero evidence to suggest any of this has to do with the ElonJet account, or “assassination coordinates.”
And, of course, when Lorenz tweeted at Musk to see if he’d answer the email questions she and Harwell had sent him, Elon’s response was… to ban her from Twitter.
Musk later reversed the ban, though Drew remains suspended unless he removes a tweet that does not dox him or even point to a live ElonJet account (and which actually cites me).
From the Internet of very broken things to telecom networks, the state of U.S. privacy and user security is arguably pathetic. It’s 2022 and we still don’t have even a basic privacy law for the Internet era, in large part because over-collection of data is too profitable to a wide swath of industries, which, in turn, lobby Congress to do either nothing, or the wrong thing.
Sensitive medical data, supposedly held to a higher standard, isn’t much of an exception. The Markup and STAT this week had an interesting joint report showcasing how many telehealth startups routinely play fast and loose with consumer data. Numerous telehealth websites were found to share sensitive data with ad networks, including which new medications you are taking and what issues you are having:
On 13 of the 50 websites, we documented at least one tracker—from Meta, Google, TikTok, Bing, Snap, Twitter, LinkedIn, or Pinterest—that collected patients’ answers to medical intake questions. Trackers on 25 sites, including those run by industry leaders Hims & Hers, Ro, and Thirty Madison, told at least one big tech platform that the user had added an item like a prescription medication to their cart, or checked out with a subscription for a treatment plan.
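The mechanism here is mundane: a third-party tracking pixel embedded in the page fires a request back to the ad network whenever the site reports an event, with the details stuffed into URL parameters. The sketch below illustrates the general pattern; the endpoint, event name, and parameter names are all hypothetical, invented for illustration rather than taken from any specific tracker.

```python
from urllib.parse import urlencode

def build_tracker_url(endpoint, event, params):
    """Build the kind of GET request a tracking pixel typically fires
    when a page reports an event to an ad network."""
    query = urlencode({"ev": event, **params})
    return f"{endpoint}?{query}"

# A telehealth page reporting that a visitor added a prescription to
# their cart would leak the product name to the ad network as a plain
# URL parameter (endpoint and field names here are hypothetical):
url = build_tracker_url(
    "https://tracker.example/collect",
    "AddToCart",
    {"content_name": "finasteride 1mg", "value": "25.00"},
)
print(url)
```

The point is that nothing about this requires the ad network to “hack” anything: the site itself hands over the sensitive detail as part of ordinary analytics plumbing.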
Once this data makes its way into advertising networks, it inevitably gets collated into “anonymized” profiles of individuals that research routinely shows aren’t actually that anonymous. All it takes is a few additional snippets of data found elsewhere (often available courtesy of a parade of breaches, hacks, or leaks) before individual users can be identified.
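This kind of re-identification is often called a “linkage attack”: an “anonymized” record with the name stripped out can be matched against an outside dataset by joining on a few quasi-identifiers like ZIP code, birth date, and sex. A minimal sketch, with all records invented for illustration:

```python
# "Anonymized" rows, e.g. from an ad profile or a leak -- names stripped,
# but quasi-identifiers (zip, dob, sex) intact. All data here is made up.
anonymized = [
    {"zip": "90210", "dob": "1984-03-05", "sex": "F", "condition": "anxiety"},
    {"zip": "91030", "dob": "1990-11-12", "sex": "M", "condition": "hair loss"},
]

# An outside dataset with names attached, e.g. voter rolls or a
# breached customer list. Also invented.
public = [
    {"name": "J. Doe", "zip": "91030", "dob": "1990-11-12", "sex": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "dob", "sex")):
    """Join the two datasets on the quasi-identifier columns."""
    matches = []
    for a in anon_rows:
        for p in public_rows:
            if all(a[k] == p[k] for k in keys):
                matches.append({**p, **a})
    return matches

for m in reidentify(anonymized, public):
    print(m["name"], "->", m["condition"])  # -> J. Doe -> hair loss
```

With realistic datasets the join is noisier than this toy version, but the principle is the same: the more snippets an attacker can gather from other sources, the fewer profiles each combination of quasi-identifiers can plausibly belong to.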
A recent Mozilla report also found that most mental health and prayer apps similarly have pathetic privacy and security standards. And numerous reports have pointed out how the “new and improved” privacy standards, heavily hyped by tech giants like Apple, are often performative.
As The Markup report makes clear, existing privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) were not built for telehealth, so much of this sloppy handling of consumer data falls through the cracks. Most consumers, meanwhile, operate from the false belief that this data is far more protected than it actually is:
“Individually, we have a sense that this information should be protected,” said [Andrew] Mahler, who is now vice president of privacy and compliance at Cynergistek, a health care risk auditing company. “But then from a legal and a regulatory perspective, you have organizations saying … technically, we don’t have to.”
U.S. regulators occasionally crack down on bad behavior in this sector, such as when the FTC sued data broker Kochava last July, stating the company wasn’t adequately protecting data on whether consumers had visited a reproductive health clinic or addiction recovery center. But even post-Roe, with the over-collection of location data taking on life-or-death stakes, the FTC routinely lacks the staff or finances to take such action with any real consistency in a market full of bad actors.
And it lacks the staff and resources because it’s become zealous dogma, particularly on the right, to lobotomize all meaningful US regulatory oversight (whether it’s privacy or anything else), then put on dumb, hollow performances any time a company abuses the cavalier private data environment they created through their greed and apathy (see: the myopic fixation on TikTok and only TikTok).
Inevitably there will be a medical privacy data scandal so massive it will force the culture to truly own the fact that it has prioritized money over consumer/market health, privacy, and safety for decades. But even then, it’s a steep uphill climb to get a comically corrupt Congress to craft even the most modest of guardrails.