Oh Elon. As we’ve discussed, Elon is infatuated with Community Notes as a sort of crowdsourced alternative to actually funding a trust & safety staff and tooling. And while we actually like Community Notes and think more social media should use similar tools, it’s simply not a full trust & safety replacement.
But, over the past year, we’ve seen that Elon loves to point out when Community Notes supports his priors, and repeatedly claims victory when Community Notes debunks (or even quibbles with) content that Musk doesn’t like. If you look, you can find him cheering on Community Notes time and time again.
Not too long ago, ExTwitter changed the terms of its creator payout system such that creators who regularly get fact-checked via Community Notes will no longer get payouts.
But… how does the man in charge feel about things when he gets fact-checked via Community Notes? Well, it appears that his tune quickly changes. While there have been a few times he’s been Community Noted in the past, and he’ll sometimes brush it off with a “yes, even I’m open to having such notes placed on my account,” when it’s a higher-profile thing he seems to freak out.
Over the weekend, Tucker Carlson started pushing a very misleading story regarding YouTube sensationalist Gonzalo Lira, who made his name as one of those jackass “dating coaches” (i.e., “pickup artists”) and became a pro-Russia propagandist once the invasion of Ukraine began. Carlson’s version of the story pitched Lira as a “journalist” who was “imprisoned in Ukraine” for “criticizing Zelensky.”
Lira was arrested earlier this year for violations of Ukraine’s criminal code. There are many legitimate questions that can be asked regarding the nature of Ukraine’s laws regarding propaganda and free speech. But, the underlying accusations against Lira seem more focused on how he was revealing the identity and location of both Ukrainian soldiers and western journalists covering the war.
Either way, Musk picked up on Carlson’s story, falsely claimed Lira had been imprisoned for 5 years, and demanded answers as to what was happening with him. Community Notes quickly stepped in to point out first that Carlson’s description of Lira’s situation was misleading, and then that Elon’s tweets were also misleading.
After discovering that his own posts were being Community Noted (will he lose access to monetization?), he started claiming that “state actors” were “gaming” Community Notes. And then, hilariously, claimed that this was really a “honey pot” to catch those gaming the system.
The Community Notes folks quickly hit back:
They pointed out that:
Community Notes requires agreement from contributors of differing perspectives, as such is highly resistant to gaming. The entire Community Notes algorithm and data is open source, and can be reviewed by anyone…
Community Notes ftw.
Soon after that, the Community Notes on Elon’s post disappeared. Funny that.
And… soon after that, a different Community Note appeared on Elon’s tweet again pushing back on the idea that Community Notes was easy to game:
So, yes, any such system of crowdsourcing things can be gamed, though ExTwitter’s implementation of Community Notes (a modification of the tool Polis) is done in a way that, at the very least, makes it resistant to such gaming. It’s not impossible to game, but it’s also not easy given the way it’s set up.
But, still, given how often Elon acts like Community Notes is an infallible system that solves most of his trust & safety issues, it’s interesting to note that apparently it’s only “gamed” by “state actors” when it’s calling out his own false tweets. The rest of the time, Community Notes is so accurate that the company can base payment decisions on it. So, when Community Notes supports Elon’s views, it’s a key part of ExTwitter’s platform strategy. When it goes against Elon’s views, it’s being abused by state actors.
When Twitter first launched what it called “Birdwatch,” I was hopeful that it would turn into a useful alternative approach to helping with trust & safety/content moderation questions, but I noted that there were many open questions, in particular with how it would deal with malicious actors seeking to game the system. When Elon took over Twitter, he really seemed to embrace Birdwatch, though he changed the name to the pointlessly boring “Community Notes.”
I still think the concept is a good one, and think it’s one of Elon’s few good moves. I think other social media sites should experiment with some similar ideas as well.
The problem, though, is that Elon seems to think that Community Notes is an effective replacement for a comprehensive trust & safety program. At the heart of so many of Elon’s decisions, including firing the vast majority of the company’s trust & safety staff, was the belief that “Community Notes can handle it.”
As we’re in the midst of a series of major crises around the globe, where the flow of information has proven incredibly important, one thing we’re clearly learning is that Community Notes is not up to the task. Just to drive this point home, over the weekend Elon himself posted some fucking nonsense (as he’s prone to do) and many hours later Community Notes pointed out it was hogwash. Elon, as he’s done in the past when he’s been “Noted,” claimed he was happy it happened to himself… before claiming that his post was “obviously a joke meme” and that “there is more than a grain of truth to it.”
So, first of all, there isn’t “more than a grain of truth to it.” The whole thing is simply false. But, more importantly, a look at the top replies to his “obviously a joke meme” suggests that Elon’s biggest fans did not, even remotely, think that this was “obviously a joke meme,” but rather took it entirely seriously, cheering him on for “telling the truth.” Here’s just one of the top replies to his original tweet:
Also, it took quite some time for the note to appear on Elon’s account. And, look, content moderation at scale is impossible to do well and all that, but Community Notes seems like the exact wrong approach in situations like this one. Especially at a time when the accounts pushing out the most viewed news these days seem to be run by a combination of grifters and idiots:
Online we have seen many users of X describe their experience of this crisis as different. Some of that may result from the more ambiguous nature of the larger conflict, especially as the news cycle moves from the unambiguous horror of the initial attack to concerns about Israel’s response. However, our investigation here suggests an additional factor: in Musk’s short tenure as owner of the platform, a new set of news elites has emerged. These elites post frequently, many sharing unvetted content and emotionally charged media. While sharing no single political ideology, many embrace a similar culture of rapid production of unlinked or ambiguously sourced content, embracing a “firehose of media” ethos that places the onus of verification on the end-user. This occurs in an environment that has been shorn of many of the “credibility signals” that served to ground users in the past — checkmarks that indicated notability, fact-checks distributed through Twitter Trends, and Twitter/X-based labeling of deceptive content. Even fundamental affordances of the web — such as simple sourcing through links — have been devalued by the platform, and, perhaps as a result, by the new elites that now direct its users’ attention.
Leaving aside the significant concern of taking away professional, trained trust & safety employees, and replacing them with random (often hand-picked) untrained volunteers, there are serious concerns coming to light about how Community Notes actually works in practice.
Multiple reports have come out lately highlighting the limitations of Community Notes on important breaking news in the midst of various conflicts around the world, where you have malicious actors seeking to deliberately spread misinformation. A report at Wired found that Community Notes is actually making some of the problems worse, rather than better.
On Saturday, the company wrote on its own platform that “notes across the platform are now being seen tens of millions of times per day, generating north of 85 million impressions in the last week.” It added that thousands of new contributors had been enrolled in the system. However, a WIRED investigation found that Community Notes appears to be not functioning as designed, may be vulnerable to coordinated manipulation by outside groups, and lacks transparency about how notes are approved. Sources also claim that it is filled with in-fighting and disinformation, and there appears to be no real oversight from the company itself.
“I understand why they do it, but it doesn’t do anything like what they say it does,” one Community Notes contributor tells WIRED. “It is prone to manipulation, and it is far too slow and cumbersome. It serves no purpose as far as I can see. I think it’s probably making the disinformation worse, to be honest.”
The report isn’t just based on anecdotes from random Community Notes users; it also looks more closely at how the program works and how easily it can be gamed. Wired found that it wasn’t difficult for one person to set up multiple accounts that all had access to Community Notes, meaning a small group of users controlling multiple accounts could manufacture support for a position.
It also points to earlier (pre-Elon) research showing that (then) Birdwatch wasn’t used nearly as much for standard fact-checking as it was in political debates, by users who politically disagreed with someone who had tweeted.
Back during the summer, the Poynter Institute had a good analysis of the limitations of Community Notes for dealing with real-time misinformation campaigns during crises. Specifically, the current design of Community Notes has some, well, questionable assumptions built in. Apparently, it looks over your tweeting history and assigns you to either a “left” or a “right” camp, and then only allows a Community Note to go public if enough of the “left” people and the “right” people agree on a note.
“It has to have ideological consensus,” he said. “That means people on the left and people on the right have to agree that that note must be appended to that tweet.”
Essentially, it requires a “cross-ideological agreement on truth,” and in an increasingly partisan environment, achieving that consensus is almost impossible, he said.
Another complicating factor is the fact that a Twitter algorithm is looking at a user’s past behavior to determine their political leanings, Mahadevan said. Twitter waits until a similar number of people on the political right and left have agreed to attach a public Community Note to a tweet.
While that may work on issues where there isn’t any kind of culture war, it’s completely useless for culture war issues, where plenty of disinformation flows. Indeed, the Poynter report notes that a huge percentage of the highest rated Community Notes inside the Community Notes system are never seen by the public because they don’t have “cross-ideological agreement.”
The problem is that regular Twitter users might never see that note. Sixty percent of the most-rated notes are not public, meaning the Community Notes on “the tweets that most need a Community Note” aren’t public, Mahadevan said.
The setup requiring “cross-ideological” consensus seems almost perfectly designed to make sure that the absolute worst nonsense will never have Community Notes shown publicly.
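To make that mechanism concrete, here’s a minimal toy sketch of cross-cluster consensus scoring. All the names and thresholds here are hypothetical (the actual open-source Community Notes algorithm scores notes with matrix factorization over rating data and is far more involved), but the toy version illustrates both properties described above: a single-faction brigade can’t force a note through, while a partisan split keeps a note invisible no matter how many ratings it gets.

```python
from collections import defaultdict

def note_status(ratings, min_per_cluster=2, threshold=0.7):
    """Toy bridging-based scorer: a note only goes public when raters
    from *each* viewpoint cluster independently find it helpful.
    `ratings` is a list of (cluster, helpful: bool) tuples."""
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    # Require enough raters, and enough agreement, in every cluster seen
    for votes in by_cluster.values():
        if len(votes) < min_per_cluster:
            return "NEEDS MORE RATINGS"
        if sum(votes) / len(votes) < threshold:
            return "NOT HELPFUL ENOUGH"
    # A note backed by only one cluster never publishes
    return "HELPFUL" if len(by_cluster) >= 2 else "NEEDS MORE RATINGS"

# A brigade from one cluster can't force a note through...
assert note_status([("left", True)] * 50) == "NEEDS MORE RATINGS"
# ...but modest cross-cluster agreement publishes it...
assert note_status([("left", True), ("left", True),
                    ("right", True), ("right", True)]) == "HELPFUL"
# ...and on culture-war topics, where one side rates everything
# unhelpful, even a well-sourced note stays invisible.
assert note_status([("left", True)] * 10
                   + [("right", False)] * 10) == "NOT HELPFUL ENOUGH"
```

The third case is exactly the blind spot Poynter describes: the same rule that makes the system resistant to gaming also guarantees silence on the most polarized claims.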
Meanwhile, a report from NBC News also highlights how even when Community Notes is able to help debunk false information, it often comes way too late.
NBC News focused on two prominent pieces of Israel-Hamas misinformation that have already been debunked: a fake White House news release that was posted to X claiming the Biden administration had granted Israel $8 billion in emergency aid and false reports that St. Porphyrius Orthodox Church in Gaza was destroyed.
Only 8% of 120 posts related to those stories had published community notes, while 26% had unpublished notes from volunteers that had yet to be approved. About two-thirds of the top posts NBC News reviewed had no proposed or published Community Notes on them.
The findings echo what a Community Notes volunteer said was X’s lack of response to efforts to debunk misleading posts.
“All weekend we were furiously vetting, writing, and approving Community Notes on hundreds of posts which were demonstrably fake news,” Kim Picazio, a Community Notes volunteer, wrote on Instagram’s Threads. “It took 2+ days for the backroom to press whatever button to finally make all our warnings publicly viewable. By that time… You know the rest of that sentence.”
And when the Community Notes don’t show up until much later, a ton of nonsense can spread:
A post about the debunked White House news release published by a verified account had nearly 500,000 views and no proposed or appended note Tuesday afternoon. The Community Notes system also showed that a user tried to submit a fact-check Sunday on another post including the same known misinformation but that it had yet to be approved, saying, “Needs more ratings.” The post had accrued 80,000 views since Sunday.
In a search for St. Porphyrius Orthodox Church in Gaza, only five Community Notes had been applied to the top 42 posts echoing the debunked misinformation. Several posts from verified users with no notes repeated the claim and got over 100,000 views, while 13 Community Notes had been proposed on posts of the debunked claims but had not yet been approved for publishing.
During the first 5 days of the conflict, just 438 Community Notes (attached to 309 posts from 223 unique accounts) earned a “HELPFUL” rating and ended up being displayed publicly to users. Although it’s impossible to know what percentage of content about the war this represents, the fact that trending topics related to the conflict have routinely involved hundreds of thousands or even millions of posts suggests that a few hundred posts is just a drop in the bucket. The visible notes were generally attached to popular posts — the 309 posts in question earned a combined total of 2,147,081 likes, an average of 6,948 likes per post. The majority of the posts that earned Community Notes (222 of 309 posts, 71.8%) came from paid X Premium/Twitter Blue subscribers, and the majority of the accounts posting them (147 of 223, 65.9%) are X Premium subscribers, who are potentially earning a share of X’s ad revenue based on the number of times their posts are seen and who therefore have a financial motive to never delete misleading content. (Overall, roughly 7% of posts that received Community Notes were deleted during the period studied, but there’s no reliable way of knowing how many of these posts were related to the Israel/Hamas war.)
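As a quick sanity check on those figures (taking the study’s reported totals at face value), the averages and percentages work out as stated:

```python
# Totals reported in the study
total_likes, noted_posts = 2_147_081, 309
premium_posts = 222            # noted posts from X Premium subscribers
premium_accounts, accounts = 147, 223

print(round(total_likes / noted_posts))             # 6948 likes per post
print(round(100 * premium_posts / noted_posts, 1))  # 71.8 (% of posts)
print(round(100 * premium_accounts / accounts, 1))  # 65.9 (% of accounts)
```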
Again, I really like the concept of Community Notes. I think it’s a very useful tool — and one example (of many) of trust & safety tools beyond simply “taking down” content. But it needs to be part of a wider strategy, not the only strategy. And the program can’t be set up with such a huge blind spot for culture war issues.
But, that’s exactly how things currently work, and it’s a shame, in part because I fear it’s going to discourage others from creating their own versions of Community Notes.
Who doesn’t love the wisdom of the crowds? Hey, it’s a great thing if you’re seeking comment from the oft-disrespected “stakeholders” known as the people who pay your salaries. Comment periods for proposed regulation ensure a healthy mix of intelligent commentary and unhinged partisanship. You know, like pretty much any congressional hearing.
On the other hand, opening up your thing for public comment via the internet tends to ensure those with an agenda will try to take control of the thing. Ask pretty much any corporate entity that saw millions of internet users and still decided the best way to take the pulse of the connected was to trot out a perfunctory CAPTCHA and crowdsurf their way into increased profitability.
Behold the wreckage: The PepsiCo crowdsourcing that suggested the next Mountain Dew product should align itself with one of dozens of porn fetishes (“Gushing Granny”) or things possibly even less tasteful (“MTN JEW,” “Methamphetagreen”). Maybe the brains in the Missouri capital thought the same internet that voted to send Taylor Swift to perform a concert at a school for the deaf and rapper Pitbull to a remote Alaskan Walmart to plug his latest album would take this complaint box for bigots seriously and not turn its sanctimonious suggestion box into another toy for trolls.
I’m sure they’ve learned something from this. Unfortunately, the lesson learned won’t be “acceptance” or “don’t write laws specifically to make people you don’t like miserable.” What they will learn is that you just can’t leave a complaint box open on the internet, as Morgan Sung reports for TechCrunch:
A Missouri government tip site for submitting complaints and concerns about gender-affirming care is down after people flooded it with fanfiction, rambling anecdotes and the “Bee Movie” script.
The Missouri Attorney General’s office launched an online form for “Transgender Center Concerns” in late March, inviting those who’ve witnessed “troubling practices” at clinics that provide gender-affirming care to submit tips. The site didn’t ask users to name patients or healthcare providers, but encouraged users to complete the form “in as much detail as possible.”
But after days of TikTok and Twitter users spamming the site with gibberish, the tip line has been removed from the Missouri government site entirely. Instead of the online form, the link to the tip line now says that the page no longer exists.
This is a far, far better thing than electing the creator of 4chan (and then, a bit later, North Korean dictator Kim Jong-Un) “Man of the Year” in a Time poll. This is people acting in concert to prevent assholes from pushing through the sort of garbage the Missouri legislature clearly wants to see: bigoted fanfic detailing how a trans person somehow upended their lives or the lives of their loved ones. This is the kind of filler a shitbox like this deserves.
Why “Bee Movie?” It’s been a bountiful source for memes for pretty much its entire existence. And, as such, its existence as a powerful meme cannot be entirely explained. Attempting to wrap your minds around the contours of the unexpected internet-wide embrace of a digitally animated film starring ultra-smug comedian Jerry Seinfeld is like trying to explain why some pinball machines are great, while others just kind of suck. This blend of subjective and objective cannot be accurately described. It is because it is. It works or it doesn’t.
This one does work. The “Bee Movie” script is a well-known word bomb that trolls (well-meaning or otherwise) deploy to clog up the “please fill in the box” machinations of people who don’t understand how the internet actually works. This malicious attempt to cater to the worst residents of Missouri has been derailed by a script that suggests, without irony, it’s possible for a bee to sustain a romantic relationship with a human being.
However impossible it may be to explain why the “Bee Movie” script was the preferred text bomb lobbed into Missouri’s gaping ass(hat) hole, it’s still more explicable than the response from the state’s top prosecutor:
Madeline Sieren, press secretary for Missouri Attorney General Andrew Bailey, blamed “far left activists” for breaking the site. She said the tip line is down temporarily.
“Rather than standing on their supposed science to back up their facts, they’re resorting to trying to hack our system to silence victims of the exact network we’re attempting to expose,” Sieren told TechCrunch in an email.
LOL.
The “far left” didn’t do this. Everyone who isn’t as hateful as you and your supposedly-hetero government bedfellows did this. People who understand the nastiness of the effort did this. The far left may have been involved, but there are plenty of people close to the political center who likely felt compelled to treat a state-designed garbage receptacle as, well, a receptacle for garbage.
Furthermore, you’re also an idiot for claiming this was people “trying to hack” the system. You (the state you’re speaking for) created a form inviting people to respond. People responded. Just because they weren’t the people you wanted to respond doesn’t mean this was a “hacking” attempt meant to “silence” whoever the fuck you wanted this “I don’t like non-binary sexuality” complaint form to appeal to.
Face the facts, Missouri. You were trolled. And you should have seen it coming. That you didn’t clearly demonstrates the shortsightedness of this anti-trans hate, as well as the pinhole pig eyes of the legislators who are allowed to foster hatred and blow tax dollars on incredibly stupid snitch lines they think might give them the political ammo to codify bigotry.
There are plenty of authoritarian, ultra-religious countries willing to applaud garbage views like this, Madeline Sieren and everyone you speak for. If you don’t like your shittiness being mocked and disrupted here in the United States, you’re always welcome to leave.
Nearly two years ago, we discussed a fascinating project spearheaded by one dedicated person, going by the moniker Peebs, to digitize every video game manual’s English version for the Super Nintendo system. For those of you not of a certain age, video games used to come in the form of cartridges that you would load into the console. Those cartridges came packaged with game manuals that did everything from telling you how to play the game to providing game lore and backstory. Again, if you were born at the right time like yours truly, reading the manual upon buying the game, sometimes in the back of your Mom’s Plymouth Voyager minivan on the way back from Toys ‘R Us, was part of the excitement.
But, for a variety of reasons, including the timespan since these games came out, the physical nature of the manuals, and Nintendo’s neglect when it comes to preservation, these bits of gaming culture were under threat of historical erasure. Where Peebs and his volunteers came in was in spending nearly a decade collecting and digitizing those manuals without collecting a dime in payment, donations, or ad revenue. Two years ago, the project had about 100 manuals to go to be complete.
Whether or not you care about gaming culture such as this, the undertaking is undeniably impressive. And, frankly, fueled by an oft-vilified internet. Peebs made the archive available publicly when he had managed to collect roughly half the English game manuals out there. From there, a community sprang up around the site, with enthusiasts putting the word out all over the internet that someone was archiving these manuals for preservation. The internet did its thing, Peebs and his volunteers got the remaining manuals, and here we are.
Which leaves us with two questions while we celebrate this achievement. First, thus far Nintendo has been entirely hands off with Peebs’ work. That’s despite plenty of publications from the gaming industry taking notice of the work. That seems to suggest that maybe Nintendo either knows that this is covered by fair use or tacitly supports the project… but it’s Nintendo, so you can never be sure. It should shock exactly nobody if the company suddenly swoops in and tries to shut the site down.
And, second, why the hell is it up to a bunch of fans and volunteers to do the work of archivists? Surely Nintendo could have marshalled more resources and gotten this all done more quickly, no? So why didn’t it? Why does it seem that the only people doing the work of preserving the art of video games are the fans?
A few weeks ago Twitter announced Birdwatch as a new experimental approach to dealing with disinformation on its platform. Obviously, disinformation is a huge challenge online, and one that doesn’t have any easy answers. Too many people seem to think that you can just “ban disinformation” without recognizing that everyone has a different definition of what is, and what is not disinformation. It’s easy to claim that you would know, but it’s much harder to put in place rules that can be applied consistently by a large team of people, dealing with hundreds of millions of pieces of content every day.
Facebook has tried things like partnering with fact checkers, but most companies just put in place their own rules and try to stick with it. Birdwatch, on the other hand, is an attempt to use the community to help. In some ways it’s taking a page from (1) what Twitter does best (enabling lots of people to weigh in on any particular subject), and (2) Wikipedia, which has always had a community-as-moderators setup.
Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.
In this first phase of the pilot, notes will only be visible on a separate Birdwatch site. On this site, pilot participants can also rate the helpfulness of notes added by other contributors. These notes are being intentionally kept separate from Twitter for now, while we build Birdwatch and gain confidence that it produces context people find helpful and appropriate. Additionally, notes will not have an effect on the way people see Tweets or our system recommendations.
Will this work? There are many, many reasons why it might not. Wikipedia itself has spent years dealing with these kinds of questions, and had to build a kind of shared culture and informal and formal rules about what kind of content belongs on the site. It’s a lot harder to retrofit that kind of thinking back onto a platform like Twitter where pretty much anything goes. There is, also, of course, the risk of brigading and mobs — whereby a crew of people might attack a certain tweet or type of information with the goal of getting accurate information declared “fake news” or something along those lines.
Twitter, I’m sure, recognizes these challenges. The details of how Birdwatch is set up certainly suggest that it’s going to watch and iterate as it goes, and the company recognizes that if it can get this right, it could be quite useful. That’s why, even if there’s a high risk of failure, I still think it’s an interesting and worthwhile experiment.
Some of the initial results, however… don’t look great. A bunch of clueless Trumpists have been trying to minimize the traumatic experience that Alexandria Ocasio-Cortez recently described as her experience during the insurrection at the Capitol on January 6th. Because these foolish people don’t understand that the Capitol complex is a set of interconnected buildings, they are arguing that AOC was “lying” when she talked about the fear she felt while initially hiding in her office during the raid — since her office is in the connected Cannon Building, and not in the domed part of the Capitol complex. It turned out that some of the fear came from a Capitol police officer yelling “where is she?” and barging into the office. AOC, not realizing it was a Capitol police officer, recently spoke movingly about how afraid she was that it was an insurrectionist.
Since they started trying to make this argument on social media, AOC responded, pointing out that the entire Capitol complex was under attack (and even if it wasn’t, the fact that you’re in a building across the street from a riotous mob that clearly wouldn’t mind killing you is a perfectly good reason to be afraid). She also mentioned the two pipe bombs that were found near the Capitol, not far from the Congressional office buildings.
Of course, this just shows exactly the problem of trying to deal with “disinformation.” It is often used as a weapon against people you disagree with, where you might nitpick or argue technicalities, rather than the actual point.
I am hopeful that this experiment gets better at handling these situations, but I recognize the huge difficulty in doing this with any sort of consistency at scale, when you’re always going to be dealing with disingenuous and dishonest actors trying to game the system to their own advantage.
A recent episode of NPR’s Fresh Air ran an amazing interview with Dr. David Fajgenbaum, who was diagnosed years ago with the rare Castleman’s Disease, about which very little information was known (and the general prognosis was grim). Fajgenbaum talks about how he ended up in hospitals believing that he was about to die five separate times (he even had his last rites read to him), but then set up his own organization to try to crowdsource a cure. He details the full story in his book that was published last fall, called Chasing My Cure.
The good news is that, through that crowdsourcing effort, called the Castleman Disease Collaborative Network (CDCN), they at least found a treatment that (for now…) appears to work for Fajgenbaum himself:
The biggest difference between this fifth time I nearly died and the previous four times is that, at this stage, I was engaged. And I had the ultimate date in mind, which is our wedding date, May 24, 2014 – in mind as the driver to say, I need to find something. I failed to respond to all these drugs. There’s nothing left for me. But I have to make it to May 24, 2014. And so thankfully, this combination of seven chemotherapies saved my life.
And when I got out of the hospital, I was able to go back to all those samples I’d been storing on myself and performed a series of experiments where, from within my experiments, I found this pattern that suggested this one communication line in the immune system called the mTOR pathway was highly activated. And what was so exciting about finding this communication line turned on is that there is a drug that was developed 30 years ago that’s really good at turning it off. It’s called sirolimus.
And just knowing that this pathway was on did not guarantee that blocking it would work and that taking this drug would save my life. In fact, the immune system is a very finicky system. And basically, turning off this communication line could have actually caused even more problems. No one knew because this drug had never been given to a Castleman disease patient before.
But really, knowing that I needed to try something if I wanted to make it to our wedding date, I decided to take the leap of faith and to start taking this drug as the first patient with my disease ever to take sirolimus back in early 2014. And amazingly, thankfully, I was able to make it to Caitlin and I’s wedding date. And you wouldn’t think this is too important, Dave, but my hair grew back just in time.
He admits that the treatment that works for him has not been shown to work for everyone with Castleman’s — in fact, it appears to help only about 1/3 of those treated with it. But just the fact that it’s been helping some is worth noting.
And here’s the really interesting part: as we’ve gone into this whole pandemic thing, many of the participants in the CDCN have noticed some similarities between the issues with Castleman’s disease, and with what people are reporting about COVID-19. So they’ve been repurposing the crowdsourcing effort to work on COVID-19:
DAVIES: So the collaborative that you formed to try and share information and leads about treating Castleman is now focused on COVID-19. I mean, this is obviously an urgent public health matter. Did you see similarities between Castleman disease and COVID-19 that made this a good fit?
FAJGENBAUM: That’s right. So early on in this pandemic, it became clear that the most deadly aspect of COVID-19 is actually the cytokine storm that the virus ignites. And the cytokine storm that it ignites is almost identical.
While there are lots of different groups working on different ideas — from vaccines to antibodies — the CDCN is focused on what it does best: looking to see if there are FDA approved drugs out there that might have some useful effect here, and recognizing that the only way to really figure that out was to actually get the data (something very few others seemed set up to do):
And so with this similarity between – at the very basic mechanism, what drives the deadliness of COVID-19 is almost identical to what makes Castleman disease so deadly, it’s these – the cytokine storm. That was one aspect of it. The second is that we know that drug repurposing is our best shot at identifying a drug that can help patients in the short-term, so a drug that’s either already FDA-approved or a drug that is maybe experimental but is not yet approved for anything that could be repurposed for COVID-19. We knew that was our best shot.
And, Dave, I found myself, in early March, thinking to myself, I really hope that some research group out there that has experience studying cytokine storms and has experience doing drug repurposing will follow our blueprint and search for drugs that can be repurposed against this cytokine storm. And I was sitting there hoping that someone would do it.
And then I realized that I needed to listen to my own advice, and that if I’m going to hope that some research lab out there that has experience with cytokine storms and repurposing would turn their effort towards this, then I would need to turn my effort towards this. This is what we’ve been doing to chase my cure for these years. And we felt like we needed to do what we could in the fight against COVID-19.
The really incredible part here is that he notes that there’s no official tracking of the various treatments that doctors are trying, and that’s a key aspect of what they’ve set up for doctors around the world:
I mean, you can basically think about the state that we’re in right now is that doctors are trying all kinds of things – hydroxychloroquine, remdesivir and many other drugs. Yet there’s no system in place to track what’s working and what’s not working. And so recognizing that this wasn’t being done, we decided to build a database, what we called the CORONA database – COVID-19 Registry of Off-label & New Agents. So it’s a database to track all of the drugs that have been used against COVID-19 to date ’cause we want to know everything that’s been tried, and we want to see what’s working and what’s not working. And amazingly, almost 150 different drugs have already been tried against COVID-19. And of course, we hear about a handful of them, but there are a lot of others that have already been tried as well. And so we’ve created this giant database from – right now it’s over 11,000 patients and growing – to collect data on every drug that’s been used and so that we can really dig into what’s working and what’s not working.
And the second part of this equation is that you want to track what’s being used, but then you also want to piece together all of the data that’s emerging from labs around the world to try to map out what are maybe some new drugs that we could start trying to use? What are some new pathways from all of this data that we should start going after? And interestingly, from the state of – we’re finding signals that are Castleman-like, basically. A number of the features that we’re seeing in the COVID-19 data, these same features we see in Castleman disease.
Think about that first part for a second. In the past, if you wanted to have a database of how certain drugs were used to treat different diseases, and what the impacts of those treatments were, you’d probably need a government to set up a program — with lots of bureaucracy and mess. But here, a doctor and some other interested researchers were able to set up their own such database on the fly and get a massive amount of data piped into it, from which they can do all sorts of (hopefully!) useful analysis.
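To make the registry idea concrete, here’s a minimal Python sketch of the kind of tally such a database enables: aggregate crowdsourced patient reports and count, per drug, how often it was tried and how often the patient improved. All names and fields here are invented for illustration; the real CORONA database is of course far richer and more carefully controlled.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    """One crowdsourced report: the drugs tried and whether the patient improved."""
    patient_id: str
    drugs: list[str] = field(default_factory=list)
    improved: bool = False


def summarize(records: list[PatientRecord]) -> dict[str, tuple[int, int]]:
    """For each drug, return (times tried, times the patient improved)."""
    tried: Counter = Counter()
    improved: Counter = Counter()
    for rec in records:
        for drug in rec.drugs:
            tried[drug] += 1
            if rec.improved:
                improved[drug] += 1
    return {drug: (tried[drug], improved[drug]) for drug in tried}


# A few toy reports (entirely made up, not real outcome data):
records = [
    PatientRecord("p1", ["hydroxychloroquine"], improved=False),
    PatientRecord("p2", ["remdesivir"], improved=True),
    PatientRecord("p3", ["remdesivir", "hydroxychloroquine"], improved=True),
]
print(summarize(records))
# {'hydroxychloroquine': (2, 1), 'remdesivir': (2, 2)}
```

Even a toy version like this shows the point of the exercise: once reports flow into one shared structure, “what’s being tried and what looks promising” becomes a query rather than guesswork.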
This is not to say they’re ignoring other approaches — because these things work together. In the interview, it’s mentioned that the crowdsourcing team at CDCN has combed through over 2,500 published papers to look for potential promising treatments.
Also important: they’re being very open about all of this. While some keep insisting that we need to lock up successful treatments and ideas, Fajgenbaum recognizes the power of sharing information widely (the very root of crowdsourcing, after all):
You know, what we really want to do with this corona project is to map out everything that’s being tried, to put in one place all of the studies that are being published, all of the data on every drug that’s being tried so that other people can go to it and they can kind of decide for themselves what looks promising and what doesn’t. We didn’t build this to say this is the drug and that’s not the drug; we built this to say this is where all the data is. If anyone wants to use the data, we have this very data-first approach. Anyone can use the data.
And from our perspective, we want to use the data to determine and to prioritize what drugs should go on to clinical trials. So the fact is, is that this drug is already being studied in randomized controlled trials, and that’s all that we can really ask for. We want to use the database to say what’s being given, what looks really promising and what should go forward to randomized controlled trial. We don’t want to use the database to say this drug should be given or that drugs should not be given. So we’re hopeful.
And actually, we put together a paper based on our first pass of analyses of the data and recently received favorable reviews. So hoping that that’ll get published in peer-reviewed journal shortly and that we will be able to get the word out about this database. But the goal is not to say this is the drug that everyone should be on; the goal is to say these are the promising drugs. Let’s make sure that we don’t forget anything along the way because you’re right – I think that we all have a tendency to jump on every major drug or every major headline. But we need to keep an eye on all the drugs that are being tried and make sure that we’re doing this really systematically.
Once again, the ability for anyone to just set up and build something on the internet, without needing to ask for approval or go through some big bureaucratic process, may prove helpful yet again, and hopefully the very open process of bringing in data, and sharing it outward, will lead to real breakthroughs.
A few weeks back we wrote about how Fortress Investment Group — a massive patent trolling operation funded by Softbank — was using old Theranos patents to shake down BioFire, a company that actually makes medical diagnostics tests, including one for COVID-19. Fortress had scooped up the patents as collateral after it issued a loan to Theranos, which Theranos (a complete scam company, whose founders are still facing fraud charges…) could not repay. Fortress then set up a shell company, Labrador Diagnostics, which did not exist until days before it sued BioFire. After it (and the law firm Irell & Manella) got a ton of bad press for suing BioFire over these patents — including the COVID-19 test — Fortress rushed out a press release promising that it would issue royalty-free licenses for COVID-19 tests. However, it has still refused to reveal the terms of that offer, nor has it shared the letter it sent to BioFire with that offer.
And while some have argued that after issuing this “royalty-free license” offer, the whole thing was now a non-story, that’s not true. It appears that the offer only covers half of the test: the pouches that have the test-specific reagents, but not the test device that is used to analyze the tests. And so while the COVID-19 test pouches may get a “free” license, the machines to test them are still subject to this lawsuit.
In the meantime, tons of people have been asking how Theranos — which appeared never to have a working product, despite publicly claiming it did (and convincing Walgreens that it did) — could possibly have received patents on technology that never actually existed. Tragically, the answer is that our patent system (for reasons that make no sense) does not require a working prototype, which results in all sorts of nonsense getting a patent. That said, the good folks at Unified Patents have launched a crowdsourcing contest for prior art about the two Theranos patents in question.
We kindly ask our crowdsourcing community of thousands of prior art searchers to take a few minutes to help identify prior art on these patents that never should have issued and help rid the world of them, in the process improving the world’s chances of testing for and containing COVID-19 and other dangerous public health concerns.
The contest will expire on April 30, 2020. Please visit PATROLL for more information or to submit an entry for this contest.
If you’re looking to help out and would like a place to start, the good folks at M-CAM, who analyze patents for prior art and obviousness, have published a fairly remarkable analysis of the Theranos patents, which refers to Fortress/Softbank/Labrador as “graverobbers.” The analysis is worth reading, including this take on the first claim in the patent for “a two-way communication system for detecting an analyte in a bodily fluid from a subject…”:
No shit. My tongue is part of a system which detects various “analytes” in food such as salt, sugar, and acids. Don’t tell anyone, but I’m starting to worry that I might be the next target for an infringement lawsuit.
But on a more serious level, the analysis explains why the patents are pretty much exactly as sketchy as you would expect from a company of Theranos’ reputation:
… the claims of the patent they state are being infringed are incredibly mundane and obvious (patents must be non-obvious to be granted). They include gems such as “a) a reader assembly comprising a programmable processor that is operably linked to a communication assembly;” where they point out that BioFire’s machine uses, of all things, an ETHERNET CABLE to export data from its processor. Heathens!
It then notes that M-CAM found at least 416 other patents that appear to be significantly similar to the patents at issue, which makes you wonder why the hell the USPTO approved these patents in the first place…
A couple years back we wrote about the patent trolling operation Blackbird Technologies, which was a law firm that pretended it wasn’t a law firm, and seemed to focus on buying up patents to shake down companies for cash. It had threatened many and sued a few, but definitely picked the wrong target when it decided to go after Cloudflare. Like Newegg before it, the team at Cloudflare decided that even if it was cheaper to settle, doing so would set a bad precedent and would likely lead to more trollish threats landing on its doorstep. So, instead, Cloudflare decided to fight back. And it went a step or two beyond Newegg, which would just fight the trolls in court. Cloudflare decided not just to fight in court, but to seek to destroy Blackbird Technologies entirely. It launched a crowdsourced contest to search out prior art not just on the patent at issue in its own case, but on all Blackbird patents. It also went after the lawyers at Blackbird, filing bar complaints against the company for violating attorney ethics rules (mainly in holding itself out as not a law firm, but then acting as one). There was also the issue of the firm appearing to purchase the bare right to sue, the same issue that brought down copyright trolling operation Righthaven. The issue there is that if you purchase the rights to a patent or a copyright, you have to actually purchase all of the associated rights, not engage in a convoluted arrangement where you nominally buy the rights while the original copyright or patent holder keeps a share of the proceeds of your trolling.
The legal strategy went swimmingly well. Cloudflare got an easy win at the district court, and then a super quick and easy win on appeal at CAFC, the Court of Appeals for the Federal Circuit. Cloudflare was so obviously on the right side of things that the CAFC panel didn’t ask its lawyers a single question (which is very rare), issued a decision mere days after the hearing (incredibly rare) and found Cloudflare’s arguments so correct that it didn’t even explain its decision, but just issued a judgment that said “Affirmed” (even more rare). As we noted at the time, even though it was an “easy” win for Cloudflare, it still involved two years of legal wrangling, involving over 1,500 pages of legal briefings on both sides (900 from Cloudflare alone). That’s expensive, time-consuming and distracting.
Earlier this week, Cloudflare released an update about the rest of its efforts to hit back at Blackbird (now that Blackbird chose not to request the Supreme Court review the CAFC decision). All in all, the effort to clip Blackbird’s wings appears to have been a pretty good success overall, even if the company is still operating. The crowdsourcing (and funding) campaign to find prior art against a bunch of Blackbird patents was definitely a success:
A high-level breakdown of the submissions:
We received 275 total unique submissions from 155 individuals on 49 separate patents, and we received multiple submissions on 26 patents.
40.1% of the total submissions related to the ’335 patent asserted against Cloudflare.
The second highest concentration of prior art submissions (14.9% of total) relate to PUB20140200078 titled “Video Game Including User Determined Location Information.” The vast majority of these submissions note the similarity between the patent’s claims and the Niantic game Ingress.
It certainly appears that Blackbird’s prospects have diminished thanks to this team effort:
In the one-year period immediately preceding Project Jengo (Q2’16–Q2’17), Blackbird filed more than 65 cases. Since Project Jengo launched more than 2.5 years ago, the number of cases Blackbird has filed has fallen to an average rate of 10 per year.
Not only are they filing fewer cases, but Blackbird as an organization seems to be operating with fewer resources than they did at their peak. When we launched Project Jengo in May 2017, the Blackbird website identified a total team of 12: six lawyers (two co-founders and four litigation counsel) as well as a patent analysis group of six. Today, based on a review of the website and LinkedIn, it appears only three staff remain: one co-founder, one litigation counsel, and one member of the patent analysis group.
As for the ethics complaints, the company notes that the proceedings there are confidential, so there’s not much to report, but also notes that they only filed these complaints in two states, Massachusetts and Illinois. At the very least, this should hopefully scare off others from mimicking Blackbird’s sham agreements:
We based our complaints on the assignment agreement we found filed with the USPTO, where Blackbird purchased the ’335 patent from an inventor in October 2016 for $1. It seemed apparent that the actual but undisclosed compensation between the parties was considerably more than $1, so Blackbird may have simply acquired the cause of action or the agreement involved an arrangement where Blackbird would split a portion of any recovered fees with the inventor. Such agreements are generally prohibited by the ethical rules.
In public statements, Blackbird’s defense to these allegations was that it (i) was not a law firm (despite the fact it is led exclusively by lawyers who are actively engaged in the litigation it pursues) and (ii) does not use contingency fee arrangements for the patents it acquires, but does use something “similar.” Both defenses were rather surprising to us. Doesn’t an organization led and staffed exclusively by lawyers who are drafting complaints, filing papers with courts, and arguing before judges amount to a “law firm”? In fact, we found pleadings in other Blackbird cases where the Blackbird leadership asked to be treated as lawyers so they could have access to sensitive technical evidence in those cases that is usually off-limits to anyone but the lawyers. And what does it mean for an agreement to be merely “similar” to a contingency agreement?
The successful campaign against Righthaven seemed to have prevented similar operations forming in the copyright trolling space, and hopefully this effort against Blackbird will do the same in the patent trolling space. At the very least, though, this, again, demonstrates the value of standing up to a patent troll, even if it would be a hell of a lot cheaper and easier to just settle.
It’s not entirely clear what motivations lie behind Barrett Brown’s Kickstarter project, but you have to imagine it’s at least partially an extended middle finger to the DOJ.
Journalist Barrett Brown was tried and convicted on a handful of charges related to the act of journalism. He ended up with a 63-month sentence and an $890,000 restitution order — some of which was tied to this activity.
[A] key part of the initial charges included the fact that Brown had organized an effort to comb through the documents that had been obtained from Stratfor via a hack. The key bit was that Brown had reposted a URL pointing to the documents to share via his “Project PM” — a setup to crowdsource the analysis of the leaked documents. Some of those documents included credit card info, so he was charged with “trafficking” in that information.
Brown made his situation worse by threatening federal agents, but the prosecution originally stemmed from his sharing of Stratfor documents. The link-sharing charge was ultimately dropped, but the DOJ included it in the indictment, trying to turn sharing a URL into trafficking in stolen credit cards.
Pursuance is open source software that provides a better way to organize online. It provides an integrated suite of digital tools, all designed to allow activists, researchers, journalists, artists, coders – anyone with talent and a little time – to collaborate on projects large and small, working within customized, evolvable entities called pursuances. (Think of a pursuance as a mission-oriented project/organization/group that people on the platform can join and contribute to.)
So… crowdsourcing knowledge/skillsets to engage in activism or journalism or whatever. This may include sharing access to leaked documents, much like those Brown was prosecuted for. But this won’t all be out in the open. Steps will be taken to shield collaborators from those opposed to their efforts. Two-factor authentication will be baked in, along with “Tor by default.” On top of that, pursuers[?] are given tools to keep The Man from surveilling their projects.
We’re including a robust permissions system that allows you to invite people at various trust levels. At the minimum trust level, the person you’ve invited can only see and only work on the tasks you’ve assigned them; they can’t see the rest of the task hierarchy, and they can’t see who else is involved, thus limiting the possible damage done by malicious infiltrators.
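A permissions model like that can be sketched in a few lines. The following Python mock-up is purely illustrative — it is not Pursuance’s actual implementation, and the class names, trust-level scheme, and `full_access_level` threshold are all invented — but it shows the core idea: minimum-trust members see only their own assigned tasks, while trusted members see the whole hierarchy.

```python
from dataclasses import dataclass


@dataclass
class Task:
    task_id: int
    description: str
    assignees: set  # names of members assigned to this task


@dataclass
class Member:
    name: str
    trust_level: int  # 0 = minimum trust; higher levels see more


def visible_tasks(member: Member, tasks: list, full_access_level: int = 2) -> list:
    """Return only the tasks this member is allowed to see."""
    if member.trust_level >= full_access_level:
        # Trusted members can view the entire task hierarchy.
        return list(tasks)
    # Minimum-trust members see only tasks explicitly assigned to them,
    # limiting what a malicious infiltrator could learn.
    return [t for t in tasks if member.name in t.assignees]


tasks = [
    Task(1, "Index leaked documents", {"alice"}),
    Task(2, "Draft summary article", {"bob"}),
]
newcomer = Member("alice", trust_level=0)
organizer = Member("carol", trust_level=2)

print([t.task_id for t in visible_tasks(newcomer, tasks)])   # [1]
print([t.task_id for t in visible_tasks(organizer, tasks)])  # [1, 2]
```

The design choice being illustrated is compartmentalization: visibility is computed per request from trust level plus explicit assignment, so a compromised low-trust account leaks only its own slice of the project.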
This sounds very much like Brown wants to get back to the work he was doing before the federal government interrupted his life with trumped-up charges. More journalism, more collaborations, and a suite of tools to keep those who view investigative journalism as threatening locked out of the process.
Crowdsourcing has obviously now been a thing for some time. Along internet timelines, in fact, crowdsourcing is now something close to a mature business practice, and it’s used for all manner of things: funding the creation of goods, serving as a form of market research for new products and services, all the way up to and including getting fans involved in the creation and shaping of an end product. The video game industry was naturally an early adopter of this business model, given how well-suited the industry is to technological innovation. Here too we have seen a range of crowdsourcing efforts, from funding game creation through platforms like Kickstarter to empowering supporters to shape the development of the game.
In that last example, it was Double Fine and Tim Schafer getting gamers involved in what would otherwise be the job of the creative team behind their game. The personalities here may matter greatly, because Ubisoft has recently unveiled an attempt to get its fans further involved in the game-creation process, yet many people are up in arms over it. Let’s start with what Ubisoft is attempting with its anticipated next installment in the Beyond Good & Evil franchise.
The long-awaited sequel to a 2003 Ubisoft game that was critically loved but flopped at retail, Beyond Good and Evil 2 will take place in an open universe full of strange creatures and cultures. During its E3 press conference, Ubisoft said that fans will be able to help populate that universe with their own music and artwork through a partnership with a company called HitRECord, with that company’s founder, actor-turned-entrepreneur Joseph Gordon-Levitt, appearing on stage.
The HitRECord-powered Space Monkey Program allows fans to submit ideas and works into a series of musical and visual categories like “devotional music,” “anti-hybrid propaganda,” and “anti-establishment art.” Other fans can then comment on and remix those works, which will ultimately be evaluated by HitRECord and—if they fit the game well enough—sent along to Ubisoft. Everybody who’s contributed at all to an accepted work will be paid.
If you’re anything like me, your reaction to this was purely positive. Fans of Ubisoft titles and Beyond Good & Evil get to contribute to the game in a way they will recognize and be paid some amount of money for? How cool is that? Collaboration with fans on the creation of art is squarely in the realm of our CwF+RtB formula. To add some compensation to that makes this all the better. And, in my opinion, if this were anyone but Ubisoft doing this kind of thing, nobody would be pushing back on it at all. But because of Ubisoft’s sketchy reputation, many are viewing this through purely cynical glasses and seeing nothing other than a company trying to avoid paying the full rate for the creation of its game.
Almost immediately after Ubisoft’s conference, critics and developers started asking questions: Why not just pay full-time, salaried developers to do this work? What happens if fans’ work doesn’t get accepted? Do they not get paid? Did they do it all for nothing?
Scott Benson, the co-creator of the indie game Night in the Woods and a vocal advocate for workers’ rights, pointed out that HitRECord’s business model seems to rely on what’s known as “spec work,” short for “speculation.” This is a common but nonetheless ethically muddy practice in creative and design fields. When you do work “on spec,” you’re producing something that a buyer might decide to pick up and then pay you for.
Great, except this isn’t being done in the “creative industry” at all, but rather directly with fans of the game franchise. Were Ubisoft trying to strong-arm artists for content it would otherwise pay for up front, then, yeah, this would suck. That’s not what it’s doing at all, though. Instead, the company is going directly to fans and asking them, rather than coercing them, to get involved in the project in a way those fans will find meaningful. Does this have the happy coincidence of being somewhat less costly? Sure. There’s no denying that. But so what? If fans of a game are able to compete with the art created by the creative industry and want to do that type of thing under this platform, where exactly is the ethical dilemma? Were Benson to have his way, fans would be denied this opportunity because… why? Because someone else might not get paid? Where is the sense in that?
There’s also something to be said for HitRECord’s meta-crowdsourcing experiment here and how interesting it will be to see if it can be pulled off.
“At HR, people build on each other’s ideas, and our website (and community) keeps track of how projects evolve—and how ideas influence one another,” HitRECord executive producer Jared Geller said in an email, noting that the company has paid out a total of nearly $3 million since it was founded in 2010. “So any contribution that is included in any of the songs or visuals (guitar parts, vocal stems, etc) delivered to the Beyond Good and Evil 2 dev team will get credited and paid. If your contribution isn’t used, you don’t get paid.”
So it’s not just milking a fanbase for cheap labor, but allowing that fanbase to then play off of one another and build a community product, which will be injected into the game and for which they will be paid. I mean, come on, if everyone could take their labor union hats off for just a second, they’d have to admit how cool an experiment this is. And, while HitRECord will have the ultimate decision-making authority on how compensation is divvied up between creators, it even takes feedback from multiple creators into account when making those decisions.
The one area where there might be real concern is copyright infringement.
There are other possible complications, as well, said a representative of NoSpec, an organization that advocates against the practice of spec work.
“When people who participate in spec work know that the chance of payment is slim-to-none, it invites the fastest possible turnaround, and we’ve found that spec websites (those that sell design contest listings) are rife with plagiarism,” wrote the rep in an email.
There is truth to this, and Ubisoft and HitRECord had better have their shit in order if they don’t want to turn this into some hellscape of accusations about plagiarism and copyright infringement. But if they can pull this off, the end result is going to be the injection of the voice of the fan directly into its game, which is about all we could hope for coming from a content producer.
I’ll end this with a thought experiment. Imagine for a moment if I had written this same post, except I did a find/replace for “Ubisoft” and replaced it with “Sole game creator.” Does anyone really think the same level of outrage would exist? If not, then this isn’t a moral question at all, but a monetary one. And if that’s the case, it should go without saying that Ubisoft’s reputation shouldn’t prevent it from being able to try something good and cool with its fans.