Mike Masnick’s Techdirt Profile


About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of the Copia Institute and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick


Posted on Techdirt - 30 September 2020 @ 9:35am

I See This Stupid New Section 230 Bill, And I Say It's A Stupid Section 230 Bill

from the file-that dept

Another day, another truly terrible bill to "reform" Section 230. This is another "bipartisan" bill, which should be a reminder that bad Section 230 ideas are coming from across the entire political spectrum in Congress. It's being released by Senator Joe Manchin along with Senator John Cornyn, and it's obnoxiously called the See Something Say Something Online Act. I do wonder if they licensed that term, because it's New York's Metropolitan Transportation Authority (MTA) that holds the trademark for "see something, say something," and it is notoriously litigious about it. Indeed, the DHS program of the same name "licensed" the name from the MTA, though I still fail to see how either has anything to do with "commerce."

As a side note, before we get into why this bill is so, so bad, let's just note that the whole "See Something, Say Something" concept has been thoroughly and comprehensively debunked as a reasonable approach to law enforcement or stopping crime. Indeed, all "See Something, Say Something" has been shown to accomplish so far is to stuff massive databases full of useless information of people spying on each other.

Now, to the actual bill. It's worse than ridiculous. It's yet another one of these bills that seems to think it can blame any and all societal ills on Section 230. In this case, it's trying to blame the internet and Section 230 for any kind of criminal behavior, with a focus on illegal opioid sales. I know that this is an issue Manchin has been vocal about for years (and for good reason: West Virginia regularly appears to have the highest overdose rate of any state in the country). But blaming the internet, or Section 230, for that is ridiculous and will not help stop the problem.

And yet, Manchin seems to think he can magically deal with the opioid problem by creating a massive regulatory burden for the internet in a very dangerous manner. The basics are that it would require any website that "detects a suspicious transmission" to submit a "suspicious transmission activity report" or "STAR." What is a "suspicious transmission" you ask?

The term "suspicious transmission" means any public or private post, message, comment, tag, transaction, or any other user-generated content or transmission that commits, facilitates, incites, promotes, or otherwise assists the commission of a major crime.

So... that's preposterously broad. If some comment spammer shows up in the Techdirt comments and posts some nonsense "promoting" drugs, I would have to file an official report with the DOJ? This would be an incredible burden for nearly any website.

And how would it be judged if that suspicious activity was "known" by the platform? Again, we get a very, very, very broad definition:

The term "known suspicious transmission" is any suspicious transmission that an interactive computer service should have reasonably known to have occurred or have been notified of by a director, officer, employ, agent, interactive computer service user, or State or Federal law enforcement agency.

So... if they claim that a website should have known, that's enough that the website has to file one of these crazy reports. Or if basically anyone merely claims something on a website is loosely related to a crime, the website is then required to file one of these STAR reports. Do the staffers who wrote this bill have no clue how many false reports are made every damn day?

And it's not just the websites. The bill would open up this STAR process directly to anyone. This is where it takes the problematic "See Something, Say Something" concept to ridiculous new heights:

The agency designated or established under [this law] shall establish a centralized online resource, which may be used by individual members of the public to report suspicious activity related to major crimes for investigation by the appropriate law enforcement or regulatory agency.

In other words, the government would set up a snitch database that will undoubtedly be filled with useless junk or people claiming that they saw some "illegal" garbage online that is unlikely to actually be illegal. Just the fact that this encourages people to snitch on others to the DOJ seems problematic enough.

The bill also appears to have a built-in gag order, preventing any website from disclosing information about the STARs it has filed with the government. That's a huge blow to transparency. In fact, the bill also says that all of these reports are exempt from any FOIA request.

Of course, all of that is the "new" stuff. The change to Section 230 is that it would be amended to say that if any website fails to submit the required STARs, it loses Section 230 protections and may be held liable for the underlying "suspicious transmission."

There are many, many, many problems with this whole bill. It would be massively burdensome to every website that hosts any form of user-generated content. I don't think we (or any blog, honestly) could reasonably continue to host comments with this law on the books. We'd have to police all of our comments closely, and with the structure of the bill giving no leeway, we'd be compelled to file these snitch reports to the DOJ on any possibly "suspicious" comments, with "suspicious" defined so broadly that merely talking about some sort of crime would necessitate a filing. That's an impossible standard.

Of course, this wouldn't do anything useful. It wouldn't help law enforcement discover crime rings online, because this STAR database would certainly be overwhelmed with garbage, just like every other "See Something, Say Something" database. Also, the fact that it requires websites to report on private information means it will require websites to snoop on private messages, and turn them over to law enforcement. That raises some fairly significant 4th Amendment concerns, by turning private companies into arms of law enforcement.

So, it wouldn't fix anything, would create a massive snoop database for law enforcement, would encourage people to snitch on anything "suspicious" and to force websites to file these useless reports -- while also likely shutting down many user forums online (especially those centered around helping those with drug addiction problems). In other words, it's yet another garbage Section 230 reform bill.

Someone please make these stop.


Posted on Techdirt - 29 September 2020 @ 11:59am

Our Latest Techdirt Gear: I Paid More For This T-Shirt Than Trump Paid In Taxes

from the and-you-can! dept

We were working on some new Techdirt gear designs for our Techdirt Gear shop at Threadless (stay tuned!) when the NY Times dropped its bombshell of a story regarding President Donald Trump's tax returns. As you likely know, despite every Presidential candidate in my lifetime releasing their tax returns, Trump has refused to do so (despite promises that he would). For years, reporters have sought out those taxes, and somehow the reporters at the Times got them. There were many interesting things highlighted in those tax returns, but a key point has resonated widely: in the year Trump won the Presidency, he paid only $750 in federal taxes (the same as he paid in many other years as well, including his first year as President in 2017).

Lots of people have been pointing out that this is crazy for all sorts of reasons, and plenty of people, including Joe Biden, have jumped in with "I paid more in taxes than Donald Trump" gear. But, here at Techdirt, we believe in... going bigger. So we're selling an "I Paid More For This T-Shirt Than Trump Paid In Taxes" t-shirt... for $751 (plus shipping).

This is a real shirt and you can really buy it. Whether or not it's worth paying $751 for such a t-shirt is a decision that only you can make, though we'd be happy with that kind of support.

Of course, if that's a bit too pricey for you, we do still have a lot of other more affordable gear you can pick up too, like our copyright takedown gear:

Or our 1st Emojiment gear that explains the 1st Amendment in emoji.

And many other designs and products (not just t-shirts, we've got face masks, mugs and notebooks among many other items as well). So shop around, and feel free to spend more supporting us than the President has spent supporting the United States of America.


Posted on Techdirt - 29 September 2020 @ 10:51am

Hypocrite FCC Commissioner Cheers On Zoom Blocking Usage By Person He Disagrees With; While Insisting Social Media Shouldn't Block People

from the want-to-try-that-again-brendan? dept

FCC Commissioner Brendan Carr has been first in line to gleefully pump up the still-unsubstantiated-by-any-evidence claim that social media companies are "unfairly censoring conservatives." In fact, he's been very vocal about the idea that social media should not be blocking anyone for their political beliefs. Just a few months ago he attacked "conservative bias and threats to free speech on the internet," saying that "doing nothing is not the answer." He complained loudly when Twitter briefly banned an infamous Twitter troll/Republican political operative for harassment, saying (falsely) that it was "censorship" and an attempt by social media "gatekeepers" to "launch a war on memes so they can control the 2020 narrative." No, dude. It was Twitter following its rules against harassment.

Oh, and here's my favorite. Back in 2016, he happily quoted his then-boss Ajit Pai saying "the impulse to squelch free speech on college campuses is anything but progressive."

Can you take a wild guess where this is heading? I know that you can.

Last week there was yet another hyped-up culture war nonsense thing, when some conservative websites started freaking out that two faculty members at San Francisco State were planning to host a webinar/roundtable conversation with one of the participants being Leila Khaled, considered to be the first woman to hijack a plane. She hijacked two planes -- one in 1969 and one in 1970 -- as part of the Popular Front for the Liberation of Palestine. In 1997 (many years later), the State Department designated the PFLP as a foreign terrorist group.

This resulted in many arguing that merely allowing Khaled to speak with students was somehow a violation of laws against material support for terrorism. This is wrong. Some point to the ruling in Holder v. Humanitarian Law Project, in which the Supreme Court (somewhat awkwardly) upheld the law against material support for terrorism, holding that groups seeking to support humanitarian or legal projects associated with terrorist organizations could not do so.

But that's not what's happening here. This was just some professors hosting a Zoom call. As FIRE points out, there does not seem to be any realistic way to argue that Khaled being on a Zoom call violates the law:

The Lawfare Project went further, calling on the U.S. Department of Justice to “take appropriate action” against the SFSU faculty members for hosting the discussion and Zoom for broadcasting it, alleging that the discussion amounts to material support for terrorism in violation of federal law. Rep. Doug Lamborn joined this call, asking United States Secretary of Education Betsy DeVos and Secretary of the Treasury Steven Mnuchin to open investigations into — and cut all federal funding to — the university, arguing that Khaled cannot be given a “platform” because it is “aiding the dissemination of terrorist propaganda” and is “not speech or academic inquiry.”

That’s unlikely. First, we’re not aware of any indication that Khaled will be compensated for her virtual appearance, meaning the only conceivable “support” provided is the virtual forum for the discussion — in other words, speech. The Supreme Court’s pronouncement on the intersection of the First Amendment and “material support for terrorism” laws — Holder v. Humanitarian Law Project — is muddled, but focused on the “fungible” nature of the training provided to prohibited organizations, as it allowed the organizations to become more efficient. But the Court also pointedly noted that the law does not prohibit being a member of the organization, just providing it with “material” support. In other words, association — even membership — with an organization is protected by the First Amendment.

Second, Holder should not be read to enshrine no-platforming into federal law. Universities should be places where students and faculty can discuss — and hear from — people whose acts or views may be controversial or unlawful. Students and faculty members must remain free to invite speakers — whether they agree with them or not — without fear of censorship or punishment by administrator, bureaucrat, or politician. The First Amendment extends a right not only to speak, but also a right to hear or receive information. This right makes no exception for people who have engaged in misdeeds, criminal activity, or membership in blacklisted organizations. Indeed, in Kleindienst v. Mandel, although the First Amendment did not require the federal government to grant a visa to a communist speaker, the Supreme Court said it was “loath” to hold that the ability of students and faculty to hear from the speaker by telephone was sufficient to override their interest in the “particular form of access” to face-to-face discussions. If Holder applies here, then there is no form of access available.

You may or may not like Khaled, and you may not approve of her group or her views or her past actions. But what she was looking to do here was purely in the realm of speech. 1st Amendment protected speech. As complaints got louder, Zoom, on whose platform the call was to be held, said that it would not allow the call to be done. That... seems concerning. While private social media companies have every right to determine whose content they wish to host, it gets a lot trickier when we're talking about infrastructure and just the transmission, rather than hosting, of speech.

As some are pointing out, Zoom's decision represents a terrible precedent. If you make enough noise, you can block someone from merely presenting. In that article, Mark Gray wonders if Zoom might be pressured into refusing to broadcast a hypothetical 2024 Republican National Convention because Roger Stone is one of the speakers. After all, he's been convicted of seven felonies. Or what about various convicts who have done their time and now give lectures to warn people about the mistakes they made in the past? Should they not be able to use Zoom?

And, yet, there was Brendan Carr, first in line, to cheer on this decision. He who had so eagerly tweeted against the "impulse to squelch free speech on college campuses" was cheering on a move by a tech company to squelch free speech on campus, saying that you "don't need to hear both sides." When I pointed out to him that this seemed hypocritical, he doubled down, saying that he has "no issue with a company denying service in this case," because Khaled was a "convicted terrorist who literally hijacked two airplanes where at least one person was shot." Lest you think Khaled shot someone, it was her partner in trying to hijack the plane who got shot. In neither of the two hijackings -- which, again, were 50 years ago -- did anyone else die.

But, really, there are two key points here. First, Carr is admitting that his position is hypocritical, and that he does believe private platforms should be able to deny service in certain cases based on the political views of the individual -- it's just that he seems to support it when those political views diverge from his own. Carr seems to suggest that an appropriate standard is when someone has been "convicted" of certain crimes. I do wonder, then, if he thinks that no one in prison for murder should have access to phone lines. I mean, that's an area that he, as an FCC Commissioner, even has some direct say over. Should, say, Kyle Rittenhouse, who shot and killed people in Wisconsin, be denied access to any phone service?

Second, while I have explained many times why websites should have freedom over whose content they host and whose they do not, it gets much, much trickier when we're not talking about hosting content, but the mere transmission of content. Zoom is little different from a telephone call. And yet, here, it's blocking people based on political complaints about the speaker's views and the crimes she committed 50 years ago. Even if you disagree with Khaled, and even if you condemn her actions 50 years ago, it seems like a stretch to say she shouldn't be allowed to take part in a Zoom webinar.


Posted on Techdirt - 29 September 2020 @ 9:32am

The Social Dilemma Manipulates You With Misinformation As It Tries To Warn You Of Manipulation By Misinformation

from the it's-not-good dept

There's been a lot of buzz lately about the Netflix documentary The Social Dilemma, which we've been told reveals "the dangerous human impact of social networking, with tech experts sounding the alarm on their own creations." I know that the documentary has generated widespread discussion -- especially among those not in the tech space. But there's a problem with the film: nearly everything it claims social media platforms do -- manipulating people with misinformation -- the film does itself. It is horribly one-sided, frequently misrepresents some fairly basic things, and then uses straight-up misinformation to argue that social media has some sort of godlike control over the people who use it. It's nonsense.

Also, I should note that nowhere do they mention that Netflix, the company which funded, produced, distributed, and widely promoted the documentary, is also arguably the first big internet company to spend time, money, and resources on trying to perfect the "recommendation algorithm" that is at the heart of the film's argument that these internet companies are evil. I guess some folks no longer remember, but a decade ago, Netflix even held a huge $1 million prize contest asking anyone to try to build a better recommendation algorithm. (Update: it has been claimed that, despite this being a "Netflix original" widely promoted and distributed by Netflix as such, Netflix did not "fund" the film -- which doesn't really change anything here, given everything else Netflix has done to make this film widely seen.)

There are a number of reasons to complain about what is portrayed in the film, but I'll highlight just a few key ones. One narrative device used throughout the film is these weird... not quite "re-enactments" but odd "afterschool special" style fictional clips of a family with kids who really use social media a lot. And, yes, there are plenty of kids out there who have trouble putting down their phones/tablets -- and there are reasons to be concerned about that (or at least to investigate the larger ramifications of it). But the film not only exaggerates those concerns to a ridiculous degree reminiscent of Reefer Madness-style moral panic propaganda, it repeatedly suggests (1) that social media can ruin a kid's life in like two days, and (2) that social media can, within a matter of a week or two, turn an ordinary teen into a radicalized hate monger who will join in-person mobs (leading to arrest).

Even worse, the fictional clips go a level deeper, trying to anthropomorphize the evil "algorithm" in the form of three white dudes standing... on the deck of the Starship Enterprise? Or some other weird sci-fi trope:

Throughout the film, these three guys and their weird computer-ish controls in front of them are shown trying to "increase engagement" of the son in the family through any means necessary -- including magically forcing some girl they think he likes to interact with him. And, I'm sorry, but that's... not how any of this works.

It is literally emotionally engaging misinformation designed to impact our beliefs and actions. In other words, the same thing that the film claims social media companies are doing.

But it's also the same thing any company has tried to do in the past through... advertising. One theme that runs throughout the film, and is dead wrong, is the idea that social media advertising can somehow "control" you. And... uh... no. Come on. Social media advertising is a joke. Can it better target some ads? Sure thing, and if those ads target stuff you actually find useful, then... that's a good thing? But, most social media advertising is still garbage. It's why so many of us block or ignore ads. Because they're still just not that good.

The film is really designed to showcase Tristan Harris, who probably takes up a third of the screen time. Tristan made his name by being the internal "ethicist" at Google for a little while before setting out on his own to become the high prophet of "internet companies are trying to manipulate us!" But, as others have pointed out, Tristan has a habit of vastly exaggerating things, or of being misleading himself. Just one example, highlighted by Antonio Garcia-Martinez in his must-read dismantling of the film: Harris argues that we didn't have these same problems with earlier technologies -- like the bicycle. But as Antonio points out, there was, in fact, quite a large moral panic about the bicycle, and the Pessimist's Archive makes the point quite clearly in this little clip:

As we've discussed for years, pretty much every new form of technology or entertainment -- including the waltz, chess, the telephone and more -- has resulted in a similar moral panic, almost none of which proved to be accurate.

That doesn't mean that there aren't important concerns and messages we ought to think about regarding the design of the internet and the various services we use. But the problem is that this film totally fails to adequately address any of those concerns, and uses exactly the wrong messengers to bring the message. The vast majority of the talking heads in the film are former (and, in some cases, current) employees of the big tech companies who "regret" what happened with what they built. But they don't seem to have any better idea of what to do than "put down your phone," which, like "just say no" drug campaigns and sex-abstinence education programs, has long been proven to be absolutely useless.

Also, it should be noted that the guy who gets the second-most screen time, former Facebook executive and Pinterest president Tim Kendall, is currently the CEO of a company that tries to help you limit your phone usage. Anyone think he has, perhaps, ulterior motives to play up how "addictive" he made Facebook and Pinterest?

Notably, in nearly every case, the film takes the most nefarious and extreme explanations for what is happening at social media companies. At no time does it present a single person who offers a counterpoint, or suggests that the descriptions in the film are exaggerated and misleading. Again, all it does is use misinformation and manipulation to warn you about other tech companies supposedly using misinformation to manipulate.

On top of that, as many people have noted, there are many, many activists and experts -- though frequently not white male former tech bros in t-shirts -- who have been working on actual ways to improve technology and services, and to provide real solutions. But the film ignores all of them as well.

The entire conceit of the film is that these few tech giants (again, notably not Netflix, despite it being the leader in recommendation algorithms) have some sort of "total control" over the minds and actions of people. There's some nonsense in there from Harvard professor Shoshana Zuboff, coiner of the term "Surveillance Capitalism," which always feels like a useful phrase until you dig in and realize that Zuboff has less than no clue about how the internet actually works. She insists that these companies are selling "human futures," which... is... not... how any of this works.

As Antonio summarizes, Zuboff seems to think that Silicon Valley is doing magic that it is not doing. This is akin to Josh Hawley last week arguing that the tech platforms have "total control" over our brains and our voting abilities:

Less diplomatically, everything Zuboff says is a nonsensical non sequitur.

”This is a world of certainty.”

Then why am I, crusty ad tech veteran, building probabilistic models all day?

“This is a totally new world.”

No it isn’t. I was there, at Facebook when it happened. We copied it all from the direct-mail people who’ve done it for decades.

“They’re trading human futures like we do pork-belly futures.”


The CBC's coverage of the film rightly points out that the film greatly exaggerates reality, to the point of it being misinformation.

One of the ways the documentary represents surveillance, Chun noted, is by using three human actors trying to entice someone to use their phone and stay on social media longer. Along with their presentation of social media as an addiction ("there's a difference between a habit and an addiction," Chun said) is the fear that is created when people think that real humans have access to all of their information, instead of algorithms that predict human behavior.

Though Chun argued users should not be tracked, she said the idea that algorithms know "everything" about you isn't correct. She argued the film itself is based on revealing "open secrets," and the information these services use to present personalized ads doesn't reflect a deep knowledge of users.

"The idea that somehow they control you is overblown," she said. "At the same time, you can say that a lot of what they know about you is accurate. But then the question you have to ask yourself is: So what?"

Indeed, the claims about "addiction" are so overblown as to be laughable. The film repeatedly argues that once you're addicted to social media these companies can change your thoughts. But... that's not what addiction is or how it works. It's not brainwashing.

Anyway, there were many more things wrong with it -- and even if people can agree that there are some significant problems with the internet of today, I have a hard time believing that the way you fix manipulation and misinformation is by creating a documentary that is full of misinformation and designed to emotionally manipulate people into believing things that just aren't true.


Posted on Techdirt - 28 September 2020 @ 12:05pm

Judge Rejected Ban On TikTok Because Trump's DOJ Can't Show Any Real National Security Threat

from the because-of-course dept

Earlier today we wrote about a judge blocking Trump's TikTok ban, though noting that the full reasoning was under seal. Right about the time that post went up, the details were unsealed. Unlike the WeChat injunction, which was granted on 1st Amendment grounds, the injunction here doesn't touch the 1st Amendment questions. It just says that the Trump White House (even after presenting evidence under seal) totally failed to substantiate the national security threat of TikTok, even under the IEEPA (International Emergency Economic Powers Act), which grants the President tragically and dangerously broad powers to claim a "national emergency" in order to block international commerce.

As noted above, IEEPA contains a broad grant of authority to declare national emergencies and to prohibit certain transactions with foreign countries or foreign nationals that pose risks to the national security of the United States. But IEEPA also contains two express limitations relevant here: the “authority granted to the President . . . does not include the authority to regulate or prohibit, directly or indirectly” either (a) the importation or exportation of “information or informational materials”; or (b) “personal communication[s], which do[] not involve a transfer of anything of value.”

We pointed out this clause when Trump's executive order was first issued, noting that it likely doomed the order, so it's good to see the judge highlight it. The DOJ pushed back on this, saying that since it was just prohibiting certain "business-to-business economic transactions," it wasn't actually prohibiting the movement of information. Incredibly, the DOJ also claimed it had not taken any action concerning "TikTok users themselves." The judge more or less responds with a sarcastic "come on, you can't be serious."

But that argument fails to grapple with IEEPA’s text. Section 1702(b)(3) provides that IEEPA’s grant of authority “does not include the authority to regulate or prohibit, directly or indirectly,” the cross-border transmission of “information and informational materials.”... The content exchanged by TikTok users constitutes “information and informational materials”; indeed, much of that content appears to be (or to be analogous to) “publications, films, . . . photographs, . . . artworks, . . . and news wire feeds.” Id. And the purpose and effect of the Secretary’s prohibitions is to limit, and ultimately reduce to zero, the number of U.S. users who can comment on the platform and have their personal data on TikTok.... At a minimum, then, the Secretary’s prohibitions “indirectly” “regulate” the transmission of “informational materials” by U.S. persons.

Moreover, Section 1702(b)(3)’s express limitation applies to “commercial” informational materials. If prohibitions on business-to-business transactions could not constitute the regulation of “informational materials,” then there would have been no reason for Congress to include the word “commercial” when defining the scope of § 1702(b)(3)’s limitation.....

To be sure, TikTok (like a news wire, which is expressly identified in IEEPA’s carveout) is primarily a conduit of “informational materials.” In that sense, it is (among other things) a “medium of transmission,” and IEEPA provides that this carveout applies “regardless of format or medium of transmission.” 50 U.S.C. § 1702(b)(3). That is especially true where, as here, the transmitting medium is inextricably bound up with and exists primarily to share protected informational materials.

From there, the DOJ tried to argue that the Espionage Act (which I and many others believe is already unconstitutional), when combined with the IEEPA, can be used to block certain information. But again, the judge says that's not how any of this works, especially because kids dancing on TikTok are not violating the Espionage Act.

Finally, the government proposes a novel reading of the Espionage Act.... Section 1702(b)(3) contains an exception to its exception, so to speak, and permits the regulation of informational materials, “with respect to . . .acts . . . prohibited by chapter 37 of Title 18.” That Title authorizes life imprisonment or the death penalty for those who share U.S. defense secrets (especially classified government materials) with foreign adversaries.... But it is not plausible that the films, photos, art, or even personal information U.S. users share on TikTok fall within the plain meaning of the Espionage Act.

At the end of the order, the Court also addresses the national security question while looking at the "balance of equities" in determining whether or not an injunction against the ban was appropriate. And it notes that, despite the DOJ presenting evidence in sealed filings, it wasn't enough to substantiate the claims that TikTok is a national security threat.

The government argues that a preliminary injunction would displace and frustrate the President’s decision on how to best address a national security threat—an area where the courts typically defer to the President’s judgment.... The Court must, of course, give deference to the Executive Branch’s “evaluation of the facts” and the “sensitive and weighty interests of national security and foreign affairs,” Holder v. Humanitarian Law Project, 561 U.S. 1, 33–34 (2010), including “the timing of those . . . decisions.” Holy Land Found. for Relief & Dev. v. Ashcroft, 219 F. Supp. 2d 57, 74 n.28 (D.D.C. 2002). Here, the government has provided ample evidence that China presents a significant national security threat, although the specific evidence of the threat posed by Plaintiffs, as well as whether the prohibitions are the only effective way to address that threat, remains less substantial.

As for why the court only granted the injunction for last night's ban, and not the November 12th more complete ban, the court basically says "we have time to deal with that one later," but presents no suggestion that it would allow that ban to move forward either.

... the only truly imminent and immediate harm that Plaintiffs will suffer absent an injunction relates to paragraph 1 of the Commerce Identification. The Court therefore agrees with the government that injunctive relief should be limited to the prohibitions contained in paragraph 1, and that the other paragraphs of the Commerce Identification should appropriately be the subject of separate proceedings, which can be briefed and decided (potentially through cross-motions for summary judgment, and on a full administrative record) prior to those restrictions’ effective date of November 12.

So, the WeChat ban gets blocked on 1st Amendment grounds, and the TikTok ban gets blocked because the IEEPA doesn't let the President do what he wants to do. And all of this is just performative nonsense anyway, wasting two separate courtrooms' time, not to mention causing significant headaches for the many companies that would have had to deal with the fallout of a ban. All because Trump is mad at kids on TikTok who don't like him.

Still, it does seem notable that even under seal the government couldn't present any real evidence to the court of the threat of TikTok being owned by a Chinese firm. As if we didn't already have enough evidence that this entire debacle was a made-up culture war rather than a serious concern. It remains incredible to me that otherwise serious people jumped on board with Trump's decision to ban these apps.


Posted on Techdirt - 28 September 2020 @ 9:31am

Court Says Trump's Plan To Block TikTok Can't Go Into Effect Yet

from the blocked dept

As we noted late on Friday, even with the weird grifty deal between TikTok and Oracle, Trump's ban on TikTok was scheduled to go into effect last night -- but a court was rushing to review a request by TikTok/ByteDance to put in place a temporary injunction to stop the rules from taking effect.

In an emergency hearing on Sunday morning, the judge appeared inclined to grant the injunction, noting:

This was a unilateral decision with very little opportunity for the plaintiffs to be heard and the result, whether we're talking about November or tonight, is a fairly significant deprivation.

If you don't recall, the block had two stages. The first was supposed to go into effect last night, blocking app stores from any new downloads (including updates) for the software. The second would go into effect on November 12th, and that would block other US services from helping TikTok (no optimization, no CDNs, no peering) as well as any use of TikTok's API.

In a ruling late on Sunday, the judge agreed to a preliminary injunction blocking the rules from going into effect last night, but did not issue one blocking the November rules. However, the reasoning is not yet known, because it was filed as a sealed memorandum, though both parties have been asked to approve unsealing it, perhaps as soon as today.

It seems likely that the reasoning will be at least somewhat similar to the preliminary injunction that blocked the WeChat ban from going into effect: that you can't just magically wave your arms around and scream "national security" to ban an entire communications platform from the US -- especially without addressing the 1st Amendment concerns.

The Commerce Department put out a brief statement saying that the Executive Order is "fully consistent with the law" (it isn't) and "promotes legitimate national security interests" (it doesn't). It also says it will comply with the injunction, but will "vigorously defend" the E.O. from legal challenges.

On September 27, 2020, the United States District Court for the District of Columbia granted a nationwide preliminary injunction against the implementation of Executive Order (E.O.) 13942, limited to the Secretary of Commerce’s Identification of Prohibited Transactions with TikTok/ByteDance involving ‘any provision of services… to distribute or maintain the TikTok mobile application, constituent code, or application updates through an online mobile application store.’ The E.O. is fully consistent with the law and promotes legitimate national security interests. The Government will comply with the injunction and has taken immediate steps to do so, but intends to vigorously defend the E.O. and the Secretary’s implementation efforts from legal challenges.

In other words, this isn't over yet by a long shot.


Posted on Techdirt - 25 September 2020 @ 7:39pm

TikTok And The DOJ Still Fighting It Out In Court Despite Oracle 'Deal'

from the look-at-that dept

Even though Trump gave his supposed okay to the grifty TikTok/Oracle hosting deal, it appears that TikTok, ByteDance and the Trump administration are still busy fighting this out in court. The Trump rules to ban the app are still set to go into effect on Sunday. And while WeChat users were able to block that app's rules from going into effect, the TikTok rules are still technically scheduled to take effect this weekend.

TikTok has asked for an injunction to stop the ban and the court is going to decide at the last minute whether to issue an injunction in the TikTok case as well. This is, in part, because the Oracle deal (which is not a sale and accomplishes none of the stated goals of the original executive order) still needs approval from the Chinese side -- and there are indications that China wants a better deal.

After a hearing on Thursday, the judge ordered the government to either respond to the request for an injunction or to submit "a notice describing [the DOJ's] plan to delay the effective date of the subset of prohibited transactions directed against TikTok that are scheduled to go into effect" on Sunday at midnight. The DOJ, rather than saying it was delaying the TikTok ban, instead filed an opposition to the proposed injunction, though it did so under seal, so we can't see what the DOJ said.

The judge is expected to rule by Sunday, and it's possible (likely) that he'll drag the lawyers from both sides into (virtual) court this weekend. The whole thing remains insane. The President should never have the right to just ban a random social media app like this. Hopefully, the court agrees to an injunction while everything else gets worked out.


Posted on Techdirt - 25 September 2020 @ 1:42pm

China Blocks Wikimedia From WIPO... Because There's A Taiwanese Wikimedia Chapter

from the petty dept

The World Intellectual Property Organization, WIPO, which has a long history of poor decision-making despite its crucial role in helping to define copyright and patent standards around the globe, is now letting China block Wikimedia from having "observer status." As Teresa Nobre from Communia notes, tons of civil society/public interest orgs have been granted observer status at WIPO, including EFF, Creative Commons and others. In fact, the only other time anyone can remember an organization being blocked is when Pirate Parties International was blocked. Indeed, when we wrote about that, we noted that it coincided with WIPO granting observer status to an organization that claimed its goal was to "free individuals and organizations from space lizards' control." Really.

In other words, it's not that common for WIPO to block anyone from observer status.

So why was Wikimedia blocked? The answer is that China doesn't like the fact that Wikimedia Taiwan exists.

China was the only country to raise objections to the accreditation of the Wikimedia Foundation as an official observer. Their last-minute objections claimed Wikimedia’s application was incomplete, and suggested that the Wikimedia Foundation was carrying out political activities via the volunteer-led Wikimedia Taiwan chapter. The United Kingdom and the United States voiced support for the Foundation’s application.

This is just petty assholery by the Chinese government and its infatuation with denying Taiwan's independent existence. Again, the Wikimedia Taiwan chapter is an effort by volunteers in Taiwan. And, so what? How is that chapter going to have any impact whatsoever on Wikimedia's observer status at WIPO? Does China really believe that because there's a volunteer effort to support Wikimedia in Taiwan, it's somehow impossible for Wikimedia to be an observer at WIPO?

If China is legitimately arguing that no one should be able to engage in any international organization if they have some loose connection to Taiwan, they're going to find that not many organizations are able to participate. There is no legitimate reason for China to do this, and all it does is call that much more attention to China's obsessive attitude over the existence of an independent Taiwan.


Posted on Techdirt - 25 September 2020 @ 9:54am

Josh Hawley Is A Lying Demagogue Who Has Built A Fake Fantasy World About 'Evil Big Tech'

from the get-over-it-dude dept

What's up Senator Hawley? What's bugging you today? Yesterday, Hawley went to the floor of the Senate to try to sneakily move forward one of his many, many bills to destroy the internet and take away Section 230. He tried to sneak it through without letting folks who he knew would oppose it know, in the hopes that they might not show up to stop him. In fact, he did it at a time when the key person blocking his bill -- Senator Ron Wyden, who authored Section 230 and knows that Hawley is lying about it -- was in an important committee meeting.

What happened then is what you can see in this video below, in which Wyden raced over and had to give unprepared remarks to explain to Hawley that he's a lying idiot.

Almost everything that Hawley says here is a lie or is garbage. It starts out with him gravely staring at the camera, saying that we're approaching an important election, and then it goes into this nonsense:

But there are a group of people who seem intent on influencing the people's choice on manipulating it, on shaping it, according to their own preferences.

Yes. The Russians seem intent on that. As does your political party, including the President and the Attorney General, who have repeatedly made moves to try to invalidate the ability of the public to vote. Is that who you're talking about?

And I'm not talking about China, or Russia, or Iran.

Oh. But they are actually trying to influence the vote. The intelligence community has reports on it -- though they've stopped briefing the President because it makes him sad.

I'm talking about a group of corporations, the most powerful corporations in the history of this nation, the most powerful corporations in the history of the world.

News Corp?

I'm talking about big tech. We know who they are. They run the giant digital platforms, the places where Americans communicate and share their opinions.

Wait, the websites that allow the public to criticize you and your lies? You can't possibly mean them.

But those platforms are more than that. They're more than places to talk or buy things, Facebook and Google, Twitter and Instagram and YouTube. These are the platforms that control more and more of our daily lives.

Dude. Come on. They don't control our lives.

And yes, I said, control.

You must think the public is a bunch of weak-minded fools. No one in Silicon Valley "controls" anyone.

These platforms control our social communication, the way that we talk to each other, when and how, where and on what terms. They control what news we read, or even what news we see.

This is literally not true. There are tons of different ways to get news, and the companies you named can't stop any of them. Nor do they.

They control more and more journalism in America, right down to what's in news articles and how the headlines are written. They control how elected officials communicate with their constituents, when they can run advertisements, what their messages can say and can't.

They don't control journalism. They don't control how headlines are written. They don't control how elected officials communicate. You can do all of that without them. And yes, many websites do use their platforms to spread their news, but not all of us do. And, sure, they may limit some advertisements if they're bullshit lies or inciting violence, but that's kind of their right as private companies. Which you should know... because you were the lawyer for Hobby Lobby in bringing their case to the Supreme Court on the very principle that private companies get to decide how to run their own businesses with regard to certain 1st Amendment rights. Or do you not remember that?

Of course, if you're a mendacious demagogue and your only goal is to rile up your constituents with a bogus culture war to make sure you're in the headlines, then I guess maybe this nonsense makes sense.

And they want to control us.

No. They don't. Dude, rather than creating this fake bogeyman, maybe go out to Silicon Valley once and talk to the engineers who hold the internet together with bubblegum and duct tape. They're not competent enough to control anyone. This isn't science fiction. There is no mind control. They're just trying to build useful internet platforms, and you and your friends decided to use those platforms to go fascist. And some of them said "I don't want a part of that." Which, again, is their right.

The big tech platforms relentlessly spy on their customers, you and me. They track us around the web. They monitor our every move online, and even when we're offline. They track our location. And whether we're in a car or riding a bike around the street, they track the websites that we visit. And when they track the things that we buy, they track the videos that we watch, they track what our children are doing, they track everything all with the purpose of getting enough information on each one of us to influence us to shape our preferences, and opinions and viewpoints.

First of all, you can turn off most of that. It's not that hard. If there weren't this pandemic going on, I'd stop by your office and show you how. At the very least, it's not that hard to, like, install Privacy Badger. It's nice. It'll help you.

Also, they're not tracking you to "shape our preferences and opinions and viewpoints." That's what advertisers want to do. And Fox News. You don't seem mad at them.

This is enormous power--unheard of power. And the big tech platforms are intent on using it. They are intent on using it in this election.

This is just silly. If you actually spoke to the people at these companies, they want nothing more than to have nothing to do with this election. Why do you think many of them are saying "no political advertising at all"? Why do you think so many of them are bending over backwards to stay out of anything that even looks remotely like influencing an election? You're literally making this up because you have nothing real to run on and you can only succeed by creating a fake enemy to rail against. You are a little man with no plan and no principles. So you make up enemies and try to turn your constituents against them, because you think that your constituents are rubes who you can lie to and they'll believe you.

Let's just cut to the chase. The big tech platforms are owned and operated by woke capitalists.

Um. What? "Woke capitalists"? I thought Republicans were complaining that the "woke" people were socialists? Now they're capitalists? Can you guys keep your ridiculous conspiracy theories straight?

They're leftists. They're liberals. They're not conservatives. They're no friend to conservatives.

Both Facebook and Google have policy shops with well known Republican officials in senior positions. Come on. Multiple reports have shown that Facebook favors right wing nonsense, and bends over backwards not to pull it down, even when it's utter bullshit.

They fervently oppose the election of Donald Trump and other conservatives in 2016. They fervently oppose it this year.

Well, first, Donald Trump is a moron who is incompetent. Anyone can see that. But, there is no evidence that any internet company is doing jack shit to stop him. Again, Facebook has bent over backwards to help him and his campaign out. Twitter remains his primary tool of communication. If Silicon Valley companies did 1/10 of the shit you accuse them of, that would be a huge deal. But they're not. You're just lying.

And now they're trying to use their power to shape the outcome of an election.

No. They're literally not doing this. And employees at Facebook are quitting because they're letting the President spew hatred and lies and incite violence on the platform.

For months, the tech platforms have been engaging in escalating acts of censorship, political censorship, aimed at conservatives.

We've gone over this a million times. There remains no evidence at all that they're targeting conservatives. They are targeting insane, outlandish lies, hate speech, efforts to incite violence, and such. If your party is doing more of that, well, perhaps that's on you.

They've censored the President of the United States.

No, they haven't. Twitter fact checked him. You know, adding more speech. Didn't you used to be one of those "the answer to bad speech is more speech" kind of Republicans?

They have banned pro life groups from their sites.

No they haven't. I just searched and there are literally dozens of pro-life Facebook groups with thousands of members. You're just making shit up that's easy to fact check.

They have tried to silence independent conservative journalists like the Federalist. Now this censorship is never against liberals, notice. Now Joe Biden isn't censored. Pro choice groups aren't discriminated against. Liberal new sites, they don't get threatened and bullied and shut out. Now big tech targets conservatives for censorship for a simple reason. They don't like conservatives, they don't agree with conservatives. They don't want to see conservatives get elected.

Why do you chuckleheads always go back to that bogus Federalist story? We faced the same thing (and currently have no ads on our site). Did we lose our ads because of anti-tech news bias? Slate -- which is generally considered left-leaning -- also faced the same demonetization threats. In that link, it's noted that Buzzfeed got the same notices that we, Slate, and The Federalist got.

But notice that only the Federalist is crying victim. Only you are claiming that it was because of anti-conservative bias -- rather than the reality. The same reason that The Federalist, Techdirt, Slate, and Buzzfeed all got these notices: some of our content tripped a wacky AdSense algorithm. It's got nothing to do with bias, and you know it. You know it because I sent that information to you. Once again, it happened because these companies and their algorithms aren't the all-controlling puppetmasters you claim. They barely work most of the time. And they spew out all sorts of false claims. But it's only you and your whiny friends who take it so personally.

And then you go to the floor of the Senate and you lie about it. Because you think the public are idiots.

And here's the thing. If they are allowed to use their power in this way, if they are permitted to leverage their control over news and information and data to silence the voices of conservatives, then we will be turning control of our government over to them, we will be conceding control of our elections over to them, control of the nation to them. And let's just be clear, no corporation should run America. No set of corporate overlords should substitute their judgment for the judgment of we the people. No woke capitalist should be able to shape the outcome of an election by silencing speech. And that's why we have to act and act today.

Boy are you going to be upset when you learn about Fox News.

There is a simple, straightforward solution to the censorship power of these digital platforms. Let those who have been censored claim their rights, let them sue. Let them go to court. Let them challenge the decision to the tech platforms and have their day before the bar of the law. Now right now federal law prohibits this. It prevents Americans from challenging the tech platforms and their censorship. It prevents Americans from challenging just about anything that the tech companies do. That should change.

You're a constitutional lawyer. How do you not know the difference between Section 230 of the Communications Act and the 1st Amendment? Because it's the 1st Amendment that lets internet companies decide what content is published on their websites and what is not. What would "their day before the bar of the law" even look like? All that would happen is judges laughing at every one of the sad sacks you've convinced to go to court, telling them they have no right to force anyone to host their speech. And just like your client, Hobby Lobby, was free to screw over its employees thanks to your diligent lawyering, leading plenty of people to take their business elsewhere, if people don't like how the big internet companies moderate, they too can go elsewhere. Aren't you on Parler yet, Josh?

And that is why today, Mr. President, I urge this body to adopt my legislation, which I proudly have introduced along with Senator Rubio and Senator Cotton, Senator Braun, and Senator Loeffler, to give every American who is unfairly censored the right to have his or her day in court, the right to stand and be heard, the right to fairness and due process of law. This is a stand we must take in defense of free speech, in defense of our elections. But more importantly, above all, in defense of our democracy, and the rule of we the people.

Again, the 1st Amendment says you're a lying fool, Josh. You're not defending free speech or elections. You're making a mockery of them.

At this point, an exasperated Senator Ron Wyden pops up to object to Hawley moving forward with this nonsensical, unconstitutional bill, pointing out that Hawley did not do the usual procedure of alerting others that he was going to make a request for unanimous consent to move his bill forward, and that he picked a time (perhaps deliberately) when Wyden was testifying before the Ways and Means Committee to make this move. So, Wyden had to respond, off the cuff, without a prepared speech, to highlight that basically everything Hawley said was utter and complete nonsense. Wyden seems rightfully pissed off.

I just want to say to the Senate, in my time in this body, this is one of the most stunning abuses of power I have seen in my time in public service. I think my colleague knows that I was sitting until five minutes ago in the Ways and Means Committee, where I was invited to testify about Social Security.

And I was given a message that the Senator from Missouri was going to stand up and basically try to throw in the garbage can, a bipartisan law that I and a conservative Republican, former congressman Chris Cox, well known to conservatives, wrote, because as we thought about the formulation of technology policy, our big concern was for the little guy, the person who didn't have power, the person who didn't have clout. We were picking up accounts that if they were just trying to come out with their invention -- might be something they put up on a website or a blog -- they could be held personally liable, personally liable for something that was posted on their site that they'd have no idea of.

So we said, we can't do that to the little guy. We can't strip them of their voice.

And by the way, my concern about the little guy that led to the passage of this law is something I continue to focus on today. This law is hugely important to movements like "Me Too," and "Black Lives Matter." Because it gives Americans the opportunity to see the messages that they want to get out. We've all seen the videos, frankly, the establishment media, I don't think would have even run a lot of it, because they would be sued. So the original interest in this was making sure that the little guy had a chance to be heard. That's the interest today. That's what the Senator from Missouri wants to throw in the trash can. So that's number one.

Number two, the effect of what the Senator from Missouri wants to do. And for colleagues who've just come in, I just learned about this five minutes before the Senator from Missouri went on to the floor. The net effect of this is that Donald Trump can force social media -- and he's already working the refs -- to print his lies. The thing that concerned me right at the outset was the lies about vote by mail. He wanted to force Twitter to print his lies about vote by mail. That too, is something that we sought to constrain in the bipartisan legislation.

In other words, if your complaint is about unfairly influencing elections, Josh, maybe look at your own President first.

Now, many people think it's the 26 words that really began a policy of empowering the little guy to be heard.

He then goes on to push his own bill, the Mind Your Own Business Act, which would actually propose jail time for big company CEOs who violate certain privacy principles. He points out that if Hawley actually cared about taking on big tech, he'd sign onto that bill. But Hawley's not interested in really taking on big tech. He's trying to build up an evil bogeyman that he can scare people with. He's demagoguing.

And, of course, he couldn't let Wyden just make him look like a fool, so Hawley got up to spew more lies:

I will just say, Mr. President, that my friend, the Democratic Senator describes a world that doesn't exist. He says section 230 protects the little guy. Section 230 protects the most powerful corporations in the history of the world. Google and Facebook aren't the little guy. Instagram and Twitter aren't the little guy, you know, who is left vulnerable by those mega corporations.

This is where Hawley is completely misrepresenting things again. What Section 230 does is allow these sites, and every other website that hosts content, to exist without fear of being bogged down by overly burdensome litigation. And it does help the little guy. Google, Facebook, Twitter, Instagram, YouTube have all enabled many, many, many "little guys" to speak their mind and get attention, and build huge followings and businesses. Including tons of conservatives. And it's enabled there to be new companies and new entrants. And individual sites. It absolutely protects the little guy.

If Hawley's unconstitutional bill became law, it would hurt the little guy. The platforms would be much less willing to host user content, and it would be that much harder for small sites and individual websites to get off the ground. But Hawley needs an enemy, and thus he has to lie and lie and lie again.

The people who don't have a voice that people who when they get deplatformed don't have an option. If you're silenced by Google, or Facebook or Twitter, what's your option? None. Nothing, you can't be heard. You can't go to court, you can't do anything.

Oh, come on. There are so many other platforms -- even those catering to fascists and Trumpists. Free market? Competition? Didn't Republicans once support that kind of thing? You can set up your own website. You're not "silenced" by Google or Facebook or Twitter. You have other options. And it's not like those companies are quick to shut down anyone's accounts anyway. In most cases, you have to have done something really, really egregious.

I mean, I see Josh Hawley posting all sorts of bullshit on Twitter daily, and his account remains.

Every American should have the right if they're unfairly discriminated against because of their political views to at least be heard in court. Now, section 230, as it exists today, and as it's currently being applied, it protects the most powerful corporations. It protects and has protected human traffickers. It protects some of the worst abuses of free speech in our society. And that's why Mr. President, I will continue to fight to have it reformed to continue to fight to give the American people a voice.

What's with the weird aside about human traffickers? That's simply not true and again Hawley knows it. Trafficking violates federal law and nothing in Section 230 blocks the DOJ from going after federal criminal violations. On top of that, Hawley was a key supporter of FOSTA which already carved more issues related to trafficking out of Section 230 (and it's been a total disaster). And, again, it's not clear what you think anyone will get from "being heard in court" other than judges laughing at them that they think they have a god-given right to force private companies to host their racist nonsense.

At this point, you can tell that Senator Wyden was pissed off about all these lies. He got permission to speak again, and it's worth watching the video of this part, because he starts yelling, righteously, about just how full of shit Hawley is. You can tell that Wyden is pissed off both by Hawley's lies and by the procedural gamesmanship of trying to move the bill forward without alerting him to the plan.

Once again, the Senator from Missouri is getting it all wrong. He talked again about how this law, this bipartisan law, is basically not for the little guy, but he's taking on the big guys. Well, the reason that's factually wrong is that on this floor, a previous effort was made to deal with sex trafficking. It was called SESTA and FOSTA. And the desire was we're all against this horrible smut online, we're all against it. The desire was to block it. And as the debate went forward, I and others said you're not going to be able to block it. You're going to be able to block Backpage, like eventually happened under existing law, which I supported, not under this new thing. Well guess who supported this SESTA FOSTA deal that is pretty much like the Senator from Missouri [wants]. IT WAS FACEBOOK. Facebook supported the last effort. Last time I looked, they're a pretty big company. So the Senator from Missouri is just getting it all wrong here.

I've seen Senator Wyden speak many, many times. I've never, ever seen him this angry.

... what we've always been about is the little guy and you see it every day. With MeToo, Black Lives Matter and so many voices from the community, because of this law can be heard. Do not--not just on this because I have objected so it can't go forward--do not accept this idea that this is somehow the path to solving problems and communications because under SESTA FOSTA, which is really the kind of model the Senator from Missouri is talking about, the only thing that happened was the horrendous people involved in sex trafficking went to the dark web. And so now we have an even bigger problem.

He then goes on to repeat his procedural concerns, that if he were trying to advance a bill that he knew Hawley had objections to, he would have notified him ahead of time, not tried to sneak it through while Hawley was otherwise occupied.

So, let's just be clear on this. Josh Hawley is lying. He's lying to the American public. He's making things up because the only thing he knows how to do is to demonize. He knows how to create enemies who he can then smack down. And he's decided that "big tech" is the enemy. The only problem is that it's not actually true. So he has to lie and lie and lie again. This won't be the last time, but Josh Hawley has shown his true colors once again.


Posted on Techdirt - 24 September 2020 @ 2:35pm

If Patents Are So Important To Innovation, Why Do Innovative Companies Keep Opening Up Their Patents Rather Than Enforcing Them?

from the questions-to-ponder... dept

To hear many politicians (and, tragically, many academics) tell the story, patents and patent policy are keys to innovation. Indeed, many studies trying to measure innovation use the number of patents as a proxy. For years, we've argued that there is little evidence that patents are in any way correlated with innovation. Indeed, in practice, we often see patents get in the way of innovation, rather than being a sign of it. If anything, an influx of patents seems to indicate a decline in innovation because, as the saying goes, smart companies innovate while failed companies litigate. Litigating patents tends to happen when a more established company is no longer able to compete through innovation, and has to bring in the courts to block more nimble competitors.

Indeed, over and over again we seem to see the most innovative companies eschewing the anti-competitive powers that patents give them. I was reminded of this recently with the announcement that payments company Square had agreed to put all of its crypto patents into a new non-profit called the Crypto Open Patent Alliance to help fight off the unfortunate number of crypto patent trolls that are showing up.

Of course, we see this throughout the companies generally considered to be the most innovative. A decade ago, Twitter came up with a very clever Innovator's Patent Agreement, which effectively would block patent trolls from ever being able to use Twitter's patents, should they somehow fall into trollish hands. A bunch of other top internet companies including Google, Dropbox, Asana, and Newegg launched the License on Transfer network, as a basic poison pill to, again, stop patent trolls.

And, most famously, Elon Musk flat out gave away Tesla's patents and encouraged anyone else to use them to compete with Tesla, license-free.

If patents really were so vital to innovation, why would all of these innovative companies be so quick to give them up? And why is it so incredibly rare that any of them assert patents against competitors? Instead, much of the patent litigation we see is brought against those innovative companies by a variety of patent trolls (frequently lawyers who never innovated at all) or by also-ran companies that may have been innovative in the past but have long since seen their innovative days in the rearview mirror.

It would be nice if policymakers, the media, and academics finally started recognizing that patents are not just a bad proxy for actual innovation, but often antithetical to innovation -- and we can see all the evidence we need for that in the fact that the most innovative companies are "devaluing" their own patents to improve the ecosystem, rather than enforcing them.


Posted on Techdirt - 24 September 2020 @ 9:28am

Justice Department Releases Its Dangerous & Unconstitutional Plan To Revise Section 230

from the it-never-ends dept

Every day it's something new. The latest is that the Justice Department has come out with its official proposal to revise Section 230. As you may recall, back in February, the DOJ held some hearings about Section 230, followed by announcing some vague and contradictory guidelines for reform in early March.

Apparently between March and now there's been nothing more important for the Justice Department to be working on, because it's now blasted out its full unconstitutional proposal for reforming Section 230 and it's like a greatest hits of bad ideas. You can look at the redline version of the law itself, but the DOJ's announcement summarizes the revisions in two giant buckets: (1) "promoting transparency and open discourse" and (2) "addressing illicit activity online." It will not surprise you that the actual recommendations would do neither of these things. There's a lot in here and I'm honestly just too tired of going through and debunking all the various bad ideas in these proposals, so I'll just highlight a few egregious parts.

First off, like the recent "Online Freedom and Viewpoint Diversity Act" and the soon to be marked up "Online Content Policy Modernization Act" from Senator Lindsey Graham, the DOJ's bill would remove the term "otherwise objectionable" (get your t-shirts while they're still relevant!), and simply create a longer list of why a website could moderate content. It would also require an "objectively reasonable belief" that the content falls into one of those categories to qualify. The new list of acceptable reasons to moderate content to keep your immunity:

any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user has an objectively reasonable belief is obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful, whether or not such material is constitutionally protected

This is pretty similar to the two bills listed above. The only real difference is that this one adds in the promotion of "violent extremism" -- which you can be sure this DOJ will use to try to force social media companies to take down Antifa and BLM content.

But, of course, as we discussed with the previous bills, this is clearly unconstitutional. It is a form of content regulation that is not content neutral, and that's not allowed. Also, note what kind of content is not included here: racist, homophobic, and hateful content would not be covered in many cases. Nor would spam. Yes, in some cases it could be argued that such content is "harassing," or perhaps it might qualify under some of the other categories, but moderating most of it would leave a site without 230 protections. Websites would still have 1st Amendment protections, but fighting that legal battle would be hugely expensive and destructive for most websites -- meaning that many will not fight at all.

The bill would also expand 230's exemptions so that federal civil enforcement actions could proceed (as per the FTC's wish). It also includes a bunch of other carve-outs that I'm too tired to go through, but I will note that it would appear to allow state Attorneys General to bring lawsuits against websites that were previously barred by 230. This would be allowed in cases where a site had knowledge of the dissemination of content that would violate federal criminal law, was notified of this content, and then failed to remove it or failed to preserve evidence of it.

This section is at least worded slightly more carefully than earlier proposals, but would still lead to the risk of significant censorship at the behest of Attorneys General who criticize a website's content moderation practices -- which again would likely make it unconstitutional under the 1st Amendment.

It would then throw in a long list of new laws that were exempt from 230, basically taking the FOSTA model of saying that 230 no longer applied to sex trafficking, and saying "ditto for anti-terrorism laws, child sex abuse laws, cyberstalking, and antitrust." There are, of course, significant problems with each of these. We're already seeing how much harm FOSTA has caused with no indication that it helped stop any sex trafficking, and now the DOJ wants to just expand that treatment to a bunch of other laws, just because.

The anti-terrorism one should be particularly concerning. We've written about a whole bunch of cases involving people who sued social media companies for "material support for terrorism" after a loved one was killed by terrorists. The arguments are, roughly, that because their family member was killed by a terrorist, and because some terrorist-connected individuals used social media, clearly the social media companies are liable for their family member's death.

Courts have, rightly, been tossing these cases out on 230 grounds. But if the DOJ got its way, that would no longer be possible, and we'd likely see a ton of frivolous litigation in response.

Another change is an attempt to remove 230 protections for sites that fact check the President. This is not how it's framed of course, but it's pretty obvious why Bill Barr wanted this in there. Existing 230 says that you can be liable for content if you were "responsible, in whole or in part, for the creation or development" of the information. The new bill would add to that:

Being responsible in whole or in part for the creation or development of information includes, but is not limited to, instances in which a person or entity solicits, comments upon, funds, or affirmatively and substantively contributes to, modifies, or alters information provided by another person or entity.

Basically, fact check someone, and you can lose your 230 protections. Of course, again, this is unconstitutional, as it's an attempt to suppress the very thing that 230 (and the 1st Amendment) were designed to encourage: more open discussion. Indeed, for Bill Barr -- who has whined about "cancel culture" -- to include this in there is deeply ironic. This kind of thing will decrease incentives to add commentary or fact checks, thus suppressing speech.

Finally, the new bill would have a whole section to define what is meant by "good faith" in content moderation, which is basically that you have to clearly delineate in your policies what is allowed and what is not, and your moderation must match that. This is, of course, impossible. It is written by people who have never had to moderate content at all. It is written by people who don't understand how content moderation is not black and white, but often vast areas of gray where judgment calls need to be made. It is written by people, in bad faith, assuming that all users of a website are acting in good faith. So many of these attempts to reform Section 230 refuse to take into account that people will seek to game the system. And restricting sites' ability to stop those gaming the system is a recipe for disaster.

But, hey, this is Bill Barr's DOJ and Donald Trump's White House. A policy proposal that is a recipe for disaster, as well as unconstitutional, seems to be par for the course.


Posted on Techdirt - 23 September 2020 @ 12:12pm

California Cities Voting On Ridiculous Resolution Asking Congress For Section 230 Reform... Because Of Violence At Protests?

from the what-the-even-fuck-is-this? dept

I attended an Internet Archive event (virtually, of course) yesterday, and afterwards one of the attendees alerted me to yet another nefarious attack on Section 230 based on out-and-out lies. Apparently the League of California Cities has been going around getting various California cities to vote on a completely misleading and bogus motion pushing for Congress to reform Section 230 of the Communications Decency Act. It was apparently put up first by the city of Cerritos, which is part of Los Angeles County (I'm almost surprised it didn't start in Hollywood, though it wouldn't shock me to find out that the impetus behind it came from Hollywood people...). Basically, cities are voting on whether or not the League of California Cities should officially call on Congress to amend Section 230 in drastic ways... all because of some violence at recent protests about police brutality. The process, apparently, is that one city (in this case Cerritos) makes the proposal and gets a bunch of other cities to sign on, and then various other cities vote on whether it becomes official League policy (after which they'd send a letter to Congress, which Congress would probably ignore).

And, if you just read the nonsense that the originating proposal put out there, and had no idea how Section 230, the internet, the 1st Amendment or the 4th Amendment works, it might sound like a good idea. Except that what the proposal says is utter nonsense, disconnected from reality.

This resolution states that the League of California Cities should urge Congress to amend Section 230 of the federal Communications Decency Act of 1996 (CDA) to limit the immunity provided to online platforms where their forums enable criminal activity to be promoted.

Ultimately, the policy objectives proposed under this resolution, if enacted, would incentivize social media companies to establish and implement a reasonable program to identify and remove content that solicits criminal activity.

Except that, first, Section 230 already says there's no immunity for platforms if they enable federal criminal activity, so this is a made-up concern. Second, if you changed 230 in the manner they want, they're simply wrong that it "would incentivize social media companies to establish and implement a reasonable program to identify and remove content that solicits criminal activity." Because every major social media platform already has such a program. The problem is not that they don't have incentives. The problem is that not everyone will ever agree on what the "right" moderation is.

Incredibly, the proposal handwaves away the idea that putting more liability on internet websites might lead to more censorship:

While there is certainly an argument to substantiate concerns around censorship, the use of social media as a tool for organizing violence is equally disturbing.

Tomato, tomahto.

Also, the proposal seems to blame violence that broke out at various protests this summer... on social media, claiming that's why 230 must change.

Although the majority of protests were peaceful, some demonstrations in cities escalated into riots, looting, and street skirmishes with police. While much of the nation’s focus has been on addressing police misconduct, police brutality, and systemic racism, some have used demonstrators’ peaceful protests on these topics as opportunities to loot and/or vandalize businesses, almost exclusively under the guise of the “Black Lives Matter” movement. It has been uncovered that these “flash robs” were coordinated through the use of social media. The spontaneity and speed of the attacks enabled by social media make it challenging for the police to stop these criminal events as they are occurring, let alone prevent them from commencing altogether.

As these events started occurring across the country, investigators quickly began combing through Facebook, Twitter, and Instagram seeking to identify potentially violent extremists, looters, and vandals and finding ways to charge them after — and in some cases before — they sow chaos. While this technique has alarmed civil liberties advocates, who argue the strategy could negatively impact online speech, law enforcement officials claim it aligns with investigation strategies employed in the past.

So, let me get this straight. First, we should blame social media -- and not police brutality and militarization -- for the cases where violence has broken out at a few protests. And the way to deal with violence organized on social media is to... hold the social media platforms liable rather than those that engaged in or encouraged the violence? Are these people for real?

Also, the full proposal goes way beyond what is described regarding violence at protests. This is what it says:

  1. Online platforms must establish and implement a reasonable program to identify and take down content which solicits criminal activity; and
  2. Online platforms must provide to law enforcement information which will assist in the identification and apprehension of persons who use the services of the platform to solicit and to engage in criminal activity; and
  3. An online platform that willfully or negligently fails in either of these duties is not immune from enforcement of state and local laws which impose criminal or civil liability for such failure.

That would be a massive and problematic change to Section 230. First, as it stands, websites already have tremendous incentive to identify and take down content which solicits criminal activity -- and many of them try to do exactly that. Changing 230 will not change that -- but it will lead to fewer places for people to communicate and put tremendous limits on the ability to speak freely online.

The second prong has nothing to do with Section 230 and raises significant 4th Amendment concerns about when a website should have to hand over private information on someone without any warrant or judicial review. That should be frightening to everyone.

This entire proposal is horrifically authoritarian, and is questionable on both 1st and 4th Amendment grounds, but a bunch of cities are signing onto it because the proposal is extremely misleading about how the internet works, how Section 230 works, and what this all means. While I'm not sure that Congress really gives a shit what the League of California Cities has to say about Section 230, it's yet another way in which people from all over the place are attacking the law that made the internet, because they're mad that people they don't like are doing stuff they don't like.

Thankfully, at least one California city has rejected the proposal. Last night the city of Hayward rejected the proposal, despite it getting support from the local police chief and the city attorney who, I'm told, used the totally bogus "fire in a crowded theater" line, suggesting that was the law of the land (it's not) and other wrong and misleading cliches, including "freedom of speech isn't free." Thankfully, some on the city council (and the mayor) seemed to recognize that this was a dangerous, half-baked proposal and voted it down. I hope other cities do the same.


Posted on Techdirt - 23 September 2020 @ 9:33am

China Calls TikTok Deal 'Extortion'; Says It Will Not Approve

from the back-to-the-drawing-board-folks dept

As was hinted at in our previous post on China's response to Trump forcing TikTok to... not actually be sold to Oracle, but to force TikTok into signing a hosting deal to store data in the US, it appears that China is going to do some posturing of its own. The Chinese government has said that it will block the deal which it calls "extortion."

And, to be fair, the Chinese government has a point. It was extortionate. Trump told ByteDance it had to sell or TikTok would be blocked in the US. And while ByteDance didn't actually sell TikTok, it was forced at gunpoint into a deal it appears it would not have made otherwise. And, of course, China holds all the leverage here because Trump is a ridiculously bad dealmaker. His "plan" flopped in that he didn't force a sale, and then to save face (and to help out one of his big donors) he gave the thumbs up to the Oracle non-purchase/hosting contract. It was a weak move, and everyone other than the dumbest of Trump's fans knows it's a weak move by a President who swings the executive power bat like a toddler who just learned how to smash things.

So, of course China is going to move for a better deal. In the Chinese state-controlled English-language outlet China Daily, the Chinese government goes in for the kill.

What the United States has done to TikTok is almost the same as a gangster forcing an unreasonable and unfair business deal on a legitimate company.

It (correctly) calls out that the "national security" excuse Trump used is obvious hogwash, a fig leaf for his real motives. Of course, it claims that it was really about the US wanting to kill foreign competition, when that's unlikely to be the case. It's got more to do with various culture wars the President feels like fighting, rather than actually leading in the midst of a pandemic.

The editorial claims that even with this half-assed Oracle deal, it's a move towards the US using mafioso techniques to gain control over TikTok, and says that China has no reason to approve the deal on its end.

It is not the first time the US has played such dirty tricks to bully foreign companies in order to either destroy them or take them over.

China has no reason to give the green light to such a deal, which is dirty and unfair and based on bullying and extortion. If the US gets its way, it will continue to do the same with other foreign companies. Giving in to the unreasonable demands of the US would mean the doom of the Chinese company ByteDance.

Again, this is almost certainly just more posturing. And, ridiculously, what it's likely to come down to is some sort of stupid diplomatic discussion between the US State Department and counterparts in China to come up with something that will make the deal work -- which means it will almost certainly end up an even worse deal than it currently is, with no redeeming points whatsoever, and what little Trump "got" out of the deal will not just be whittled down to nothing, but probably less than nothing.


Posted on Free Speech - 23 September 2020 @ 2:59am

Trump Still Hates The 1st Amendment: Meeting With State Attorneys General To Tell Them To Investigate Internet Companies For Bias

from the oh-come-on dept

It never, ever ends. President Trump is continuing his war on Section 230 and the right for the open internet to exist. The latest is that he's meeting with various state Attorneys General to encourage them to bring investigations against internet websites over "anti-conservative bias" despite the fact that no one has shown any actual evidence of anti-conservative bias beyond assholes, trolls, and literal Nazis upset that they got banned.

The Trump administration is expected to urge Republican state attorneys general on Wednesday to investigate social-media sites over allegations they censor conservatives online, escalating the White House’s war with Silicon Valley at a time when tech giants are increasingly taking action against the president’s most controversial posts.

A different report notes that Trump and the DOJ are also planning to talk with them about "revising" Section 230:

U.S. President Donald Trump plans to meet on Wednesday with a group of Republican state attorneys general about revising a key law that shields social media companies from liability for content posted by their users and allows them to remove lawful but objectionable posts.

“Online censorship goes far beyond the issue of free speech, it’s also one of protecting consumers and ensuring they are informed of their rights and resources to fight back under the law,” White House spokesman Judd Deere said. “State attorneys general are on the front lines of this issue and President Trump wants to hear their perspectives.”

Of course, the State AGs would need a big change to Section 230 to be able to go after social media for bias -- but they'd need an even bigger change to the 1st Amendment, which allows companies to choose which content to host -- and what content not to host. If Trump and the DOJ think that law enforcement can investigate social media for anti-conservative bias, does that mean he'd be okay if AGs in other states investigate Fox News for bias? Or Breitbart? Of course not. The 1st Amendment doesn't allow it, and so we get another stupid culture war from the President shitting on the Constitution he swore to uphold and protect.

In reality, this is all just more posturing. State AGs (of both parties) have long hated Section 230 because it removes a tool for their ridiculous grandstanding to help them get elected to higher office (look at how many state AGs go on to be Governor or Senator). For the better part of a decade, State AGs have been asking to change 230 because, while Section 230 has an exemption for federal criminal law, it does not for state criminal law. That means State AGs are less able to go after social media companies. But, boy do they want to.

Pretty much every internet company, upon getting popular, has gone through an attack from State AGs -- attacks based not on any legitimate purpose, but on getting the state AGs in the headlines to claim that they were "protecting the citizens of our fine state" or some other nonsense. Almost exactly 10 years ago, we highlighted what then-Topix CEO Chris Tolles went through in dealing with state AGs. They put out a press release threatening Topix over how the company moderated user comments.

Tolles did what he thought was the right thing. He sat down with the various AGs who had signed onto the press release, and explained to them the ins-and-outs of content moderation and why Topix made the choices it did. All of those choices, of course, were protected by Section 230 and the 1st Amendment. But, the AGs did not care. They took what Tolles told them, and put out an even more ridiculous press release, misrepresenting nearly everything he told them, and again "threatening" to do something.

So, after opening the kimono and giving these guys a whole lot of info on how we ran things, how big we were and that we dedicated 20% of our staff on these issues, what was the response. (You could probably see this one coming.)

That's right. Another press release. This time from 23 states' Attorney's General.

This pile-on took much of what we had told them, and turned it against us. We had mentioned that we required three separate people to flag something before we would take action (mainly to prevent individuals from easily spiking things that they didn't like). That was called out as a particular sin to be cleansed from our site. They also asked us to drop the priority review program in its entirety, drop the time it takes us to review posts from 7 days to 3 and "immediately revamp our AI technology to block more violative posts" amongst other things.

AGs have been doing this for years (across both parties). The attack on Topix was done by then Kentucky AG, Jack Conway. We've covered similar attacks from then Connecticut AG Richard Blumenthal, then South Carolina AG Henry McMaster, then New York AG Andrew Cuomo, then California AG Kamala Harris, then Mississippi AG Jim Hood, current Louisiana AG Jeff Landry and many, many more.

So of course AGs (of either party) are going to tell the President they need Section 230 reformed -- and that plays straight into the President's dimwitted playbook of wanting to harass companies he feels (wrongly) are "against" him. The reform the AGs want is to remove the state criminal law exemption, which will give them significantly more power to attack internet companies and demand ridiculous concessions that don't serve the public benefit, but do serve to keep the AGs in the headlines for "taking on big tech" and "protecting the children" and other similar nonsense. The reform the President wants is anything that lets him drum up a new bullshit culture war and play the crybaby victim about how big tech is "against" him, even as Facebook and Twitter have been two of the biggest tools to make his short political career viable.

It's nonsense, but a politically convenient nonsense for the President right now...


Posted on Techdirt - 22 September 2020 @ 3:32pm

Senator Lindsey Graham Must Be Desperate For Donations; Announces Terrible Bill That Mashes Up Bad 230 Reform With Bad Copyright Reform

from the it's-bad,-get-rid-of-it dept

Senator Lindsey Graham is in a tight re-election campaign that he might just lose. And he's doing what politicians desperate for campaign cash tend to do: releasing a lot of absolutely batshit crazy bills that will pressure big donors to donate to him to either support the bill, or to get him not to move forward on it. It's corrupt as hell, but is standard practice. And the best of these kinds of bills are ones that pit two large industries with lots of lobbyists and cash to throw around against one another. For many years the favorite such bill for this was a bill about performance rights royalties for radio play. This would pit radio broadcasters against the music industry, and the cash would flow. Every two years, as the election was coming, such a bill would be released that was unlikely to go anywhere, but the cash would flow in.

More recently, the goal has been to target the big internet companies. And, boy, Lindsey Graham's campaign must be struggling, because he's decided to take two horrible, awful bills that would harm the internet and mash them together into a single bill that is set for markup by the Senate Judiciary Committee next week. This new bill, entitled the "Online Content Policy Modernization Act" simply combines the terrible and unconstitutional CASE Act (to create a quasi-judicial court in the Copyright Office to review copyright claims) with some of the recently released (and also horrible and unconstitutional) "Online Freedom and Viewpoint Diversity Act" which would rewrite Section 230 to remove the ability to moderate "otherwise objectionable" content without liability, and would, instead, insert a limited list of what kinds of content could be moderated without liability.

Both of these are bad ideas, but both of them are specific threats to the open internet -- and the kinds of things that Senator Graham knows he can fundraise on. Both bills are garbage, and Senator Graham likely knows this -- but he's not in the Senate to actually legislate. He's there to stay in power, and there's a real chance he might lose this November. So I guess it's time to break out the really stupid bills.


Posted on Techdirt - 22 September 2020 @ 10:44am

Authors Of CDA 230 Do Some Serious 230 Mythbusting In Response To Comments Submitted To The FCC

from the that's-not-how-any-of-this-works-at-all dept

While there were thousands of comments filed to the FCC in response to the NTIA's insanely bad "petition" to have the FCC reinterpret Section 230 in response to an unconstitutional executive order from a President who was upset that Twitter fact checked some of his nonsense tweets, perhaps the comment that matters most is the one submitted last week by the two authors of Section 230, Senator Ron Wyden and former Rep. Chris Cox. Cox and Wyden wrote what became Section 230 back in the 90s, and have spent decades fighting misinformation about it -- and fighting to keep 230 in place.

In the comment they submitted to the FCC, they respond to all the idiotic nonsense that everyone has been submitting. Again, these are the guys who wrote the actual law. They know what it was intended to do, and agree with how it's been used to date. So they go on a systematic debunking journey through the nonsense. First, they respond to comments that say that the FCC can interpret 230. Nope.

Several commenters have repeated the claim in the Petition that “[n]either section 230’s text, nor any speck of legislative history, suggests any congressional intent to preclude the Commission’s implementation.” In fact, however, as the authors of the legislation and the floor managers of the debate on the bill in the House of Representatives, we can assure you the very opposite is true. We and our colleagues in Congress on both sides of the aisle were emphatic that we were not creating new regulatory authority for the FCC or any other independent agency or executive branch department when we enacted Section 230. Not only is this clear from the legislative history, but it is written on the face of the statute. Unlike other provisions in Title II of the Communications Act, Section 230 does not invite agency rulemaking. Indeed, in a provision that judges interpreting the law have noted is “unusual,” Section 230(b) explicitly provides:

It is the policy of the United States … to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.

When this legislation came to the floor of the House of Representatives for debate on August 4, 1995, the two of us, together with members on both sides of the aisle, explained that our purpose was to ensure that the FCC would not have regulatory authority over content on the internet. We and our colleagues, Democrats and Republicans alike, decried the unwelcome proregulatory alternative of giving the FCC responsibility for regulating content on the internet, which at the time was being advanced in separate legislation by Senator James Exon...

The Cox-Wyden bill under consideration was intended as a rebuke to that entire concept.

Then, to prove they're not engaging in revisionist history, they cite the speeches they themselves gave about how the whole point of their bill was to keep the FCC from regulating the internet. From Wyden's floor speech at the time:

[T]he reason that this approach rather than the Senate approach is important is … the speed at which these technologies are advancing [which will] give parents the tools they need, while the Federal Communications Commission is out there cranking out rules about proposed rulemaking programs. Their approach is going to set back the effort to help our families.

Cox's floor speech was even more direct with the question of whether or not their approach was designed to give the FCC power:

Some have suggested, Mr. Chairman, that we take the Federal Communications Commission and turn it into the ‘Federal Computer Commission’ — that we hire even more bureaucrats and more regulators who will attempt, either civilly or criminally, to punish people by catching them in the act of putting something into cyberspace. Frankly, there is just too much going on on the Internet for that to be effective....

[This bill] will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet — that we do not wish to have a ‘Federal Computer Commission’ with an army of bureaucrats regulating the Internet....

The message today should be, from this Congress: we embrace this new technology, we welcome the opportunity for education and political discourse that it offers for all of us. We want to help it along this time by saying Government is going to get out of the way and let parents and individuals control it rather than Government doing that job for us....

If we regulate the Internet at the FCC, that will freeze or at least slow down technology. It will threaten the future of the Internet. That is why it is so important that we not have a ‘Federal Computer Commission’ do that.

Next, the comment responds to the claims that 230 is "outdated." Nope, claim its authors:

Several commenters, including AT&T, assert that Section 230 was conceived as a way to protect an infant industry, and that it was written with the antiquated internet of the 1990s in mind – not the robust, ubiquitous internet we know today. As authors of the statute, we particularly wish to put this urban legend to rest.

Section 230, originally named the Internet Freedom and Family Empowerment Act, H.R. 1978, was designed to address the obviously growing problem of individual web portals being overwhelmed with user-created content. This is not a problem the internet will ever grow out of; as internet usage and content creation continue to grow, the problem grows ever bigger. Far from wishing to offer protection to an infant industry, our legislative aim was to recognize the sheer implausibility of requiring each website to monitor all of the user-created content that crossed its portal each day.

Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let’s make sure that every internet user has the opportunity to exercise their First Amendment rights; and let’s deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.

The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230’s protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today’s environment.

Next up: the all too frequent claim that 230 creates a special rule for the internet that is different from the one for brick-and-mortar businesses, and therefore there's a "double standard." Again, nope.

Several commenters have asserted that Section 230 sets up a “double standard” by treating online businesses differently from “brick-and-mortar” businesses. This represents a fundamental misunderstanding of both the purpose of the law and how it operates in practice.

Section 230 serves to punish the guilty and protect the innocent. Individuals and firms are made fully responsible for their own conduct. Anyone who creates digital content and uploads it to a website is legally liable for what they have done. A website that hosts the content will likewise be liable, if it contributes to the creation or development of that content, in whole or in part. Otherwise, the website will be protected from liability for third-party content.

Section 230 was written to adapt intermediary liability rules long recognized in the analog world for the digital world, applying the wisdom accumulated over decades in legislatures and the courts to the realities of this new technological realm. As authors of the law, we understood what was evident in 1996 and is even more in evidence today: it would be unreasonable for the law to impose on websites a legal duty to monitor all user-created content.

When Section 230 was written, just as now, each of the commercial applications flourishing online had an analog in the offline world, where each had its own attendant legal responsibilities. Newspapers could be liable for defamation. Banks and brokers could be held responsible for failing to know their customers. Advertisers were responsible under the Federal Trade Commission Act and state consumer laws for ensuring their content was not deceptive and unfair. Merchandisers could be held liable for negligence and breach of warranty, and in some cases even subject to strict liability for defective products. In writing Section 230, we—and ultimately the entire Congress—decided that these legal rules should continue to apply on the internet just as in the offline world. Every business, whether operating through its online facility or through a brick-and-mortar facility, would continue to be responsible for all of its legal obligations.

What Section 230 added to the general body of law was the principle that individuals or an entity operating a website should not, in addition to their own legal responsibilities, be required to monitor all of the content created by third parties and thereby become derivatively liable for the illegal acts of others. Congress recognized that to require otherwise would jeopardize the quintessential function of the internet: permitting millions of people around the world to communicate simultaneously and instantaneously, a unique capability that has made the internet “the shining star of the Information Age.” Congress wished to “embrace” and “welcome” this, not only for its commercial potential but also for “the opportunity for education and political discourse that it offers for all of us.” The result is that websites are protected from liability for user-created content, but only to a point: if they are responsible, even in part, for the creation or development of that content, they lose that protection.

The fact that Section 230 established the legal framework for assessing liability in circumstances unique to the internet does not mean that either this framework or the preexisting legal rules do not apply equally to all online and offline businesses. Every business continues to bear the same legal responsibilities when operating in the offline world, and every business is bound by the same statutorily-defined responsibilities set out in Section 230 when operating in the e-commerce realm.

Then there's the question of whether the FCC can mandate disclosure and reporting requirements. As Cox and Wyden note, this argument -- pushed strongly by AT&T and the NTIA -- "borders on the absurd."

The Petition asks the FCC to interpret Section 230 as if it contained explicit requirements mandating terms of service, content moderation policies, due process notice and hearings in which content creators could dispute moderation decisions, and public disclosures concerning these and other matters. The Petition further asks that the FCC impose these specific requirements by rule. Multiple commenters, including AT&T, have endorsed this aspect of the NTIA proposal.

The Petition clearly states NTIA’s understanding that Congress, with “strong bi-partisan support,” intended Section 230 to be “a non-regulatory approach.” In this they are correct. As outlined in Section II above, the legislative history clearly demonstrates that we and our colleagues in Congress intended to keep the FCC and other regulators out of this area. This is reflected in the language of Section 230 itself. Both of us, as the authors of the legislation, made ourselves abundantly clear on this point when the law was being debated.

This fact—and NTIA’s admission of it—makes it all the more illogical for their Petition to ask the Commission to interpret Section 230 as statutory authorization for the FCC to regulate the very subjects that Section 230 itself covers, and which Congress wanted the Commission to stay out of. It surpasses illogic, and borders on the absurd, for the Petition to ask the FCC to use authority that Section 230 clearly does not grant it, in order to divine from the text of the statute explicit duties and burdens on websites that Section 230 itself clearly does not impose.

As Cox and Wyden note, any such interpretation would clearly require new legislation and could not be created out of whole cloth from the mind of an angry President and clueless NTIA staffers with grudges about Section 230.

All of this would require new federal legislation. None of it appears in Section 230, either in the text of the law that we can all read (and that the two of us wrote), or even in the invisible ink which NTIA must believe only it can read.

I get the feeling that Cox and Wyden do not think highly of the NTIA petition.

As for those who commented suggesting that the FCC could interpret Section 230 to include a "negligence" standard, again, this is not how any of this works:

Several commenters, including Digital Frontiers Advocacy, have urged grafting onto Section 230 a requirement, derived from negligence law, upon which existing protections for content moderation would be conditioned. These requirements would add to Section 230 a “duty of care” or a “reasonableness” standard that cannot be found in the statute. As one example, the Petition (which is generically endorsed in its entirety by many individual commenters) would have the FCC require that content moderation decisions be “objectively reasonable,” as compared to the clear language of Section 230, which provides that the decision is to be that of “the provider or user.”

As the authors of this law, and leading participants in the legislative process that led to its enactment in 1996, we can assure the Commission that the reason you do not see any such requirement on the face of the statute is that we did not intend to put one there.

The proposed introduction of subjective negligence concepts would effectively make every complaint concerning a website’s content moderation into a question of fact. Since such factual disputes can only be resolved after evidentiary discovery (depositions of witnesses, written interrogatories, subpoenas of documents, and so forth), no longer could a website prove itself eligible for dismissal of a case at an early stage.

We intended to spare websites the death from a thousand paper cuts that would be the result if every user, merely by filing a complaint about a content moderation decision, could set in motion a multi-year lawsuit. We therefore wrote Section 230 with an objective standard: was the allegedly illegal material created or developed—in whole or in part—by the website itself? If the complaint adequately alleges this, then a lawsuit seeking to hold the website liable as a publisher of the material can proceed; otherwise it cannot.

And if you think Cox and Wyden are done exploring just how absurdly stupid this process has been, you haven't prepared yourself for the next section, in which they respond to the many ridiculous comments suggesting 230 enables the FCC to enforce "neutrality" on internet websites:

The Claremont Institute and scores of individual commenters have complained that particular websites are not politically neutral, and they demand that Section 230’s protection from liability for content created by others be conditioned on proof that a website is in fact politically neutral in the content that it hosts, and in its moderation decisions.

There are three points that must be made in reply. The first is that Section 230 does not require political neutrality. Claiming to “interpret” Section 230 to require political neutrality, or to condition its Good Samaritan protections on political neutrality, would erase the law we wrote and substitute a completely different one, with opposite effect. The second is that any governmental attempt to enforce political neutrality on websites would be hopelessly subjective, complicated, burdensome, and unworkable. The third is that any such legislation or regulation intended to override a website’s moderation decisions would amount to compelling speech, in violation of the First Amendment....

They respond to every idiot who misinterprets the line in the Findings part of Section 230 about "diversity of political discourse" by saying "we meant lots of different sites, not that every site has to host all your nonsense."

Section 230 itself states the congressional purpose of ensuring that the internet remains “a global forum for a true diversity of political discourse.” In our view as the law’s authors, this requires that government allow a thousand flowers to bloom—not that a single website has to represent every conceivable point of view. The reason that Section 230 does not require political neutrality, and was never intended to do so, is that it would enforce homogeneity: every website would have the same “neutral” point of view. This is the opposite of true diversity.

To use an obvious example, neither the Democratic National Committee nor the Republican National Committee websites would pass a political neutrality test. Government compelled speech is not the way to ensure diverse viewpoints. Permitting websites to choose their own viewpoints is.

And then there's that comment that was popular among individual filers (and lots of idiots on Twitter) that because Section 230 allows websites to take down lawful speech, that's somehow a violation of the 1st Amendment. We've discussed many, many, many times how ridiculous that is, but why don't we hear it from Wyden and Cox:

Many individual commenters complained that their political viewpoints have been “censored” by websites ostensibly implementing their community guidelines, but actually suppressing speech. Several of these commenters have urged the FCC to require that all speech protected by the First Amendment be allowed on any site of sufficient size that it might be deemed an equivalent to the “public square.” In the context of this proceeding, that would mean Section 230 would somehow have to be “interpreted” to require this.

Comments within this genre share a fundamental misunderstanding of Section 230. The matter is readily clarified by reference to the plain language of the statute. The law provides that a website can moderate content “whether or not such material is constitutionally protected.”... Congress would have to repeal this language, and replace it with an explicit speech mandate, in order for the FCC to do what the commenters are urging.

Government-compelled speech, however, would be a source of further problems. Because the First Amendment not only protects expression but non-expression, any attempt to devise an FCC regulation that forces a website to publish content it otherwise would moderate would almost certainly be unconstitutional. The government may not force websites to publish material that they do not approve. As Chief Justice Roberts unequivocally put it in Rumsfeld v. Forum for Academic and Institutional Rights (2006), “freedom of speech prohibits the government from telling people what they must say.”...

And then they point out that many commenters don't seem to understand the 1st Amendment:

The answer to the commenters’ complaints of “censorship” must be twofold. First, many of the comments conflate their frustrations about Section 230 with the First Amendment. As noted, it is the First Amendment, not Section 230, that gives websites the right to choose which viewpoints, if any, to advance. Furthermore, First Amendment speech protections dictate that the government, with a few notable exceptions, may not dictate what speech is acceptable. The First Amendment places no such restrictions on private individuals or companies. Second, the purpose and effect of Section 230 is to make the internet safe for innovation and individual free speech. Without Section 230, complaints about “censorship” by the likes of Google, Facebook, and Twitter would not disappear. Instead, we would be facing a thousandfold more complaints that neither the largest online platforms nor the smallest websites are any longer willing to host material from individual content creators.

And changing Section 230 in the manner these commenters seek wouldn't actually help them:

Eroding the law through regulatory revision would seriously jeopardize free speech for everyone. It would be particularly injurious to marginalized viewpoints that aren’t within “the mainstream.” It would present near-insuperable barriers for new entrants attempting to compete with entrenched tech giants in the social media space. Not least of all, it would set a terrible example for the rest of the world if the United States, which created the internet and so much of the vast cyber ecosystem that has enabled it to flourish globally as an informational, cultural, scientific, educational, and economic resource, were to undermine the ability that hundreds of millions of individuals have each day to contribute their content to that result.

In the absence of Section 230, the First Amendment rights of Americans, and the internet as we know it, would shrivel. Far from authorizing censorship, the law provides the legal certainty and protection from open-ended liability that permits websites large and small to host the free expression of individuals, making it available to a worldwide audience. Section 230 is a bulwark of free speech and civil discourse that is more important now than ever, especially in the current political climate that is increasingly hostile to both.

In short, so many of these commenters are confused about the law, the history, the technology, how free speech works, how the internet works, and more. That much of this is also true of the NTIA petition itself is a shame.

The Cox and Wyden comment concludes by underlining the fact that they wrote 230 with the explicit intent of keeping the FCC away from regulating internet websites.

On one point we can speak ex cathedra, as it were: our intent in writing this law was to keep the FCC out of the business of regulating websites, content moderation policies, and the content of speech on the internet. The Petition asks the Commission to reverse more than two decades of its own policy by becoming, at this late stage in the life of Section 230, its regulatory interpreter. In so doing, the FCC would assume responsibility for regulating websites, content moderation policies, and the content of speech on the internet—precisely the result we intended Section 230 to prevent. To reach this perverse result, the FCC would “clarify” the words of Section 230 in ways that do violence to the plain meaning of the statutory text.

One would hope that such a detailed response from the authors of the law would put this whole nonsense to rest. But it won't.


Posted on Techdirt - 22 September 2020 @ 9:27am

Blowback Time: China Says TikTok Deal Is A Model For How It Should Deal With US Companies In China

from the because-of-course dept

We've already covered what a ridiculous, pathetic grift the Oracle/TikTok deal was. Despite it being premised on a "national security threat" from China, because the app might share some data (all of which is easily buyable from data brokers) with Chinese officials, the final deal cured none of that, left the Chinese firm ByteDance with 80% ownership of TikTok, and gave Trump supporters at Oracle a fat contract -- and allowed Trump to pretend he did something.

Of course, what he really did was hand China a huge gift. In response to the deal, state media in China is now highlighting how the Chinese government can use it as a model to force the restructuring of US tech companies and put their data under the control of local companies in China. This is from the editor-in-chief of The Global Times, a Chinese state-sponsored newspaper:


The US restructuring of TikTok’s stake and actual control should be used as a model and promoted globally. Overseas operation of companies such as Google, Facebook shall all undergo such restructure and be under actual control of local companies for security concerns.

So, beyond doing absolutely nothing to solve the "problem" that politicians in the US laid out, the deal works in reverse. It's given China justification to mess with American companies in the same way, and to push to expose more data to the Chinese government.

Great work, Trump. Hell of a deal.

Meanwhile, the same Twitter feed says that it's expected that officials in Beijing are going to reject the deal from their end, and seek to negotiate one even more favorable to China's "national security interests and dignity."

So, beyond everything else, Trump's "deal" has likely helped China while harming data privacy and protection -- and handed China a justification playbook: "See, we're just following your lead!"


Posted on Techdirt - 21 September 2020 @ 1:29pm

It's September 21st And Demi Adejuyigbe Has Another Great September 21st Video For Charity, Marred By Copyright Takedowns

from the copyright-ruins-everything dept

Copyright ruins freaking everything. Five years ago today, Demi Adejuyigbe gifted the world an incredible video of him dancing to Earth, Wind & Fire's classic song September. If you somehow have not seen it, I'm jealous of you for getting to watch it for the first time.

It's a reminder of the kind of gleeful content creation that only the internet enables. And it went pretty viral. So much so that Demi decided to do it again the following year. And then each year after that, with each video getting bigger and more ambitious (in somewhat incredible ways). He posted them all as threaded replies to the original 2016 video. And I'd embed the tweets here, except for the fact that copyright ruins everything and Sony apparently decided to take down the 2017 and 2019 videos from Twitter:

This is even more ridiculous because Demi turned his September 21st videos into a successful fundraising tool for tons of good and important charities. And Sony made them disappear from Twitter. All for copyright. To be fair, the videos are still available on YouTube, but it's crazy that they're missing from Twitter.

Anyway, Demi has released the latest such video and it's very much what I needed today. I'll embed the YouTube version since apparently that has a better chance of not being stomped into the ground by a Sony copyright claim. Please watch the whole thing, and consider donating at Sept21st.com:


Posted on Free Speech - 21 September 2020 @ 12:05pm

Judge Issues Preliminary Injunction Saying That The US Cannot Block WeChat, Says The Ban Raises 1st Amendment Concerns

from the good-to-see dept

While much of the news this weekend with regards to the President's plans to block Chinese messaging apps focused on the fake "deal" to avert a TikTok ban, things didn't go the President's way on his other planned ban. As you may recall, along with TikTok, Trump issued an executive order to ban WeChat, the very popular Chinese social network/messaging/everything app. Last week, we noted that a bunch of WeChat users in the US were trying to get an injunction to block the ban, as the Commerce Department's details about the ban proved that its stated goal of protecting Americans was nonsense.

The court held a hearing over the weekend (after also holding hearings on Thursday and Friday) and quickly issued a preliminary injunction, blocking the Commerce Department from putting the WeChat ban in place. As the judge notes, there are significant 1st Amendment concerns with the ban. Basically, the court says that the WeChat users have plausibly shown that banning the app likely violates the 1st Amendment and operates as a prior restraint:

On this record, the plaintiffs have shown serious questions going to the merits of their First Amendment claim that the Secretary’s prohibited transactions effectively eliminate the plaintiffs’ key platform for communication, slow or eliminate discourse, and are the equivalent of censorship of speech or a prior restraint on it.... The government — while recognizing that foreclosing “‘an entire medium of public expression’” is constitutionally problematic — makes the pragmatic argument that other substitute social-media apps permit communication. But the plaintiffs establish through declarations that there are no viable substitute platforms or apps for the Chinese-speaking and Chinese-American community. The government counters that shutting down WeChat does not foreclose communications for the plaintiffs, pointing to several declarations showing the plaintiffs’ efforts to switch to new platforms or apps. But the plaintiffs’ evidence reflects that WeChat is effectively the only means of communication for many in the community, not only because China bans other apps, but also because Chinese speakers with limited English proficiency have no options other than WeChat.

The plaintiffs also have shown serious questions going to the merits of the First Amendment claim even if — as the government contends — the Secretary’s identification of prohibited transactions (1) is a content-neutral regulation, (2) does not reflect the government’s preference or aversion to the speech, and (3) is subject to intermediate scrutiny. A content-neutral, time-place-or-manner restriction survives intermediate scrutiny if it (1) is narrowly tailored, (2) serves a significant governmental interest unrelated to the content of the speech, and (3) leaves open adequate channels for communication.... To be narrowly tailored, the restriction must not “burden substantially more speech than is necessary to further the government’s legitimate interests.”... Unlike a content-based restriction of speech, it “need not be the least restrictive or least intrusive means of serving the government’s interests. But the government still may not regulate expression in such a manner that a substantial portion of the burden on speech does not advance its goals.”...

As for the supposed "national security" interests of the US government? The court essentially says "sure -- if only the DOJ had shared any details."

Certainly the government’s overarching national-security interest is significant. But on this record — while the government has established that China’s activities raise significant national-security concerns — it has put in scant little evidence that its effective ban of WeChat for all U.S. users addresses those concerns. And, as the plaintiffs point out, there are obvious alternatives to a complete ban, such as barring WeChat from government devices, as Australia has done, or taking other steps to address data security.

The court did not go into the various other claims by the plaintiffs, though if the case continues they'll come up later. However, in closing out the ruling, the judge uses the President's own words in the executive order against him. After the DOJ told the court that the WeChat ban was important for human rights because China heavily censors communications on WeChat... the judge more or less says "um, isn't that what you're now trying to do?" and points to the President's own words about free speech in this very executive order:

Finally, at the hearing, the government cited a Washington Post article contending that a ban of WeChat is a net positive for human rights: “WeChat is a closed system that keeps its 1.2 billion users in a parallel universe where they can communicate as long as they don’t cross the lines, and banning it might eventually strengthen the voices of the Chinese diaspora.” This is another important point: the federal government — based on its foreign-policy and national security interests — may not want to countenance (or reward) the Chinese government’s banning apps outside of the Chinese government’s control and, more generally, censoring or punishing free speech in China or abroad. But as the President said recently in Executive Order 13925,

Free speech is the bedrock of American democracy. Our Founding Fathers protected this sacred right with the First Amendment to the Constitution. The freedom to express and debate ideas is the foundation for all of our rights as a free people.


The growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology. Today, many Americans [including the plaintiffs and others in the U.S. WeChat community] follow the news, stay in touch with friends and family, and share their views on current events through social media and other online platforms. As a result, these platforms function in many ways as a 21st century equivalent of the public square.

End result: the court issues a nationwide injunction saying that the Commerce Department cannot implement the WeChat ban it described last week.


Posted on Techdirt - 21 September 2020 @ 9:39am

The TikTok 'Deal' Was A Grift From The Start: Accomplishes None Of The Stated Goals; Just Helps Trump & Friends

from the a-joke dept

A week ago, we explained that the announced "deal" between Oracle and TikTok was a complete joke and what appeared to be a grift to let Trump claim he had done something, while really just handing a big contract to one of his biggest supporters. That was based on the preliminary details. As more details came out, it became even clearer that the whole thing was a joke. TikTok's investors actively recruited Oracle because they knew they needed to find a company that "Trump liked."

Over the weekend, Trump officially gave the "okay" on the Oracle deal (which now also involves Walmart). And before we get into the details of the deal and why it's a total grift, I'd like to just step back and highlight:

It is positively insane, Banana Republic, kleptocratic nonsense that any business deal should hinge on whether the President himself gives it a thumbs up or a thumbs down. Do not let all the insanity of this current administration hide this fact. If this had happened during the Obama administration, how crazy do you think Hannity/Carlson/Breitbart/etc. would be going right now about "big government" and claiming that the President is corrupt beyond belief? We should never, ever be in a situation where any President is giving the personal thumbs up or thumbs down to a business deal (and that's leaving out the fact that he forced this business deal in the first place with a blatantly unconstitutional executive order).

Okay, now back to the actual deal. Oracle and Walmart will team up to create a "new" (very much in quotes) company called TikTok Global that will be headquartered in the US. Of course, this is a joke. TikTok already has US operations. Oracle and Walmart will end up with a small equity stake in this "new" company (combined, about 20%), but the Chinese company ByteDance will still own the majority of the company and will still control the TikTok algorithm. While there is some chatter about how the data will be hosted in the US, for the most part that was already true. Oracle says that it will review things to make sure that the data is secure, but remember, this is the same Oracle that collects a shit ton of data on internet users via BlueKai, and then leaked it all. It's also the same Oracle that works closely with US spy agencies and isn't exactly known as being particularly good at security.

As the NY Times notes, this deal appears to accomplish literally nothing. As we said before, it was all performative, letting Trump claim he had "done something," when the rationale for the deal ("national security") was always bogus -- as proven by the fact that nothing in the new setup changes whatever national security questions there were about the app before. So, rather than forcing ByteDance to "sell" the company to protect "US national security," as the NY Times rightly notes, all that came out of this was:

A cloud computing contract for the Silicon Valley business software company Oracle, a merchandising deal for Walmart and a claim of victory for President Trump.

As former FCC chair Tom Wheeler tells the NY Times:

Vetting deals “is normally a process that involves multiple thoughtful people coming to the issue from multiple different concerns,” said Tom Wheeler, a former Democratic chairman of the Federal Communications Commission. “This appears as though what passes for process is what pleases one man: Donald J. Trump.”

Again, Banana Republic kleptocracy.

The NY Times also noted that Walmart either seemed to rush out its press release over the deal, or whoever wrote it had a heart attack in the process of composing it:

“This unique technology eliminates the risk of foreign governments spying on American users or trying to influence them with disinformation,” the company said. “Ekejechb ecehggedkrrnikldebgtkjkddhfdenbhbkuk.”

And that's not even getting into the whole issue of the mysterious $5 billion education fund. With the announcement, Oracle and Walmart said the new company would "pay $5 billion in new taxes to the Treasury," and then separately that it would "develop an educational curriculum driven by artificial intelligence to teach children basic reading, history and other subjects." Those two points got conflated to suggest that it was putting $5 billion into that project -- which sounded suspiciously like the finder's fee Trump demanded when first talking about forcing TikTok to be sold.

This got even more insane when Trump declared that he wanted the $5 billion to be used for his new 1776 history project, which is his new fascistic indoctrination education program, which Trump and his idiot followers insist is necessary because they falsely believe that kids are being indoctrinated to hate America (they're not -- they're just finally being taught, at least a little bit, that slavery is a key part of American history).

And the story got even crazier when, after Trump talked about all of this, ByteDance came out with a giant shrug, saying it was totally unaware of any $5 billion education fund and appeared to have no interest in actually creating one.

It seems that most of the confusion was -- as per usual -- on the part of our not very intelligent President, who couldn't comprehend that the small education fund was different from the $5 billion, which is merely an estimate of future tax revenues to the Treasury that, given the tax breaks this same President has helped set up, will probably never actually materialize.

It's like a clusterfuck of stupidity, corruption and kleptocracy. It's America in 2020.
