Late last year, while the COVID-19 pandemic was gearing up to hit its peak here in the States, we wrote about one college student and security researcher taking on Proctorio, a software platform designed to keep remote students from cheating on exams. Erik Johnson of Miami University made a name for himself on Twitter not only for giving voice to the criticism Proctorio's software has faced over its privacy implications and its failure to work correctly for students of varying ethnicities, but also for digging into Proctorio's source code, which is visible to anyone who downloads the software. But because he posted that code on Pastebin to demonstrate his critique of Proctorio, the company cried copyright infringement and initially got Twitter to take his tweets down, before they were later restored.
But if Proctorio thought that would be the end of the story, it was wrong. The EFF has now gotten involved and has filed a lawsuit against Proctorio in an effort to end any online harassment of Johnson.
The lawsuit intends to address the company’s behavior toward Johnson in September of last year. After Johnson found out that he’d need to use the software for two of his classes, Johnson dug into the source code of Proctorio’s Chrome extension and made a lengthy Twitter thread criticizing its practices — including links to excerpts of the source code, which he’d posted on Pastebin. Proctorio CEO Mike Olsen sent Johnson a direct message on Twitter requesting that he remove the code from Pastebin, according to screenshots viewed by The Verge. After Johnson refused, Proctorio filed a copyright takedown notice, and three of the tweets were removed. (They were reinstated after TechCrunch reported on the controversy.)
In its lawsuit, the EFF is arguing that Johnson made fair use of Proctorio’s code and that the company’s takedown “interfered with Johnson’s First Amendment right.”
“Copyright holders should be held liable when they falsely accuse their critics of copyright infringement, especially when the goal is plainly to intimidate and undermine them,” said EFF Staff Attorney Cara Gagliano in a statement.
Frankly, it's difficult to understand what Proctorio's rebuttal to any of that would be. What Johnson did with his tweets and the replication of the source code that was the subject of his criticism is about as square an example of fair use as I can imagine. The use was not intended to actually replicate what Proctorio's software does. Quite the opposite, in fact. It was intended as evidence for why Proctorio's software should not be used. It was limited in scope, used as part of a critique of the company's software. And it was decidedly non-commercial in nature.
In other words, it was clearly an attempt by Proctorio to silence a critic, rather than any legitimate concern over the reproduction of the source code, which is again freely available to anyone who downloads the browser extension. It's also worth noting that there is a pattern of behavior of this sort of thing by Proctorio.
Proctorio has engaged critics in court before, although more often as a plaintiff. Last October, the company sued a technology specialist at the University of British Columbia who made a series of tweets criticizing the platform. The thread contained links to unlisted YouTube videos, which Proctorio claimed contained confidential information. The lawsuit drew ire from the global education community: hundreds of university faculty, staff, administrators, and students have signed an open letter in the specialist’s defense, and a GoFundMe for his legal expenses has raised $60,000 from over 700 donors.
It's the kind of behavior that doesn't end just because some tweets get reinstated or there is a modicum of public outrage. Instead, it takes a concerted effort by groups like the EFF to force a corporate bully to change its ways. Given Proctorio's bad behavior in all of this, let's hope the courts don't let them off the hook.
Summary: Google has long been responsive to court orders demanding the removal of content, if they're justified. Google has fought back against dubious orders originating from "right to be forgotten" demands from outside the US, and has met no small amount of DMCA abuse head on. But, generally speaking, Google will do what's asked if there's a legal basis for the asking.
But not everyone approaching Google acts in good faith. First, there are any number of bad actors hoping to game the system to juice their Google search rankings.
For a couple of years, these bad actors managed to make some search engine optimization (SEO) inroads. They were able to fraudulently obtain court orders demanding the removal of content. The worst of these companies didn't even bother to approach courts. They forged court orders and sent these to Google to get negative listings removed from search results.
This scheme opportunistically preyed on two things: Google's apparent inability to police its billions of search results and the court system's inability to thoroughly vet every defamation claim.
But the system -- not the one operated by the US government or Google -- prevailed. Those targeted by bogus takedown demands fought back, digging into court dockets and the people behind the bogus requests. Armed with this information, private parties approached the courts and Google and asked that content that had been illicitly removed be reinstated.
Decisions to be made by Google:
Should Google act as an intercessor on behalf of website operators, or should it just act as a "dumb pipe" that passes no judgment on content removal requests?
Does manual vetting of court orders open Google up to additional litigation?
Does pushing back against seemingly questionable court orders allow Google to operate more freely in the future?
Questions and policy implications to consider:
Given the impossibility of policing content delivered by search results, is it wrong to assume good faith on the part of entities requesting content removal?
Is it possible to operate at Google's scale without revamping policies to reflect the collateral damage it can't possibly hope to mitigate?
If Google immunizes itself by granting itself more discretion on disputed content, does it open itself up to more direct regulation by the US government? Does it encourage users to find other sources for content hosting?
Resolution: Google chose to take more direct action on apparently bogus court orders fraudulently obtained or created by reputation management firms. It also moved against efforts to remove content that may have been negative, but not defamatory, in response to multiple (private) investigations of underhanded actions by those in the reputation management field. Direct moderation -- by human moderators -- appears to have had a positive effect on search results. Since this wave of abuse back in 2016, shadier operators have steered clear of manipulating search results with bogus court orders.
Last year, the EU's top court threw out the Privacy Shield framework for transferring personal data between the EU and US. The court decided that the NSA's surveillance practices meant that the personal data of EU citizens was not protected to the degree required by the GDPR when it was sent to the US. This was the second time that such an agreement had been struck down: before, there was Safe Harbor, which failed for similar reasons. The absence of a simple procedure for sending EU personal data to the US is bad news for companies that need to do this on a regular basis. No wonder, then, that the US and EU are trying to come up with a new legal framework to allow it, as this CNBC story notes:
Officials from the EU and U.S. are "intensifying negotiations" on a new pact for transatlantic data transfers, trying to solve the messy issue of personal information that is transferred between the two regions.
Even if they manage to come up with one, there's no guarantee that it won't be shot down yet again by the courts, unless the underlying issues of NSA surveillance are addressed in some way -- no easy task. Meanwhile, there's been a fascinating development on the US side, reported here by The Irish Times:
The US Senate is to debate a proposal to limit foreign countries' access to US citizens' personal data and to introduce a licence requirement for foreign companies that trade in this information.
The draft "Protecting Americans' Data From Foreign Surveillance Act", presented on Thursday by Democratic Senator Ron Wyden of Oregon, is aimed primarily at curbing the sale and theft of data by "shady data brokers" to "hostile" foreign governments such as China.
The law may be aimed primarily at China, but its reach is wide, and it could hit an unlikely target. As the Irish Council for Civil Liberties (ICCL) explains, the new Bill (pdf) aims to stop the personal data of US citizens being transferred to locations with inadequate data protection -- just as the EU's GDPR does. But according to the ICCL, one country that may fall into this category of dodgy data handling is Ireland:
ICCL understands from those who wrote the draft Bill that Ireland's failure to enforce the GDPR is of particular concern. The Bill intentionally uses language from the GDPR, and targets this enforcement failure. The draft Bill makes clear that merely enacting strong data protection law such as the GDPR is not enough. That law must be enforced.
Most digital giants have their European headquarters in Ireland. Under the GDPR, it is Ireland's Data Protection Commission (DPC) that must investigate and ultimately fine these companies for their GDPR infringements anywhere in the EU. The DPC has opened many data privacy inquiries (pdf), but has so far failed to impose serious fines. Without strict enforcement by the Irish authorities, there is a growing feeling that the GDPR could be fatally undermined. Hence the risk that the US might not allow personal data to be transferred to Ireland, if the new "Protecting Americans' Data From Foreign Surveillance Act" becomes law. Given the long-standing concerns over the protection of personal data flows from the EU to the US, that would be a rather ironic turn of events.
A decade ago we wrote a post about what we called Schrodinger's Download, which was that the big companies in the music space would refer to digital downloads as a sale or a license in varying ways depending on which benefited them the most. This was most evident in lawsuits between artists and labels, especially with contracts signed in the pre-digital era, where the royalties for "licensing" were much higher than the royalties for "sales." In those cases, the labels tried to claim that MP3 downloads were "sales" in order to pay the lower sales royalty rate -- but, on the flip side, when there were cases about reselling those files, suddenly the labels would insist that wasn't allowed, since it wasn't actually a sale, but a license.
And, of course, over the years, we've seen this play out in many ways -- especially with our never ending series of posts on how you don't own what you've bought, as more and more companies try to use technology and DRM to retain control over things you've "purchased." Last year, we wrote about someone suing Amazon for claiming that she had "purchased" movie downloads, but the fine print showing that she was merely "renting" them. The argument was that this was false advertising. That case is still going, but what we hadn't realized was that someone else had filed a very similar case against Apple, arguing the same thing. And, yes, it's the same lawyers on both cases...
And even though the Apple case was filed three months after the Amazon case, it's actually seen more progress. This week the judge denied Apple's motion to dismiss (first spotted on Courthouse News), saying that there's enough of a case to move forward. Apple tried to argue that the harm here is merely speculative. It hasn't actually removed the plaintiff's downloads. But the court says that Apple's wrong about that:
Apple argues that Plaintiff’s alleged injury — which it describes as the possibility that the purchased content may one day disappear — is not concrete but rather speculative.... This, however, as Plaintiff points out, misconstrues the injury. Plaintiff responds that his injury is not that he may one day lose access to his content.... Rather the injury Plaintiff asserts, is that he spent money purchasing the content that he wouldn’t have otherwise as a result of Apple’s misrepresentation.... This occurred at the time of purchase.
Another point raised by Apple is that "no reasonable consumer would believe" that when you buy a digital file, it means that it will always be available for you on iTunes. But the Court says, uh, yeah, actually, plenty of reasonable consumers probably would believe that, because that's what "buy" means:
Apple contends that “[n]o reasonable consumer would believe” that purchased content would remain on the iTunes platform indefinitely. Id. at 12. But in common usage, the term “buy” means to acquire possession over something. Buy Definition, merriam-webster.com, https://www.merriam-webster.com/dictionary/buy (13 April 2021). It seems plausible, at least at the motion to dismiss stage, that reasonable consumers would expect their access couldn’t be revoked.
The court did dismiss a few extraneous claims, but the key ones can now move forward -- and there's a decent chance that this will eventually become a class action lawsuit.
While some might argue that it doesn't really matter whether you're "buying" or "renting" this content, it really does matter in the legal sense, and big companies have used the distinction to their own advantage (often swapping out the words when convenient, as noted up top). Forcing the companies to actually be upfront about this stuff would be much better -- and might even get them to provide more accurate descriptions of what they're really offering.
When 17-year-old Darnella Frazier started recording video of Minneapolis policeman Derek Chauvin murdering George Floyd, she initiated a series of historic events that led to Chauvin’s conviction.
The constitutional protections enjoyed by U.S. citizens empower and encourage everyday Americans to discover, record, expose and distribute evidence of governmental malfeasance. This freedom to publicize crimes committed by state actors creates the possibility of improving policing and making the administration of justice more sensitive, effective and responsive.
To understand how the United States developed this unconstrained news culture, you need to return to Minneapolis, to a moment one century ago, when a newspaper exposed police corruption and provided a key turning point in protecting the American public’s right to expose governmental crimes.
Press abuse vs. press limits
Jay Near always knew there were bad cops in Minnesota.
Today, Near is remembered – if at all – for his legendary Supreme Court victory in the 1931 U.S. Supreme Court decision known as Near v. Minnesota.
In 1927, Near and his business partner were prevented from publishing because The Saturday Press was deemed in violation of Minnesota’s “Public Nuisance Law.” That law outlawed publishing or circulating “obscene, lewd, and lascivious” or “malicious, scandalous and defamatory” materials.
Near sued to lift the prohibition, and his case made it to the Supreme Court, where his publication rights were ultimately vindicated. Near v. Minnesota opened up the modern version of press freedom we recognize today. Calling the Minnesota Public Nuisance Law “the essence of censorship,” a five-justice majority struck it down.
Essentially, the high court ruled that the U.S. Constitution allowed the abuse of press freedom in order to protect the most vibrant and robust public discussion possible. The Court had no illusions – the judges were well aware The Saturday Press published inflammatory misinformation. But in assessing the costs of censorship versus the benefits of liberty, the majority sided with the racist crank against the state of Minnesota.
Making the connection
The expansive media freedoms originating in the First Amendment, and later enshrined in Supreme Court decisions like Near v. Minnesota, would continue into the internet age with Section 230 of the Communications Decency Act. That’s the law that allows people to post freely on internet sites while protecting the internet companies from legal jeopardy caused by those materials.
For better or worse, Section 230 establishes media freedom across the internet in the U.S. And it is this law, built on the traditions of media freedom, that allowed Darnella Frazier – and all citizens who follow in her footsteps – to stand up to the government in ways previously unimaginable.
A portion of the front page of The Saturday Press, Oct. 15, 1927, published by Jay Near, which figures prominently in U.S. press freedom law. (Minnesota Historical Society)
But some stand ready to abandon these long-established legal and cultural protections.
Had Minnesota’s Public Nuisance Law survived Near’s challenge, it very well might have prevented publication of Frazier’s video. Those images could easily have been deemed “obscene,” or a “malicious” or “scandalous” incitement to violence.
U.S. states can no longer outlaw media organizations as "public nuisances." Yet tensions over media freedom now exist that have the potential to lead to limits on the public's ability to record and distribute police crimes.
Critics who want to get rid of Section 230 regularly blame it for the plethora of “fake news,” misinformation, and hate speech that infects our web and social media. Because Twitter, Facebook, TikTok and others can’t be held liable for users’ content, the companies have felt little pressure, until recently, to moderate the blizzard of material they publish every second.
The cost of limiting the press
But media freedom is always a double-edged sword. Without Section 230 protection, social media companies would likely behave cautiously to minimize even the hint of legal jeopardy. Frazier’s video, in such a world, might be deemed too risky to distribute.
The immunity provided by Section 230 encourages YouTube, Facebook, Twitter and others to let people post pretty much any news, information or video their users deem newsworthy or interesting.
The repeal of Section 230 could result in a system in which inflammatory or provocative news or images that might outrage or incite people could be deemed too socially destructive or disturbing of the peace by internet companies. And this could include images and video such as the murder of George Floyd.
The idea that U.S. citizens can report, publish, print and disseminate information that might be terribly damaging to authority is a radical one. Even within the United States, this freedom is often considered too expansive. In Oklahoma, for example, a new bill criminalizing the filming of police officers recently passed both houses of the state legislature, and elsewhere the rights of citizens and journalists to record police behavior occurring in public are regularly violated.
The direct line from Minneapolis in the 1920s to Minneapolis in the 2020s is the notion that protecting people’s rights promises to foster an active, aware and engaged citizenry – and that violating those rights by repressing or censoring information is deeply anti-American.
The Complete 2020 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux OS using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $59.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Ah, this one takes me back to the early days of Techdirt, when the biggest nonsense we were writing about was giant corporate bullies threatening (or in some cases suing!) over so-called "sucks sites" (that's an article from almost 20 years ago!). The issue was that people who were upset with a particular company would register a domain like CompanySucks.com to (usually) put up a protest site. The company (and its lawyers) would then threaten to sue the individual for trademark infringement. There were some mixed rulings over those sites, but in general courts have decided that sucks sites are not trademark infringement, and are protected under a variety of theories -- including a lack of any possible confusion and because they're nominative fair use.
You'd have hoped that, by now, big company lawyers would recognize all of this. Apparently not Facebook's. Now, to be fair, as we recently discussed, for companies like Facebook, often they carefully police domains that make use of similar URLs in order to cut off sketchy phishing and scam sites. But it's one thing to go after such scammers... and it's another to go after someone who is obviously engaging in criticism.
The site, dontuseinstagram.com, is now designed to pretty much do what it says on the tin: give you reasons why you shouldn't use Instagram. Whether or not you agree with that messaging, it's clearly not infringing on Facebook/Instagram's trademarks. Someone should probably tell Instagram's lawyers. Because they sent its owner, Paul Kruczynski, a threat letter. In fact, they sent this threat letter before he'd even launched anything at the site, basically trying to intimidate him into abandoning the domain before he'd even done anything with it.
To Whom It May Concern,
We are writing concerning your registration and use of the domain name dontuseinstagram.com, which contains the Instagram trademark.
You are undoubtedly familiar with Instagram and its worldwide renown in providing photo sharing and editing services, online networking and related products and services through a number of channels, including through its mobile application software and its website available at Instagram.com. Instagram owns exclusive rights to the INSTAGRAM trademark, including rights secured through common law use and registration in the United States (Reg. Nos. 4,170,675 and 4,146,057) and internationally. Instagram is a global leader in photography software for mobile devices, with over 800 million monthly active accounts. Due to Instagram's exponential growth and immense popularity, the Instagram brand, is frequently, if not daily, referenced in the media and pop culture. Its fame entitles it to broad legal protection.
We have recently discovered that you registered the domain name, which incorporates the famous INSTAGRAM mark. Instagram has an obligation to its users and the public to police against the registration and/or use of domain names that may cause consumer confusion as to affiliation with or sponsorship by Instagram, dilute the distinctiveness of its INSTAGRAM mark, or otherwise tarnish the mark. Accordingly, in addition to civil actions, Instagram and its parent Facebook have filed numerous proceedings pursuant to ICANN'S Uniform Domain-Name Dispute- Resolution Policy (http://www.icann.org/en/help/dndr/udrp) to secure the transfer of infringing domain names. Moreover, the Anticybersquatting Consumer Protection Act provides for serious penalties (up to $100,000 per domain name) against persons who, without authorization, use, sell, or offer for sale a domain name that infringes another's trademark.
While Instagram respects your right of expression and your desire to conduct business on the Internet, Instagram must take action to stop the misuse of its intellectual property. As you can imagine, various third parties around the world have attempted to wrongfully capitalize on Instagram's reputation by registering domain names that include or are derived from the INSTAGRAM brand. Such names are confusingly similar to, dilutive of, and can tarnish the INSTAGRAM mark.
We understand that you may have registered dontuseinstagram.com without full knowledge of the law in this area. However, Instagram is concerned about your use of the Instagram trademark in your domain name. Accordingly, we must insist that you immediately cease using and disable any site available at that address, and either delete the domain name or transfer it to Instagram. You should not sell, offer to sell, or transfer the domain name to any third party.
Please confirm in writing that you will agree to resolve this matter as requested. If we do not receive confirmation that you will comply with our request, we will have no other choice but to pursue all available remedies against you.
Sincerely,
Instagram IP & DNS Enforcement Group
Instagram, Inc.
Kruczynski was able to line up the Cyberlaw Clinic at Harvard Law's Berkman Klein Center to help him respond to those Instagram lawyers, explaining to them in fairly great detail, in a letter from Kendra Albert, just how totally full of shit their threat letter is:
The legal claims that your letters make are frivolous. Even worse, your overreach imperils Mr. Kruczynski’s First Amendment rights. Mr. Kruczynski’s domain name is not likely to cause consumer confusion, which Instagram would be required to prove in order to succeed on a trademark infringement claim. See Boston Duck Tours, LP v. Super Duck Tours, LLC, 531 F.3d 1, 12 (1st Cir. 2008) (citing Borinquen Biscuit, 443 F.3d 116 (1st Cir. 2006)).

To establish likelihood of confusion, a trademark owner “must show more than the theoretical possibility of confusion.” Int'l Ass'n of Machinists & Aero. Workers, AFL-CIO v. Winship Green Nursing Ctr., 103 F.3d 196, 198 (1st Cir. 1996). For courts to find a likelihood of confusion, it has to be shown that there is “a likelihood of confounding an appreciable number of reasonably prudent purchasers exercising ordinary care.” Id. Given that Mr. Kruczynski’s domain has not even been launched, Instagram cannot show more than a theoretical possibility of confusion. Moreover, as Mr. Kruczynski’s website dontuseinstagram.com currently resembles nothing like the Instagram website, it is inconceivable that any reasonably prudent purchaser exercising ordinary care would confuse the two websites.

Even in the scenario that Mr. Kruczynski’s domain dontuseinstagram.com becomes live and operates in a way Mr. Kruczynski originally intended it to, Instagram will not be able to establish that there is likelihood of confusion in Mr. Kruczynski’s registration and use of dontuseinstagram.com under the First Circuit’s eight-factor test. See Oriental Fin. Grp., Inc. v. Cooperativa De Ahorro Crédito Oriental, 698 F.3d 9, 17 (1st Cir. 2012) (citing Beacon Mut. Ins. Co. v. OneBeacon Ins. Grp., 376 F.3d 8, 15 (1st Cir. 2004)). Mr. Kruczynski did not intend to claim any associations with the Instagram mark and did not intend to compete with Instagram. In fact, Mr. Kruczynski’s domain would serve as a platform to criticize Instagram’s user privacy violations, not as a social media platform for users to share photos and accumulate followers. The goods or services provided by dontuseinstagram.com would be significantly different from those provided by Instagram, and the channels of trade and advertising would be very different as well. It is unimaginable that there would be evidence of actual confusion where Instagram users actually confuse Instagram with a website criticizing Instagram, starting from the domain name itself. Even assuming that Instagram has a strong mark that most people recognize, it is overreaching for Instagram to forbid others from registering or using any name that mentions Instagram without due regard of relevant laws.

The existence of a parked page on Mr. Kruczynski’s domain does not create trademark infringement where there previously was not any. See, e.g., Acad. of Motion Picture Arts & Scis. v. GoDaddy.com, Inc., No. CV 10-03738 AB (CWx), 2015 U.S. Dist. LEXIS 120871 (C.D. Cal. Sep. 10, 2015) (holding that the plaintiff failed to meet its burden of proving that a domain name registrar who operates parked page programs acted with a bad faith intent to profit from the plaintiff’s marks). In fact, the existence of the parked page is largely irrelevant to the discussion of trademark infringement here, and you are overstepping by demanding Mr. Kruczynski remove the parked page on his own registered domain.

Your claim of Mr. Kruczynski’s alleged trademark infringement is ungrounded in law. The non-infringing nature of the use would have been obvious had an attorney even glanced at the name of the site.

To the extent that these emails were sent using an automated process that merely checks to see if a domain contains the word Instagram, and then automatically requests the transfer of a domain to you if it does, such behavior plays on the threat of litigation to suppress potentially lawful speech. I am aware that there may be many domains registered with the Instagram mark in them, some of which may be used for phishing or other nefarious purposes. But that does not justify a “spray and pray” strategy where you automatically send notices of infringement without any human review. Such notices may serve to unlawfully intimidate critics, requiring them to find legal counsel.
That reply was sent back in November, and it requests that Instagram retract the threat letter and provide "a clear statement that you do not intend to file suit over his ownership of dontuseinstagram.com." Somewhat optimistically, it also said that such a letter should "be accompanied by a discussion of what processes you will implement in order to ensure any messages you may send to domain owners will not attempt to intimidate lawful users of the Instagram wordmark."
Neither happened. Instead, Instagram just went silent. It's quite likely that the human being who received the response letter realized how bad an idea it was to send that original threat letter, even if automated, but has simply moved on to threatening someone else. But Instagram deserves to be called out for practices that can lead to real intimidation for people who, unlike Kruczynski, don't have access to a knowledgeable lawyer to respond to the threat.
That's unfortunate, because it means that there are no consequences for sending out such bogus, censorial threat letters. Well, other than having a site like Techdirt call out your stupid threats.
You might recall how the Wisconsin GOP, with Donald Trump and Paul Ryan at the head of the parade, struck what they claimed was an incredible deal with Foxconn to bring thousands of high paying jobs to the state. Initially, the state promised Foxconn a $3 billion subsidy if the company invested $10 billion in a Wisconsin LCD panel plant that created 13,000 jobs. The amount of political hype the deal generated was utterly legendary, helping market Trump as a savvy dealmaker who'd be restoring technological greatness to the American Midwest.
Years later, the deal continues to be exposed as little more than a taxpayer-funded bullshit parade.
After several years of reports making it very clear Foxconn never intended to live up to its promises (and a lot of half-truths and tap dancing by Foxconn), the company finally acknowledged this week that the project is being dramatically scaled back:
"Taiwan electronics manufacturer Foxconn is drastically scaling back a planned $10 billion factory in Wisconsin, confirming its retreat from a project that former U.S. President Donald Trump once called “the eighth wonder of the world.” Under a deal with the state of Wisconsin announced on Tuesday, Foxconn will reduce its planned investment to $672 million from $10 billion and cut the number of new jobs to 1,454 from 13,000."
Experts had repeatedly warned that the deal was too good to be true, and that as structured it would likely never recoup the taxpayer cost. Those warnings were ignored. And unsurprisingly, as the subsidy grew fatter, the promised factory shrank further and further, to the point where it's incredibly unlikely anything meaningful will be built at all. What does get built will be far, far smaller, and Wisconsin will dole out "just" $80 million in incentives, a fraction of the nearly $3 billion originally promised.
Like most boondoggles, this could have all been avoided with just a modicum of attention and skepticism. Outlets like The Verge had noticed that this deal was going absolutely nowhere as early as 2019, and Foxconn spent several years throwing around completely meaningless jargon to try and obfuscate that fact:
"Throughout its gyrations, Foxconn maintained that it would create 13,000 jobs, though what those 13,000 people would be doing shifted gradually from manufacturing to research into what Foxconn calls its “AI 8K+5G ecosystem.” Other than buzzwords for high-resolution screens and high-speed cell networks, what this ecosystem is has never been fully explained. In February, a Foxconn executive cheerfully likened the company’s vague, morphing plans to designing and building an airplane midflight."
While Trump called the Foxconn deal "the eighth wonder of the world," it wound up being little more than a pile of taxpayer-subsidized bullshit. And even scaled back, that's all cold comfort now to the Wisconsin residents forced to move, or the host town forced to pay $160 million to buy properties and relocate families in the construction zone. Or the state leaders forced to spend $200 million on road improvements, tax exemptions, and grants to local governments for worker training and employment.
We tend to talk about many of the nuanced and intricate problems with our current copyright culture, but the 10,000 foot view of the problem is essentially that copyright tends to make culture disappear. It can do this in lots of ways, but one of the least recognized of them is simply that, in a culture of copyright maximalism, many content producers don't release the content they want to release out of fear of the kind of reprisal that has been seen in other cases.
That appears to be the case with 8-Bit Theater releasing a book featuring the entirety of the comics that were published, just without the pictures. Instead, it's just a "script" release. Why? Well, because those pictures are based on old Final Fantasy assets.
From 2001 to 2009, writer Brian Clevinger of Atomic Robo fame produced a hilarious webcomic called 8-Bit Theater, which follows the misadventures of a dysfunctional adventuring party. Unfortunately, that adventuring party is comprised of Final Fantasy game sprites, so Clevinger can’t reprint them without getting sued to hell by Square Enix. That is unless he leaves out the images and creates a 20th anniversary book featuring just the scripts. That’s what he’s doing. The script thing.
According to the Kickstarter page for the 8-Bit Theater 20th Anniversary Complete Script Book, which is just now reaching its $28,000 goal after less than a day since being posted, fans have been clamoring for some sort of print edition of the beloved series for years.
Now, to be clear, Square Enix didn't seem to have any problem with the web comic being produced to begin with. But it seems clear that no deal was worked out with the company to allow this physical book to be published using the images from the comic. And, as the Kickstarter results indicate, this is a book people very much want. Unfortunately, they very much won't get it in its original form, due to fear of copyright reprisal.
Instead, backers will get the 8-Bit Theater 20th Anniversary Complete Script Book Do Not Sue Edition.
Funny? Sure, in a way. But it's also a little sad and a lot irritating that something as transformative as this comic, still very much in demand by the public, cannot be produced in book form the way it should be, simply out of fear of being sued for copyright infringement. After all, Square Enix loses nothing by the production of this book.
But we all lose when culture disappears out of fear of copyright culture.