Last week, we launched our latest t-shirt (and hoodie!) on Teespring: the Takedown tee. Now the campaign is nearly at an end, so if you want one you've only got until Monday, August 1st at 8:00pm PT. Otherwise you'll have to wait for the campaign to restart, which could happen soon or it could take ages — so don't delay!
Men's and women's t-shirts are $20, hoodies are only $35, and everything's available in a variety of colors. Hurry up and get yours today!
So you may have seen reports last week charging CloudFlare and some other tech companies with "aiding" internet malware pushers. The "report," called "Enabling Malware," was announced in a press release last week from the Digital Citizens Alliance -- a group that describes itself as representing consumer interests online:
Digital Citizens is a consumer-oriented coalition focused on educating the public and policy makers on the threats that consumers face on the internet and the importance for internet stakeholders – individuals, government and industry - to make the Web a safer place.
And while the story wasn't picked up that widely, a few news sources did run with it and repeated the false claim that DCA is a consumer advocacy group. TorrentFreak, FedScoop and Can-India all simply repeated DCA's claim to represent the interests of "digital citizens."
But that leaves out the reality: DCA is a group funded mostly by Hollywood, with additional support from the pharmaceutical industry, set up to systematically attack the internet and internet companies for failing to censor the sites and services that Hollywood and Big Pharma dislike. DCA has been instrumental in pushing false narratives about all the "evil" things online -- "counterfeit fire detectors! fake drugs!" -- in order to push policy makers to institute new laws to censor the internet. DCA buries this basic fact in its own description, merely noting that it "counts among its supporters... the health, pharmaceutical and creative industries."
The organization was formed in late 2012, partly as a response to the MPAA's big loss around SOPA. Recognizing that it needed to change tactics, the MPAA basically helped get DCA off the ground to push scare stories about horrible internet companies enabling "bad things" online, and how new laws and policies had to be created to stop those evil internet companies. Much of this was merely speculation for a while, based on the fact that every DCA report seemed to wrongly blame internet companies for other people using those tools to do bad things online. However, it became explicit thanks to the Sony Hack, which revealed that a key part of the MPAA's anti-Google plan, dubbed Project Goliath, involved having the DCA pay Mike Moore, Mississippi's former Attorney General (and mentor to its current AG, Jim Hood), to lobby Hood to attack Google.
That doesn't sound like a project of organizations just interested in "digital safety." It sounds like a project designed to attack internet companies. And, thus, it should be no surprise that every time DCA's name pops up, it's attacking internet companies. It was the organization that put out a report getting a variety of state Attorneys General (sense a pattern here?) to attack YouTube, because some criminals posted videos on YouTube. Rather than recognizing that such videos are a way to gather evidence and go after actual criminals, DCA decided that YouTube should be blamed for not taking those videos down fast enough. It was also the organization that put out a laughable report declaring the cloud storage site Mega a "haven" for piracy, where the methodology made no sense. Mega encrypts its content, but DCA and its researchers didn't seem to understand that, so they simply found a few inbound links to infringing works, and extrapolated out that a huge percentage of files on Mega were infringing.
DCA's boss, Tom Galvin, was magically chosen to present to the National Association of Attorneys General back in 2013, just months after the organization was founded, and in timing that (coincidentally, I'm sure) lines up almost exactly with the MPAA's decision (as revealed in the Sony emails) to focus on state Attorneys General to attack Google. DCA's Twitter feed regularly retweets the MPAA and various other front groups set up by the legacy copyright industries, such as the Copyright Alliance.
In short, the Digital Citizens Alliance is not an alliance of "digital citizens" at all. It's a front group set up by the MPAA and some big pharmaceutical companies to pressure policy makers into getting internet companies to censor the internet. Don't buy it.
Well known anti-Muslim troll Pamela Geller has teamed up with a group called the American Freedom Law Center to file one of the dumbest lawsuits we've ever seen. There's so much wrong here it's difficult to know where to start. Here's the lawsuit itself, which is filed against US Attorney General Loretta Lynch, even though Geller's own story about the lawsuit falsely claims she's suing Facebook. She's not. She's suing the US government because Facebook relies on Section 230 of the Communications Decency Act in taking down some of her pages, and she claims, ridiculously, that Section 230 violates the First Amendment. The lawsuit is wrong on so many levels it's not even funny. Let's start with this, though -- Geller has long positioned herself as an extreme supporter of the First Amendment. And yet, she's now suing the government over CDA 230, a law which has probably done more than any other to guarantee that the First Amendment works on the internet.
The lawsuit talks up the vast open and public forums of the internet, which is accurate, but then argues that because there's so much content online, Section 230 no longer applies.
Unlike the conditions that prevailed when Congress first authorized regulation of the broadcast spectrum, the Internet can hardly be considered a "scarce" expressive commodity. It provides relatively unlimited, low-cost capacity for communication of all kinds.
And then it gets to the crux of her argument: that popular internet forums are so important, no one should ever be barred from using them:
Denying a person or organization access to these important social media forums based on the content and viewpoint of the person's or organization's speech on matters of public concern is an effective way of silencing or censoring speech and depriving the person or organization of political influence and business opportunities.

Due to the importance of social media to political, social, and commercial exchanges, the censorship at issue in this Complaint is an unmatched form of censorship.

Consequently, there is no basis for qualifying the level of First Amendment scrutiny that should be applied in this case.
Except, this is really, really confused. Section 230 does not enable censorship. A private company is free to deny service or moderate its own services as much as it wants. That's their right as a private company. This is not a Section 230 issue at all. Geller and her lawyers are hellishly confused. Yes, Section 230's (c)(2) includes a so-called Good Samaritan clause that basically says a site does not take on new liability for taking down content, but that's separate from the issue of deciding to moderate content at all. Facebook can take down your page whenever it wants and it's not a First Amendment issue, because Facebook isn't the government. And Section 230 has nothing to do with this at all, other than actually encouraging Facebook to leave up more speech, since it's not considered liable for its users' speech.
But Geller's lawyers don't seem to understand the law they're whining about.
Section 230 permits content- and viewpoint-based censorship of speech. By its own terms, § 230 permits Facebook, Twitter, and YouTube "to restrict access to or availability of material that [they] consider to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."
Except that's not what Section 230 does at all. Companies are already permitted to do that because they're private companies. All Section 230 says is that in removing content, that doesn't mean those companies suddenly have liability for other content that they left up. Geller and her lawyers simply don't understand what Section 230 does and says. And yet they're suing over it.
Section 230 confers broad powers of censorship, in the form of a "heckler's veto," upon Facebook, Twitter, and YouTube censors, who can censor constitutionally protected speech and engage in discriminatory business practices with impunity by virtue of this power conferred by the federal government.
Except it does no such thing. Actually, Section 230 frequently protects against the heckler's veto because it makes it clear that platforms don't have to do anything and they're still protected from liability. This is actually a stronger protection against a heckler's veto than basically every other country in the world, most of which have a DMCA-like "notice and takedown" system, which does lead to protected speech being deleted. Section 230 protects against that, and a very confused Geller and her lawyers get this backwards.
Section 230 is not tied to a specific category of speech that is generally proscribable (i.e., obscenity), nor does it provide any type of objective standard whatsoever. The statute does permit the restriction of obscenity, but it also permits censorship of speech that is "otherwise objectionable, whether or not such material is constitutionally protected." 47 U.S.C. § 230(c)(2)(A). Further, the subjective "good faith" of the censor does not remedy the vagueness issue, it worsens it.
This is just further confusion. The lawsuit argues as if this were about the government censoring speech, rather than private companies moderating speech -- something those companies have always been able to do, and which is itself protected by the First Amendment.
This lawsuit is the legal equivalent of that idiot who claims that any company moderating content is violating the First Amendment. And to that, I've got an obligatory xkcd for you:
From there, she goes on to complain about Facebook, Twitter and YouTube all taking down some of her content for terms of service violations, and insisting that Section 230 is to blame (it's not) and that her free speech rights have been denied (they have not).
Section 230 of the CDA, facially and as applied, is a content- and viewpoint-based restriction on speech in violation of the First Amendment.

Section 230 of the CDA, facially and as applied, is vague and overbroad and lacks any objective criteria for suppressing speech in violation of the First Amendment.

Section 230 of the CDA, facially and as applied, permits Facebook, Twitter, and YouTube to engage in government-sanctioned discrimination and censorship of free speech in violation of the First Amendment.
None of that is a remotely accurate description of Section 230. Not even close. Geller's blog post, which falsely claims she's suing Facebook rather than the US government, is just a long, extended whine about the fact that Facebook takes down her content when she violates its terms. Now, we've been vocal critics of Facebook's willingness to silence content and its almost arbitrary decision-making in determining what content is appropriate for Facebook and what is not, but we'd never suggest that Facebook doesn't have a legal right to make those decisions. To make a bizarre First Amendment argument here, trying to link Facebook to the government via the free speech protections of Section 230, is nonsensical. It's almost as if her lawyers didn't even realize the argument they're really trying to make (which would also be a non-starter) -- that Facebook, Twitter and YouTube are de facto public spaces -- and thus went with the even more bat-shit crazy misinterpretation of Section 230.
As for her lawyers at the American Freedom Law Center (AFLC), they're just as confused in a blog post about the lawsuit:
Section 230 provides immunity from lawsuits to Facebook, Twitter, and YouTube, thereby permitting these social media giants to engage in government-sanctioned censorship and discriminatory business practices free from legal challenge.
It's not government-sanctioned censorship. And the immunity it provides is just that these platforms don't lose their own protections against liability for the content they leave up just because they choose to take down some other content. Section 230 confers no special benefits on platforms for taking down content. It just says that taking down content won't cost them other protections -- protections, I should remind you, that help promote and protect free expression online.
While there have been some questionable CDA 230 rulings lately, this one is an easy one. It should be laughed out of court pretty quickly on the basis of "did you even read the law you're suing over?"
Google is big and successful. Some legacy entertainment companies have been struggling. For whatever reason, many of those companies have decided that Google's success must be the reason for their downfall, and they've been blaming Google ever since. It's pervasive and it's deeply ingrained. A few years ago, I ended up at a dinner with a recording industry exec (and RIAA board member) who was so absolutely positive that Google was deliberately trying to destroy his business that it was reaching delusional levels. Of course, these legacy players have been banging on this drum for so long that they've convinced some others that it must be true, including some content creators and politicians. They all believe that the correlation of Google's success and their own struggles must be about Google, and not their own failures to innovate. And their number one argument seems to be (ridiculously) that Google "profits" from piracy and therefore Google encourages piracy.
As this drumbeat has gotten louder and louder, Google has felt the need to respond. The company has, for many years, actually done plenty to try to stop piracy, rather than encourage it, and it's reached the point where Google is (stupidly, in my opinion, though perhaps politically necessary) actively appeasing the legacy industries, sometimes actively making its own search product worse. And, of course, as you would expect, these efforts are never enough for those industries. So now Google has taken to putting out a semi-regular report on how it fights piracy.
Today, Google came out with its latest such report, which again shows that Google goes much, much, much further than the law requires -- and even much further than many are demanding the company do. The headline number, which will get all the attention, is that YouTube's ContentID, by itself, has paid out over $2 billion. For some time now, Google has said that it's paid out over $3 billion to artists, but recent recording industry attack dogs have homed in on the fact that Google never broke out how much of that $3 billion came from ContentID. Now the company is breaking it out somewhat -- noting $3 billion to the music industry overall and $2 billion from ContentID alone. It also notes that over 98% of copyright management on YouTube is now handled via ContentID, rather than through DMCA takedown notices.
Of course, whether or not you think this is a good thing may depend heavily on your perspective. I appreciate that ContentID has created a new business model, but we've also seen how badly it performs in some situations, leading to censorship or to trollish behavior in which some people use it to claim the revenue of others.
The report also takes on the silly myth that Google likes to drive searchers to pirated content. It points out that the company uses the DMCA notices it receives as a signal to demote certain sites in search, and that almost no one runs the kinds of queries that still turn up infringing results (noting in a footnote that the examples it uses are ones that have been called out publicly by the RIAA and its friends):
Nevertheless, some critics paint a misleading picture by focusing on the results for rare, "long tail" queries, adding terms like "watch" or "free" or "download" to a movie title or performer's name. While the search results for these vanishingly rare queries can include potentially problematic links, it is important to consider how rare those queries are. Look at the relative frequency of these Google searches in 2015:
"Star Wars The Force Awakens" searched 402× more often than "Watch Star Wars The Force Awakens"

"Taylor Swift" searched 4534× more often than "Taylor Swift download"

"PSY Gangnam Style" searched 104× more often than "PSY Gangnam Style download"

"Mad Max" searched 836× more often than "Mad Max stream"

"Pixels" searched 240× more often than "Watch Pixels"
Google is obviously far from perfect, and as I've said in the past (and above!) I think the company goes way too far in trying to appease an industry that is placing a ton of misplaced blame on Google for its own failures to innovate and change with the times. But because so many people seem to be accepting the myths of the legacy industries, now Google feels the need to go even further and release these "guys, we're doing way more than any law has ever required" reports.
And while I haven't seen it yet, I can almost guarantee that the RIAA, MPAA and its various friendly groups will be rushing out press releases attacking this as "not enough." Because it's never enough when you can blame the more successful company for your own failures.
What almost no one knows is that these "warranty void if removed" stickers and warranty clauses are illegal under a federal law passed in 1975 called the Magnuson-Moss Warranty Act.
To be clear, federal law says you can open your electronics without voiding the warranty, regardless of what the language of that warranty says.
Apple (far from the only offender) maneuvers around this by hinting heavily that you're fucked if you choose to let anyone but an approved Apple tech crack open your iPhone.
Apple’s iPhone warranty is less explicit, but has this message in bold: "Important: Do not open the Apple Product. Opening the Apple Product may cause damage that is not covered by this Warranty. Only Apple or an AASP should perform service on this Apple Product." Apple is also known to refuse to service phones that have been opened by their owners or by third party repair professionals.
"May" and "should" are Apple's hedges against federal law. Apple figures this is enough to discourage people from doing repairs themselves or letting others do it for them. The letter of the law is respected. The spirit of the law, however, is subjected to a series of mean-spirited subtweets.
Apple further funnels repair work to its own techs by refusing to release schematics and other repair-related info. Various "right to repair" bills are seeking to open up these walled repair gardens, but these have faced heavy opposition from several companies -- the same ones that periodically petition the Library of Congress to make repairs/modifications of their products illegal under the DMCA.
Louis Rossmann’s YouTube channel has been an invaluable source of detailed tutorials for DIY repairs, some of them detailing how to perform component replacements rather than the whole-board approach typically taken by Apple Stores. But in a somewhat vague video posted last night, Rossmann indicates that they may be about to disappear.
While Rossmann doesn’t say so explicitly, he implies that he has received a takedown notice for his videos, and Reddit is speculating that Apple may be behind it. It’s unclear what might form the basis of any takedown notice, though Rossmann does express strong views on Apple’s approach to repairs, and some of the videos do include Apple schematics. We’ve reached out to Apple and will update with any response.
The video is indeed cryptic. Rossmann points out that videos can be downloaded and hints that his channel may not be live for long. He also alludes to "being strong" and ready for a long fight, while noting all of his videos may soon be deleted.
The hints dropped here were enough that Game Revolution ran with the story, but subsequently deleted it when Rossmann provided more information. 9to5Mac, however, kept its story live and updated it with Rossmann's comments. Apple isn't going to sue Rossmann or somehow shut down his YouTube account.
Louis Rossmann has posted a follow-up video in which he says that he has been contacted by IP lawyers acting for Apple, but is not currently facing a lawsuit. He said there is an issue with a schematic, but that Apple reportedly likes the channel.
This is better behavior than one would expect from Apple, considering its history of making repair/DIY-unfriendly devices. But it also shows Apple is still interested in limiting the amount of repair-related information the public has access to. If it's trying to keep him from displaying a schematic or using that info to help people repair their own devices, then its legal muscle will achieve the same end, without the collateral PR damage that would come from kicking an "unauthorized" repairman off the internet.
It's an understandable reaction to tragedy. When faced with the unthinkable -- like the death of a loved one in a terrorist attack -- people tend to make bad decisions. We saw this recently when the widow of a man killed in an ISIS raid sued Twitter for "providing material support to terrorists." Twitter's involvement was nothing more than the unavoidable outcome of providing a social media platform: it was (and is) used by terrorist organizations to communicate and recruit new members.
That doesn't mean Twitter somehow supports terrorism, though. Like most social media platforms, Twitter proactively works to eliminate accounts linked with terrorists. But there's only so much that can be done when all that's needed to create an account is an email address.
As difficult as it may be to accept, platforms like Twitter, Facebook, etc. are not the problem. Like any, mostly-open social platform, they can be used by terrible people to do terrible things. But they are not responsible for individual users' actions, nor should they be expected to assume this responsibility.
Another terrorist attack and another death have prompted a similar lawsuit [PDF], this one from the father of Nohemi Gonzalez, who was killed in the Paris terrorist attacks. The lawsuit contains a number of allegations, but every single one can be countered by Section 230. Reynaldo Gonzalez claims that Twitter, Facebook, and YouTube all provide "support" for terrorism by both refusing to take terrorist-related content/accounts down and not proactively policing their platforms for terrorist-linked users.
The lawsuit contains several quotes from pundits, terrorism experts, and government officials about ISIS's successful use of social media platforms. What it doesn't contain, however, is anyone offering support for the lawsuit's position: that social media platforms should be held directly responsible for terrorist attacks. But that's the sole purpose of this lawsuit: to make the platforms pay for a death they had nothing to do with.
There are calls from government and law enforcement officials for these platforms to "do more" contained in the lawsuit as well. But if there's anything we'll never run out of, it's government officials calling for "x non-government entity" to "do more" in response to [insert latest tragedy here].
As was pointed out earlier, Section 230 immunizes the defendants against lawsuits of this sort. And the fact that there's no direct connection between the terrorist attack and Twitter/Facebook/YouTube's actions means there's no way for Gonzalez's father to seek damages from these defendants for a terrorist attack carried out on foreign soil, as Twitter pointed out the last time it was sued for "providing material support for terrorism."
Whether or not Section 230's protections will hold up remains to be seen. This case has been filed in the Ninth Circuit, which just recently handed down a decision opening up service providers to new levels of liability if they fail to warn users about other, possibly more dangerous users. This case isn't exactly a natural fit for that bad en banc decision, but with the circuit leaning in that direction thanks to recent precedent, lower courts may be more willing to reinterpret Section 230 in ways that will make the internet worse.
It's easy to say that "hate speech" is bad and that we, as a society, shouldn't tolerate it. But reality is a lot more complicated than that, which is why we're concerned about various attempts to ban or stifle "hate speech." In the US, contrary to what many believe to be true, "hate" speech is still protected speech under the First Amendment. In Europe, that's often not the case, and hate speech bans are more common. But, as we've noted, while it seems like a no-brainer to be against hate speech, the vagueness in what counts as "hate speech" allows that term to be expanded over and over again, such that laws against hate speech are now regularly used for government censorship over the public saying things the government doesn't like.
So consider me quite concerned about the news out of the EU that the EU Commission has convinced all the big internet platform companies -- Google, Facebook, Twitter and Microsoft -- to agree to remove "hate speech" within 24 hours.
Upon receipt of a valid removal notification, the IT Companies to review such requests against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA, with dedicated teams reviewing requests.
The IT Companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.
In addition to the above, the IT Companies to educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines. The use of the notification system could be used as a tool to do this.
In other words, it sounds a lot like these companies have agreed to a DMCA-like notice-and-takedown regime for handling "hate speech." Let's be clear here: this will be abused and it will be abused widely. That's what happens when you give individuals the ability to remove content from platforms. Obviously, these companies are private companies and can set whatever policies they want on keeping up or removing content, but when they come to an agreement with the EU Commission about what they'll remove and how quickly, reasonable concerns should be raised about how this will work in practice, what definitions will be used to determine "hate speech," what kinds of appeals processes there will be and more. And none of that is particularly clear.
And, of course, very few people will raise these issues upfront because no one wants to be seen as being in favor of hate speech. And that's the real problem. It's easy to create rules for censorship by saying it's just about "hate speech," since almost no one will stand up and complain about that. But that opens up the door to all sorts of abuse -- both in how "hate speech" is defined and in how the companies will actually handle the implementation. Two major human rights groups -- EDRi and Access Now -- have already withdrawn from the EU Commission forum discussing all of this, in protest of how these rules were put together:
Today, on 31 May, European Digital Rights (EDRi) and Access Now delivered a joint statement on the EU Commission’s “EU Internet Forum”, announcing our decision not to take part in future discussions and confirming that we do not have confidence in the ill considered “code of conduct” that was agreed.
Their main concern was that the whole thing was set up directly between the EU Commission and the internet companies behind closed doors -- and when you're talking about issues that impact human rights and freedom of expression, that needs to be done openly and transparently.
In short, the “code of conduct” downgrades the law to a second-class status, behind the “leading role” of private companies that are being asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism.
I recognize why many people may cheer on this move, thinking that it's a way to stop "bad stuff" from happening online, but beware the actual consequences of setting up an opaque process with a vague standard for pressuring platforms to censor content based on notices from angry people. If you don't think this will be abused in dangerous ways, you haven't been paying attention to the last two decades on the internet.
from the 'this-will-end-the-criticism-once-and-for-all!' dept
Copyright: for when you just don't feel like being criticized. (Currently available for periods up to, and including, seventy years past your death!)
Matt Hosseinzadeh, a.k.a. "Matt Hoss," a.k.a. "Bold Guy," a.k.a. "Horny Tony," runs a moderately successful YouTube channel containing his moderately well-done videos of his "characters" performing feats of pickup artistry and parkour. It's all fairly ridiculous, but considering the depths pickup artists can plumb, the HossZone videos are actually fairly tame.
According to H3H3, it all began with a demand for the removal of the video and $3,750 in legal fees racked up so far by Hoss's lawyer. From there, it got stupider. After failing to secure instant capitulation, HossZone's lawyer altered the terms of the deal. ("Pray I don't alter it stupider...") H3H3 could avoid paying any money by apologizing via their channel for misappropriating Hoss's "art," say some nice stuff about him in their apology video, and throw additional compliments HossZone's way for a period of no less than 60 days. (I am not kidding. Watch the video above.)
H3H3 refused to do so, so Hoss has now filed a copyright infringement lawsuit against Ethan and Hila Klein. Hoss also hit H3H3 with a copyright strike, despite the fact that the video central to the complaint had been set to "private" shortly after his lawyer began issuing legal threats.
Unlike others who have sought to abuse copyright to censor critics, Hoss appears to have his end of it pretty much nailed down. He has a valid, registered copyright that predates the H3H3 reaction video, and his complaint isn't filled with vague assertions about ethereal property and even vaguer assertions about how it's been violated.
That being said, detailed allegations aren't always credible allegations. It appears that fair use is still misunderstood by a great deal of the population, including those representing plaintiffs in copyright infringement lawsuits. From the complaint:
On or about February 15, 2016, Defendants published a video on their YouTube channel that copied and displayed virtually all of Mr. Hoss’s original Work (the “Infringing Video”).
The Infringing Video features the Defendants purporting to discuss the Work in what they believe to be a humorous manner but in fact reproduces virtually all of the Work as nothing more than a prop in the Defendants’ “comedy routine.”
Contrary to what Hoss's lawyer implies here, there is nothing in caselaw that forbids the use of "virtually all" of a work under fair use. Judges and juries may be more sympathetic if you don't, but this does not automatically make a work infringing, rather than fair use.
The 13 minute h3h3 productions video in question uses about three minutes of HossZone's skit, while the rest of the video features Ethan and Hila talking about the setting, script, character development, and even the costume design used by HossZone. They also talk about random things pertaining to their life, as most vlogs of theirs do.
The original video runs 5:25, so H3H3 used a little more than half of it, but that footage only makes up about a quarter of the total reaction video runtime. Not that all this math makes much of a difference when fair use is raised as a defense, but it does serve two purposes: it illustrates that there was a great deal of commentary surrounding Hoss's content, and it appears to contradict the claims made by the plaintiff.
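The math above is easy to verify yourself. A quick sketch, using the approximate runtimes cited here (5:25 original skit, roughly three minutes reused, 13-minute reaction video):

```python
# Check the runtime comparison: how much of the skit was used,
# and how much of the reaction video that footage occupies.
skit = 5 * 60 + 25        # original HossZone skit: 5:25 -> 325 seconds
used = 3 * 60             # footage reused, per the description: ~180 seconds
reaction = 13 * 60        # h3h3 reaction video: ~13 minutes -> 780 seconds

share_of_skit = used / skit          # fraction of the original that was used
share_of_reaction = used / reaction  # fraction of the reaction video it fills

print(f"{share_of_skit:.0%} of the skit, {share_of_reaction:.0%} of the reaction video")
# prints: 55% of the skit, 23% of the reaction video
```

So "a little more than half" of the original, but under a quarter of the reaction video.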
The Infringing Video was created and published without license from Mr. Hoss in direct violation of Mr. Hoss's exclusive rights as an author pursuant to 17 U.S.C. § 106.
Fair use does not require the obtaining of a license from a copyright holder (no matter what Sony Music claims...) because that's exactly what "fair use" is: the use of copyrighted works in a non-infringing way.
The Infringing Video does nothing to alter the original Work with new expression, meaning, or message
The Infringing Video fails to contribute a single substantive comment, criticism, or even parody to or of the original Work.
These are opinions, not factual assertions. The court will determine how substantive Hoss's take on H3H3's video is, but even those standing far outside of the IP-wonk circle can plainly see these are purely subjective statements.
Aside from the fact, as described in greater detail above, that the Infringing Video does not constitute a transformative fair use, it is also the fact that the Defendants operate the Ethan and Hila YouTube channel, where they published the Infringing Video, as an entertainment channel via which the Defendants generate advertising revenues.
People make money from fair use all the time. This argument has been debunked so often, it should be ingrained in the mind of any decent IP lawyer.
What's interesting about this lawsuit is that HossZone also accuses H3H3 of filing a "false" DMCA counter notification in response to HossZone's takedown request.
On or about April 26, 2016, the Defendants submitted to YouTube a counter notification, pursuant to 17 USC § 512(g)(3), affirming under penalty of perjury that the Infringing Video was improperly removed because it was, among other reasons, a fair use and “noncommercial.”
And if Hoss's takedown, which delivered a strike to H3H3's account, is determined to be bogus, what then? Still going to go HAM on the "perjury" angle?
Hoss's lawyer seems to take particular issue with the possibility that the Kleins may have received ad revenue from their reaction video. In addition to claiming YouTube's third-party advertising makes any uploaded video a "commercial" product, the attorney claims that most of H3H3's popularity is due to Hoss's talent and inherent likability, rather than the commentary added to the video or the rest of H3H3's video productions.
Upon information and belief, the Defendants have unfairly derived profit from the Infringing Video in the form of their YouTube channel, which generates advertising revenue, increasing in popularity during the two-month period that the Infringing Video was displayed.
Upon information and belief, the Defendants’ YouTube channel more than doubled its number of subscribers due, at least in part, to the popularity generated by the Infringing Video.
The lawsuit also claims that Hoss is so charismatic his 3-minute appearance in a video mocking him somehow resulted in the Kleins being able to generate income from Patreon and Kickstarter.
All in all, it's a fairly ridiculous lawsuit which is made worse by its apparent motivation: to remove something Matt Hoss doesn't like from the internet. Even if this somehow works out for the parkouring pickup artist, the battle is already lost. A supporter of the Kleins set up a fundraiser for their legal defense, which amassed over $100,000 in under 24 hours. Meanwhile, what's left of Matt Hosszone's web presence is being savaged by dozens of angry commenters -- most of it far more brutal than anything the Kleins said during their criticism of his video.
Mitch Stoltz, over at EFF, has been writing about a ridiculous situation in which Sony Music has been using ContentID to take down fair use videos -- and then to ask for money to put them back up. As Stoltz notes, the videos in question are clearly fair use. They're videos of lectures put on by the Hudson Valley Bluegrass Association, teaching people about bluegrass music. They're hour-long lectures in a classroom setting that include snippets of music here and there as part of the lecture, with the music usually less than 30 seconds long.
HVBA’s use of clips from old bluegrass recordings is a clear fair use under copyright law. The clips are short, the purpose of the videos is educational, and the group does not earn money from its videos. Plus, no one is likely to forego buying the complete recordings simply because they heard a clip in the middle of an hour-long lecture.
Nonetheless, like so many others, HVBA had its videos disappear thanks to a ContentID match on some Sony music. Here's where things get much worse than the standard version of this story. HVBA reached out to Sony Music, asking it to release the claim, but Sony Music demanded money, calling it an "administrative" fee.
When HVBA’s webmaster emailed Sony Music to explain that the use of music clips in the lecture videos was fair use, Sony’s representative responded that the label had “a new company policy that uses such as yours be subject to a minimum $500 license fee,” and that “if you are going to upload more videos we are going to have to follow our protocol.” Sony’s representative didn’t say that she believed the video was not a fair use. Instead, she implied that even a fair use would require payment, and that Sony would keep using YouTube’s Content ID system against HVBA until they paid up.
As the EFF post notes, this highlights (yet again) what a dangerous disaster "notice and staydown" would be. It would open up the ability for shakedowns and censorship like what happened above.
Of course, once EFF publicized the story, Sony Music quickly backed down, but not everyone will be able to have their story told by EFF.
Even worse, even in backing down, Sony Music refused to concede the point, and indicated it still believed that fair use needed to be paid use.
A Sony executive emailed HVBA to say that the company “has decided to withdraw its objection to the use of its two sound recordings” and “will waive Sony Music’s administrat[ive] fee.” That sounds like Sony was simply acting out of courtesy, when in fact the company had no right to demand a fee, by any name, for an obvious fair use. Other YouTube users with less knowledge of the law may have been convinced to pay Sony $500 or more, and provide detailed information, for uses of the music that the law makes free to all.
It does make you wonder if Sony Music has been successful in charging this $500 fair use "administrative fee" to others, in a move that would be pure copyfraud.
Either way, imagine how copyright trolls would react to this kind of situation if a notice-and-staydown provision were mandated across the internet. We've already written about cases where people falsely claim copyright on works to get stuff taken down on other sites, but if there's a way to not just censor with that, but also make money, you know it's going to get widely abused. Hell, we've even had a similar situation here, where a small publication in another country (which does not have a fair use regime) sent us a letter objecting to our linking and quoting them without reaching "an agreement." Giving more power to folks like that is a recipe for widespread censorship and shakedowns.
At the recent Copyright Office roundtable on the DMCA, a representative from Fox was adamant about pushing for stronger punishment for sites that hosted infringing content. But she also made sure to respond to a point raised earlier about abusive takedowns. Someone had pointed out that in 2013, Fox had issued a bogus DMCA notice that took down a copy of Cory Doctorow's excellent book Homeland, because its robotic censors couldn't distinguish Cory's novel from its TV show of the same name. Before launching into her speech pushing for expanding copyright laws to provide more power for censorship, she wanted to "explain" what happened with Cory's book, and said that it happened because Doctorow's book "was on torrent sites" -- as if this made it okay. That leaves out the kind of important fact that Doctorow released the book under a Creative Commons license that allowed it to be shared anywhere, including torrent sites.
Yes, of course, after TorrentFreak posted about this late last week and the news started to spread, the takedown was lifted -- either by Fox or by YouTube itself -- but it again highlights the problems with these demands for automated filtering or notice-and-staydown systems. They don't work very well in many, many situations. And they create complications like this one -- and not everyone will get a site with a large following to write a story about it, getting enough attention to get the situation fixed. So many people on the copyright legacy side of things keep insisting that it's "easy" to just take down actually infringing stuff. Yet, time and time again, that's been shown to be wrong. There are lots of mistakes, and when you're talking about expression, we shouldn't tolerate systems that allow someone to automatically censor speech.