As you may remember, Viacom once sued YouTube for $1 billion over video clips on the site. Right before the case was set to start, Viacom had to scramble to remove some of the alleged infringements from its complaint, because the company realized that its own employees had uploaded the clips as part of a marketing campaign. Suing YouTube over clips that you yourself uploaded is not a good look, and it's a big part of the reason why Viacom's arguments fell flat in court. Viacom owns Paramount Pictures, and it would appear that the "level of care" the company takes in sending DMCA notices has not improved much over the years.
TorrentFreak has the latest round of ridiculously bad DMCA takedown notices coming from a major Hollywood studio. Whereas in the old days we'd see takedowns based on a single word, Paramount appears to have upgraded its auto-censorbot to use two words: anything vaguely associated with a movie, plus the word "utorrent," must automatically be wiped from the internet. Take, for example, this conversation on the uTorrent forums about how to configure CyberGhost VPN. It's all pretty innocuous, but Paramount Pictures apparently hired one of these fly-by-night censorship outfits, by the name of IP-Echelon, to take it down, because clearly any use of the word "ghost" alongside "utorrent" must be infringing -- even when "ghost" isn't even written out as a separate word.
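To see why this kind of keyword bot flags the CyberGhost thread, here's a minimal sketch of the likely failure mode (the matching logic and function names are assumptions, not IP-Echelon's actual code): a naive substring search finds "ghost" *inside* "cyberghost," while a match requiring whole words would not.

```python
# Hypothetical sketch of a two-keyword auto-censorbot's false positive.
# Substring matching flags "ghost" inside "cyberghost"; word-boundary
# matching does not. This is illustrative only, not IP-Echelon's code.
import re

def naive_match(title: str, keywords: list[str]) -> bool:
    """Flag a page if every keyword appears anywhere as a substring."""
    t = title.lower()
    return all(k in t for k in keywords)

def word_boundary_match(title: str, keywords: list[str]) -> bool:
    """Flag a page only if every keyword appears as a whole word."""
    t = title.lower()
    return all(re.search(rf"\b{re.escape(k)}\b", t) for k in keywords)

title = "How to configure CyberGhost VPN - uTorrent Community Forums"
print(naive_match(title, ["ghost", "utorrent"]))          # True: false positive
print(word_boundary_match(title, ["ghost", "utorrent"]))  # False
```

Even the stricter version would still flag plenty of legitimate discussion, of course; the point is that the sloppier version can't even get "ghost" right.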
The TorrentFreak article has a number of similar situations, including one where someone said "imagine that" in a comment, and another where someone used the word "clueless" and Paramount/IP-Echelon insisted they were linking to infringing copies of the movies "Imagine That" and "Clueless." But that's clueless.
And, yes, it's certain that many of the other links in these notices were to actually infringing files. But just because you legitimately take down some links, it doesn't excuse trying to censor perfectly legitimate content.
As of late, Nintendo's relationship with YouTube and the YouTube community has been, shall we say, tumultuous. After rolling out a bad revenue-sharing policy that asks YouTube personalities to torpedo their reputations by promising only positive Nintendo coverage, claiming the monetization on a large number of "let's play" videos uploaded by independent YouTubers, and even going so far as to lay claim to a review of a Nintendo game created by well-known YouTuber "Angry Joe," Nintendo clearly seems to believe that YouTube is not so much an independent community as some kind of official public relations wing for the company. This is really dumb on many different levels, but chiefly it's dumb because it breeds ill will amongst fans, of which Nintendo used to have many.
And the war drum beats on, apparently, as Nintendo has seen fit to issue massive takedowns of videos of fan-created Mario Bros. levels just as the company releases its own Mario level builder, Super Mario Maker. What appears to put these YouTubers in Nintendo's crosshairs is their use of emulators or hacks to make these levels.
Nintendo is targeting speedrunners and modders in a new round of YouTube copyright claims, issuing takedown requests to users who post footage from modified Super Mario World levels. The mass deletion coincides with the upcoming launch of Super Mario Maker, a Nintendo-licensed level creation toolkit for the Wii U console. Removed videos feature unauthorized Super Mario World levels created using freeware tools, rather than Nintendo’s official level design software.
Nintendo’s recent copyright claims impact speedrunners who have spent years crafting and documenting unsanctioned Super Mario World mods. According to a Kotaku report, YouTube user “PangaeaPanga” states that their channel was “wrecked” by copyright claims, resulting in the permanent removal of many popular videos.
In other words, modders had long beaten Nintendo to the punch in creating software that allowed fans of Mario Bros. to create their own levels, upload them, and have folks like PangaeaPanga play them and eventually master them. This was allowed to go on right up until Nintendo decided to jump into the arena, at which point the takedowns ensued. What you may not know is that there has been an active Mario Bros. modding community for years, dedicated to building the most challenging levels for others to play and then posting their runs on YouTube. In other words, these are huge Nintendo fans.
Super Mario World enthusiasts frequently create custom levels designed to challenge veteran players. Many of these levels require the use of little-known glitches and quirks within Super Mario World‘s engine, adding a degree of difficulty not present in the original game. Creative application of Super Mario World‘s hacking utilities has also produced unique autoplaying levels, including tributes that link in-game sound effects to backing music tracks.
Under the terms of YouTube’s copyright structure, users who have their videos claimed by copyright owners lose the ability to earn advertising revenue from their creations. Copyright holders have the option of claiming ad revenue from content-matched videos. As part of its most recent round of copyright claims, Nintendo instead opted to delete targeted videos entirely.
So we have Nintendo staring lovingly into the eyes of its biggest fans while pissing on their legs. And for what? Part of the reason Nintendo will likely make a killing with Super Mario Maker is that these dedicated fans built up interest in modded levels and speedruns in the first place. Now Nintendo intends to swoop in, kill off the fans' videos, and still cash in on the market those fans essentially created? How charming.
It's not that Nintendo can't do this, it's that it shouldn't. The company gains nothing except another round of fan discontent. Real smart, guys.
Some potentially good news this morning -- which may be undermined by the fine print. After many years of back and forth, the 9th Circuit appeals court has ruled that Universal Music may have violated the DMCA in not taking fair use into account before issuing a DMCA takedown request on a now famous YouTube video of Stephanie Lenz's infant dancing to less than 30 seconds of a Prince song playing in the background. Because of this, there can now be a trial over whether or not Universal actually had a good faith belief that the video was not fair use.
This case has been going on forever, and if you've watched the video, it's kind of amazing that a key case on fair use should be focused on that particular video, where you can barely even make out the music. The key question was whether or not Universal abused the DMCA in not first considering fair use before sending the takedown. This is fairly important, because, of course, DMCA takedowns suppress speech and if fair use is supposed to be the "pressure valve" that stops copyright from violating the First Amendment, it has to actually mean something. Section 512(f) of the DMCA says that the filer of a DMCA notice may be liable for damages for "misrepresentations," but historically that has been an almost entirely toothless part of the law (in part because of earlier rulings in the Lenz case). People hoped that would change with this ruling, and while the beginning of the ruling suggests 512(f) is getting teeth, the end yanks them all away.
The 9th Circuit's ruling starts out great, but gets iffy pretty fast.
Her claim boils down to
a question of whether copyright holders have been abusing
the extrajudicial takedown procedures provided for in the
DMCA by declining to first evaluate whether the content
qualifies as fair use. We hold that the statute requires
copyright holders to consider fair use before sending a
takedown notification, and that failure to do so raises a triable
issue as to whether the copyright holder formed a subjective
good faith belief that the use was not authorized by law.
Sounds good, right? Anyone sending a DMCA notice needs to take fair use into account before sending a takedown. That may be trouble for all of those automated takedown filing systems out there, many of which we've written about. The court also reiterates that fair use is not "allowed infringement," but rather not infringement at all. This is important too (even though the law says so directly, many people pretend fair use is just an "allowed" infringement). The court is not impressed by Universal Music's defense, in which it argues that fair use is "not authorized by law" because, as Universal falsely claims, it is merely a "defense" to infringement. The court says that's wrong:
interpretation is incorrect as it conflates two different
concepts: an affirmative defense that is labeled as such due to
the procedural posture of the case, and an affirmative defense
that excuses impermissible conduct. Supreme Court
precedent squarely supports the conclusion that fair use does
not fall into the latter camp: “[A]nyone who . . . makes a fair
use of the work is not an infringer of the copyright with
respect to such use.”
So, that's all good. But... the details matter, and from that point on... they're weird. The court points to the earlier ruling, saying that the copyright holder "need only form a subjective good faith belief that a use is not authorized." Thus, as long as the issuer can come up with some sort of argument for why they didn't think it was fair use, they're probably safe.
As a result, Lenz’s request to impose a subjective
standard only with respect to factual beliefs and an objective
standard with respect to legal determinations is untenable.
And because of that, the court leaves a big out for just about any copyright holder. It says the court has no place in questioning how the copyright holder decided whether the use was authorized or not:
To be clear, if a copyright holder ignores or neglects our
unequivocal holding that it must consider fair use before
sending a takedown notification, it is liable for damages
under § 512(f). If, however, a copyright holder forms a
subjective good faith belief the allegedly infringing material
does not constitute fair use, we are in no position to dispute
the copyright holder’s belief even if we would have reached
the opposite conclusion.
The court says a copyright holder can't just "pay lip service" to the idea that it checked on fair use, but in the same paragraph admits that, well, it basically can. Even worse, it says that forming a "good faith belief" doesn't require actually investigating the details:
In order to comply with the strictures of
§ 512(c)(3)(A)(v), a copyright holder’s consideration of fair
use need not be searching or intensive. We follow Rossi’s
guidance that formation of a subjective good faith belief does
not require investigation of the allegedly infringing content.
So.... huh? (1) You need to take fair use into account and show a "good faith belief" that it's not fair use, but... (2) you don't actually have to investigate anything, and the court cannot review your reasons for having that belief. That's not a loophole. It's a black hole that collapses 512(f) in on itself.
From there, it actually notes that automated takedowns... may be fine:
We note, without passing judgment, that the
implementation of computer algorithms appears to be a valid
and good faith middle ground for processing a plethora of
content while still meeting the DMCA’s requirements to
somehow consider fair use. Cf. Hotfile, 2013 WL 6336286,
at *47 (“The Court . . . is unaware of any decision to date that
actually addressed the need for human review, and the statute
does not specify how belief of infringement may be formed
or what knowledge may be chargeable to the notifying
entity.”). For example, consideration of fair use may be
sufficient if copyright holders utilize computer programs that
automatically identify for takedown notifications content
where: “(1) the video track matches the video track of a
copyrighted work submitted by a content owner; (2) the audio
track matches the audio track of that same copyrighted work;
and (3) nearly the entirety . . . is comprised of a single copyrighted work.”
So, uh, what? Automated takedowns may be fine because that's sort of a way to consider fair use because... no reason given. That is not at all helpful.
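For what it's worth, the three-part test the court sketches is trivially mechanical. Here's a minimal illustration of it in code, where the boolean match flags and the 90% "nearly the entirety" threshold are my assumptions standing in for a real fingerprinting system (the opinion gives no number):

```python
# A sketch of the court's three criteria for an automated takedown that
# "considers" fair use. The match inputs and the 0.9 threshold for
# "nearly the entirety" are illustrative assumptions, not from the opinion.
def qualifies_for_takedown(video_match: bool,
                           audio_match: bool,
                           matched_fraction: float,
                           threshold: float = 0.9) -> bool:
    """(1) video track matches, (2) audio track matches, and
    (3) nearly the entire upload consists of a single work."""
    return video_match and audio_match and matched_fraction >= threshold

# The Lenz video: home footage with a song faintly in the background,
# so there is no video match and only a fraction of the clip is the work.
print(qualifies_for_takedown(video_match=False, audio_match=True,
                             matched_fraction=0.2))  # False
```

Note what the test actually measures: how much of the upload matches a fingerprint. It never weighs purpose, transformation, or market effect, which is exactly why calling it "consideration of fair use" rings hollow.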
On a separate note, the court rules that the case cannot move forward on the theory that Universal was "willfully blind" to the likelihood of fair use, because Lenz didn't actually show such willful blindness. So that's another dead end.
Finally, the court rejected Universal Music's claim that Lenz had to show monetary damages in order to recover damages under 512(f). The court says 512(f) spans more than just monetary damages. Of course, that's almost entirely meaningless in a world in which everyone has an out through "subjective good faith" that doesn't even require investigating anything.
So this is a ruling that looks good up top, but gets bad as you read the details. There is a dissent, from Judge Milan Smith, pointing out some of the problems with the majority ruling, and the loophole that it creates. As the dissent notes, stating that something is infringing when you haven't done any fair use analysis is a misrepresentation, and 512(f) covers misrepresentations. So, in the end, a possibly important ruling is undermined with a massive loophole, which likely will lead to a continuing barrage of DMCA takedowns, including automated takedowns that suppress speech. That seems... wrong.
A few weeks ago, Brian Krebs published a fantastic article entitled "How Not to Start an Encryption Company," which detailed the rather questionable claims of a company called Secure Channels Inc (SCI). The post is long and detailed and suggests strongly that (1) SCI was selling snake oil pretending to be an "unbreakable" security solution and (2) that its top execs had pretty thin skins (and in the case of the CEO, a criminal record for running an investment Ponzi scheme). The company also set up a bullshit "unwinnable" hacking challenge, and then openly mocked people who criticized it.
Now enter Asher Langton, who has an uncanny ability to spot all sorts of scams (he was the one who initially tipped me off to the Walter O'Brien scam, for example). He seems to especially excel at calling out bullshit security products and companies. He's spent the past few weeks tweeting up a storm showing just how bogus Secure Channels is -- including revealing that they're just rebranding someone else's free app. He also noted that the company appeared to be (not very subtly) astroturfing its own reviews, noting that the reviews came from execs at the company:
So, uh, how did SCI respond? Let's just say not well. As detailed by Adam Steinbaugh at Popehat, a bunch of anonymous Twitter accounts magically appeared attempting to attack Langton, claiming that he was violating various computer crime and copyright laws. The accounts ridiculously argued that by posting screenshots of Secure Channels' source code, he was violating various statutes, including copyright law. This is wrong. Very wrong. Laughably wrong. In one of the screenshots posted by one of these "anonymous" accounts, other browser tabs were left visible -- and you'll notice the other two tabs.
You'll note Asher's tweet, but also a primer on "computer crime laws" and a "how to take screenshots" tab (apparently it didn't include a lesson on cropping). Oh, but more importantly, this tweet from a supposedly anonymous Twitter user also showed that the person taking the screenshot was logged in from a different account -- one that just happens to belong to SCI's director of marketing, Deirdre Murphy. It even uses the same photo.
This same Deirdre Murphy, back in Krebs' original article, used Twitter to attack another well recognized security expert who had been mocking SCI's claims:
James said he let it go when SCI refused to talk seriously about sharing its cryptography solution, only to hear again this past weekend from SCI’s director of marketing Deirdre “Dee” Murphy on Twitter that his dismissal of their challenge proved he was “obsolete.” Murphy later deleted the tweets, but some of them are saved here.
Right. It's entirely possible that Murphy is not behind the anonymous accounts, but she's pretty clearly connected to the screenshots that showed up on those anonymous accounts -- so even if it's not her directly... it seems likely that she's associated with whoever is doing the posting.
Oh, and then it gets worse. Right about the time Steinbaugh's article was published, someone claiming to be Secure Channels' CEO Richard Blech sent Twitter a DMCA notice over some of Langton's tweets -- and Twitter took them down:
Twitter did this despite the fact that the DMCA claim itself was pretty clearly invalid. As summarized by Steinbaugh:
About an hour and a half after this post went live, SecureChannels CEO Richard Blech (or someone claiming to be him) sent a DMCA notice to Twitter for two of Langton's tweets, complaining that they consisted of "employee pics, company and personnel, posts copyright material, hacks products and posts copyright code from products, using trademarks, targeted harassment, slander to destroy commerce." As for the description of the "original work," Blech blathered: "Cracked an app and placed code online, uses trademarked logos to attack company."
This is a censorious abuse of copyright law to suppress criticism. It is, in essence, an attempt to use copyright law for everything except copyright. That Secure Channels would invoke copyright law to silence criticism on the basis that its trademarks are being used and because of "slander" is, well, hysterical. This is not a company interested in permitting people to criticize it.
A little while ago, I tweeted about how ridiculous it was that Twitter's legal team would act on such an obviously bogus takedown notice, and within 10 minutes, someone on Twitter's legal team told me that the notice had been reviewed and the posts had been restored.
Either way, for a company bragging that its "security" solution is "unhackable" -- you'd think the company would be more open to actual criticism. Instead, it seems to spend an inordinate amount of time attacking critics and abusing the law to try to silence them. Odd.
Danny O'Brien, over at the EFF's Deeplinks blog, has the story of how it appears China is pressuring the developers of tools for circumventing the Great Firewall of China to shut down their repositories and no longer offer the code. Two separate, non-commercial, developers of circumvention tools have quietly gone dark recently:
The maintainer of GoAgent, one of China's more popular censorship circumvention tools, emptied out the project's main source code repositories on Tuesday. Phus Lu, the developer, changed the repository’s description to “Everything that has a beginning has an end”. Phus Lu’s Twitter account's history was also deleted, except for a single tweet that linked to a Chinese translation of Alexander Solzhenitsyn’s “Live Not By Lies”. That essay was originally published in 1974 on the day of the Russian dissident’s arrest for treason.
We can guess what caused Phus Lu to erase over four years’ work on an extremely popular program from the brief comments of another Chinese anti-censorship programmer, Clowwindy. Clowwindy was the chief developer of ShadowSocks, another tool that circumvented the Great Firewall of China by creating an encrypted tunnel between a simple server and a portable client. Clowwindy also deleted his or her Github repositories last week. In a comment on the now empty Github archive Clowwindy wrote in English:
Two days ago the police came to me and wanted me to stop working on this. Today they asked me to delete all the code from Github. I have no choice but to obey.
The author deleted that comment, too, shortly afterward.
As you may recall, back in March, China launched a massive DDoS attack on Github, targeting another tool for getting around the Great Firewall, called Greatfire. It seems equally notable that in the last week, there was another big DDoS attempt on Github.
While it may not be surprising at all that China is looking to stop tools that allow people to get past the censorship wall that the Chinese government itself has created, it still is worrisome:
Chinese law has long forbidden the selling of telecommunication services that bypass the Great Firewall of China, as well as the creation or distribution of “harmful information”. Until recently, however, the authorities have not targeted the authors of non-commercial circumvention software, nor its users. Human Rights in China, a Chinese rights advocacy and research organization, told EFF that, based on its preliminary review, VPNs and circumvention software is not specifically prohibited under Chinese law. While the state interferes with people's ability to use such software, it has not outlawed the software itself.
In November, Phus Lu wrote a public declaration to clarify this point. In the statement, he said that he had received no money to develop GoAgent, provided no circumvention service, and asserted no political view.
As O'Brien notes, this is a reminder that code is speech -- and government intimidation to shut down code is a form of repressing speech. Though, as with many attempts to censor, it seems like this is more for show than actual impact:
It’s also ultimately futile: while the Chinese authorities have chosen to target and disrupt two centralised stores of code, thousands of forked copies of the same software exist—both on other accounts on Github and in private copies around the Net. ShadowSocks and GoAgent represent hours of creative work for their authors, but the principle behind them is reproducible by many other coders. The Great Firewall may be growing more sophisticated in detecting and blocking new circumvention systems, but even as it does so, so new code blossoms.
Meanwhile the intimidation of programmers remains a violation of the human rights of the coder—and a blow to the rights of everyone who relies on their creativity to exercise their own rights.
A few weeks ago we noted that it appeared that Facebook was building its own ContentID system to try to take down videos copied from elsewhere... and voila, here it is. Facebook has now announced its new system, which is powered by Audible Magic -- the same company that powers every other such system that isn't Google's ContentID. Audible Magic is the "default"; it's basically the "buying IBM" of content/copyright filtering. And it tends to be pretty bad. Facebook notes that its videos are already run through Audible Magic, and that has basically done nothing. So it's "working with Audible Magic to enhance the way the system works."
We'll see what that means in practice, but I expect there will be plenty of false positives and complaints about people's perfectly legitimate videos getting taken down. But, that's what happens when you live in a world where people censor first and ask questions later. Even worse, it appears that some of the new tools will only be available to a special class of Facebook users:
To this end, we have been building new video matching technology that will be available to a subset of creators. This technology is tailored to our platform, and will allow these creators to identify matches of their videos on Facebook across Pages, profiles, groups, and geographies. Our matching tool will evaluate millions of video uploads quickly and accurately, and when matches are surfaced, publishers will be able to report them to us for removal.
We will soon begin testing the beta version of this matching technology with a small group of partners, including media companies, multi-channel networks and individual video creators.
It's clear why Facebook is doing this, but it seems that following Google down this path is a pretty weak solution, rather than building something better, that doesn't take a "censor first" approach to things.
We've written a few times now about how the parent company of Ashley Madison, Avid Life Media, has been committing perjury and issuing completely bogus copyright demands to try to hide the information that was leaked after its servers got hacked. Last month, that tactic (despite not complying with the law) apparently worked briefly, until the full data dump happened last week. But that hasn't stopped the company from continuing to try. EFF wrote a long blog post detailing how this was a clear abuse of the law, but Avid Life Media doesn't seem to care.
After the leak came out, a few sites sprang up quickly to help people search the database. Whether or not you think it's appropriate to set up such a site (or to use it) is a separate issue, but what hopefully everyone can agree on is that such a site should not be taken down for copyright reasons. There were two main sites that got the bulk of attention for setting up such a database; one has already shut down and the other has received a takedown demand (though not a copyright one). I won't link to either site, but here's what's now posted on one of the sites:
Meanwhile, the creator of the other main search engine has said on Twitter that he, too, has been hit with "a vexatious DMCA from lawyers acting on behalf of Avid Life Media," and reporters have similarly, and mistakenly, called it a DMCA notice. But according to the copy he posted to Pastebin, the letter sent by Avid Life Media's lawyers at giant law firm DLA Piper to CloudFlare is not actually a DMCA notice, but rather a weird "please take this down because... vague reasons and terms of service violations." That is, there's no real legal threat (because there's no basis for one). It's just vaguely threatening, hoping to scare people off:
Our firm is counsel to Avid Life Media, Inc. (“ALM”) with respect to its intellectual property and data privacy matters. As you may know, ALM is the parent company of the online dating and social networking service Ashley Madison. Because users entrust ALM with highly sensitive and intimate details (collectively the “Ashley Madison User Data”), the privacy of ALM’s users is of utmost importance. As a result, ALM proactively and arduously regulates any authorized (and unauthorized) use of Ashley Madison User Data.
This letter is to inform CloudFlare, Inc., and all related entities (collectively, “You”) that, upon information and belief, CloudFlare, Inc.’s client (“Your Client”), has posted a searchable database of the Ashley Madison User Data to a website hosted on a domain name hosted by You. Specifically, Your Client has posted the Ashley Madison User Data at the following URL: https://ashley.cynic.al/ (the “URL”). Your Client’s publication of the Ashley Madison User Data may constitute illegal disclosure of private personal information, and potentially expose millions of individuals around the world to identity theft.
Please note that this letter is made without prejudice to any other rights or remedies that may be available to ALM. Nothing contained herein should be deemed a waiver, admission, or license by ALM, and ALM expressly reserves the right to assert any other factual or legal positions as additional facts come to light or as the circumstances warrant.
CloudFlare, in response, told the guy that it had forwarded the name of the actual hosting provider (a non-US company) to the lawyers at DLA Piper, and at last check, the guy claims that his hosting company, ColoCall out of Ukraine, has not done anything about it. That may change, but it's not clear what legal basis ALM has for the demand. It's nice to see that ALM is no longer making totally bullshit copyright claims, but these weird "privacy and personal data rights" claims don't have much legal basis either.
We write frequently about those who abuse the DMCA either directly for the sake of censorship or, more commonly, because some are in such a rush to take down anything and everything that they don't bother (or care) to check to see if what they're taking down is actually infringing. The latter, while common, could potentially expose those issuing the takedowns to serious legal liability, though the courts are still figuring out to what extent.
Last week, we wrote about Boston public television station WGBH issuing a bogus takedown on some public domain (government created) video that Carl Malamud had uploaded to YouTube. That doesn't look like an automated takedown, but rather the work of someone on WGBH's legal team who just decided that anything with "American Experience" in the title must be infringing. Malamud has now published the letter that he sent YouTube about the whole situation. It includes some more details concerning the insulting manner in which WGBH's legal team, Susan Kantrowitz and Eric Brass, handled the situation, including Brass telling Malamud that this wasn't a big deal because deleting this "particular film" was not that important.
Meanwhile, I finally reached the WGBH legal department. Susan L. Kantrowitz, General Counsel, wrote to me that “It is highly unusual for Amex to be in a title and not be one of our shows” and they would “address it on Monday.” Eric Brass, Corporate Counsel, wrote that “the take down request very well may have been an error, but given that it is late on a Friday afternoon in August, I may not be able to get back to you (or YouTube) until Monday.” He then wrote me back and indicated that while perhaps my YouTube account was important, this “particular film” was certainly not. I spoke to him on the phone and he repeated that no harm had been done, and that after he completed his investigation he would “follow up with something in writing that might be helpful for you if a question arises down the road about the take down.”
I want to stress that the timing of this takedown was not mine, it was instigated by WGBH and it was done deliberately as a formal legal action. Mr. Brass seemed quite peeved that I was upset, even though I was just minding my own business on the Internet when some hooligans from Boston came over and smacked me for no reason at all, then left for a weekend at the Cape.
The process of creating a copyright strike is not a casual one. WGBH had to go through several screens to identify the video, fill out their contact information, and check numerous boxes indicating that they understood this was the beginning of a legal process, then sign a statement indicating that all statements were true and that they were in fact the true and correct owners of that film or portions of that film. In order to respond to that legal accusation, I had to go through a similar process of swearing under oath and accepting a court’s jurisdiction for my counter-claim.
Because of all of this, Malamud has suggested that YouTube institute a similar reverse three strikes policy for those who abuse the DMCA takedown process:
I believe that incorrectly posting a video that is under copyright is in fact worthy of a copyright strike. However, I think the opposite of that should be true. WGBH committed a copyright foul and should be prohibited from having the capability to take another user’s films down for a six-month period. If they commit 3 copyright fouls, their account should be revoked. WGBH personnel should be required to go to copyright school so that they fully understand their responsibilities under the law.
Given the blithe and uncaring attitude of WGBH's legal staff, they should also be required to undergo copyright school. Their blase attitude was not impressive, and I can just imagine that WGBH's reaction, if somebody had improperly taken down one of its media properties, would not have been nearly so casual.
The idea of a reverse three strikes policy is not a new one. We first wrote about it back in 2008. Unfortunately, under the current wording of the DMCA, it would be very difficult to do it properly, but it does seem worth considering, given just how frequently such a power is abused.
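Mechanically, Malamud's proposal is simple enough to sketch. Here's a minimal illustration of such a "reverse strike" ledger, assuming hypothetical policy parameters (a roughly six-month suspension per foul, revocation at three fouls, per his suggestion):

```python
# A minimal sketch of a "reverse three strikes" ledger for DMCA filers.
# The class, parameters, and dates are illustrative assumptions; YouTube
# has no such system.
from datetime import datetime, timedelta

SUSPENSION = timedelta(days=182)  # roughly six months per copyright foul
REVOKE_AT = 3                     # fouls before the account loses the privilege

class TakedownPrivileges:
    def __init__(self):
        self.fouls = []              # timestamps of bogus takedowns
        self.suspended_until = None  # end of the current suspension, if any

    def record_foul(self, when: datetime):
        """Log a bogus takedown and start a six-month suspension."""
        self.fouls.append(when)
        self.suspended_until = when + SUSPENSION

    def may_file_takedown(self, now: datetime) -> bool:
        if len(self.fouls) >= REVOKE_AT:
            return False  # three fouls: privilege revoked
        if self.suspended_until and now < self.suspended_until:
            return False  # still serving a suspension
        return True

filer = TakedownPrivileges()
filer.record_foul(datetime(2015, 8, 21))                # one bogus takedown
print(filer.may_file_takedown(datetime(2015, 9, 1)))    # False: suspended
print(filer.may_file_takedown(datetime(2016, 3, 1)))    # True: suspension over
```

The asymmetry the sketch highlights is the real point: uploaders already live under exactly this kind of strike accounting, while filers face nothing comparable.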
It's amazing the kind of trouble that Carl Malamud ends up in thanks to people not understanding copyright law. The latest is that he was alerted to the fact that YouTube had taken down a video that he had uploaded, due to a copyright claim from WGBH, a public television station in Boston. The video had nothing to do with WGBH at all. It's called "Energy -- The American Experience" and was created by the US Dept. of Energy in 1974 and is quite clearly in the public domain as a government creation (and in case you're doubting it, the federal government itself lists the video as "cleared for TV."
WGBH, on the other hand, has nothing whatsoever to do with that video. It appears that some clueless individual at WGBH went hunting for any videos having to do with the PBS show WGBH produces, called American Experience and just assumed that based on the title, the public domain video that Malamud uploaded, was infringing. Because that's the level of "investigation" that apparently the censorious folks at WGBH do when looking to issue takedown notices.
Malamud reached out to WGBH and apparently the folks there were most unhelpful. The station's general counsel refused to apologize and simply told Carl that since "American Experience" was "unusual" to be in the title, it was okay for them to issue a bogus DMCA notice. Another lawyer , Eric Brass, told Malamud that they wouldn't be able to do anything about it until next week.
Thankfully, someone at YouTube found out about all of this and restored the video so you can watch it:
While some may argue this is no big deal because by making noise about this, Malamud was able to get the video reinstated, that's ridiculous. WGBH is a public television station that claims in its mission statement that its "commitments" include:
Foster an informed and active citizenry
Make knowledge and the creative life of the arts, sciences, and humanities available to the widest possible public
Improve, for all people, access to public media
I'm curious how issuing bogus copyright takedowns on public domain material matches with any of those "commitments." Hell, why is such a public television station worried about so-called "copyright infringement" in the first place?
And, as Malamud notes, this little "accident" wasted the time of a bunch of people, and put his own YouTube channel at risk, since it initially counted as a "strike" against him. WGBH owes Malamud not just an apology, but an explanation for why this happened and what the station will do to prevent it from happening again.
I was going to start off this post by noting that, over the weekend, Andy at TorrentFreak had the story of how Columbia Pictures appears to have hired the "worst anti-piracy group" around to issue DMCA takedowns, but that's wrong. This kind of thing is all too common. Columbia Pictures appears to have hired basically your standard clueless "anti-piracy" group, and it's resulted in a DMCA takedown letter that took down basically every video on Vimeo with the word "Pixels" in the title, all because of Columbia's mega flop Pixels, an Adam Sandler film that is being called "one of the worst movies of the year."
The DMCA notice, sent by Entura International on behalf of Columbia Pictures, is so bad that whoever the genius was at Entura who put it together even notes in the "description" the full names of the videos it's taking down -- which should have been an indication that perhaps these were not the same videos as the Adam Sandler film. One of them is even clearly labeled as "the official trailer" of the Adam Sandler film.
Also, as some have noted, this takedown effort includes the critically acclaimed short film that inspired the Adam Sandler film. Columbia Pictures bought the rights to Patrick Jean's video in order to make its own film, but those "rights" did not include being able to DMCA the original. The notice also took down completely unrelated projects, such as one created by a Cypriot filmmaker for a non-profit NGO, a Hungarian student's final project for his degree, and a personal project involving Pantone paint swatches.
The TorrentFreak article notes that the NGO, named NeMe, has protested the takedown, pointing out that this is ridiculous and asking for help -- only to have Vimeo staff say that the only way to deal with it is to file a counternotice:
And while that is the official process, counternotices often scare people off, because if the other side disagrees, the next step in the DMCA is for the other side to file a lawsuit -- and many don't even want to take that chance.
And, yes, obviously, much of the blame for this ridiculous set of circumstances should fall on Entura International for being terrible at its own job and issuing bogus takedowns. And some of the blame should fall on Columbia Pictures for hiring Entura -- a company that clearly has no business sending out DMCA takedowns. But, also, much of it should fall on Vimeo for simply giving in and accepting the obviously bogus takedown requests. Just recently, we noted that Automattic (the company that makes WordPress) had published in its transparency report that it had rejected 43% of the DMCA takedown notices it had received -- and we suggested other companies start paying attention. Google is also known for rejecting bad DMCA takedowns.
However, it appears that Vimeo doesn't bother. Send a takedown, no matter how ridiculous, and apparently the company will comply and take the content down -- and if you complain to support staff, the company tells you that you need to go through the legal process of sending a counternotice, rather than reevaluating its own faulty review process. Of course, if the story of bogus takedowns gets enough press attention, then Vimeo might act, asking Entura for an explanation that leads the company to withdraw the takedowns, and then try to wait out the ridicule. But, really, that's ridiculous. Vimeo should be standing up for its users' rights, and it did not. Vimeo failed.
Yes, we can argue that it's ridiculous the way the DMCA safe harbor process creates incentives for Vimeo to do exactly what it did here (in that it grants full liability protection for taking down any work upon receipt of a valid notice), but more and more companies are at least doing cursory reviews. Vimeo has clearly chosen not to, which, at the very least, should raise questions among users about whether it's the right platform for them, when the company doesn't seem even remotely interested in making sure its users' works are protected against bogus takedowns.