So, Vice's Motherboard has an amusing article about how the misleadingly named GuitarHeroFailure (misleading, because the guy's actually good at the game) tried to get around YouTube ContentID takedowns on one of his Guitar Hero videos (of Ozzy Osbourne's "Bark at the Moon") by singing an a cappella version of the song over it. The overall effect is really quite amazing. Watch the video (and don't miss his, um, "variation" at the very end) below:
The guy claims, in a separate video, that he wasn't really trying to comment on copyright law (he actually says "YouTube's copyright laws," which, you know, aren't actually a thing). But, no matter what, it is commenting on it. He notes that he was really proud of how well he did in that particular game, and was disappointed that it got taken down by YouTube.
But, even if he didn't mean for it to be a comment on copyright law, it absolutely is. But here's the craziest part. It's likely that his new video also violates copyright law. Because, remember, when it comes to music licensing in particular, copyright law is insane. There are multiple licenses that you need. There's one for the sound recording -- and in this case, he doesn't have to deal with that one. But, if you're doing a cover song, you need a mechanical license for the composition of the song. And then, the fact that it's been put on a video raises a whole separate issue, which is the need for a totally different license called a synch license, for when you use a composition with a video.
Of course, YouTubers rarely (i.e., basically never) get such licenses at all, and it's mostly ignored by everyone. But that doesn't mean it will always be. And, again, that highlights the absolute insanity of music licensing these days. People are doing stuff that clearly is not taking away anything from the market for the original music, but because of the messy, patchwork setup of copyright laws and music licensing, it's almost impossible to be fully compliant no matter what you do.
And don't even get me started on the copyright questions raised by this other video in which someone took GuitarHeroFailure's a cappella and synched it to the original Ozzy song. Because, really, there aren't enough hours in the day to analyze that mess...
There's an old saying that those who lie down with dogs will get back up with fleas. One modern derivative of that maxim might be: if you bend over backwards for copyright censors, you will become a censor yourself. No better example of this can be found than YouTube's ContentID system, the automated platform that scours YouTube videos looking for uploads of identical audio or video content and proactively takes them down in favor of the original content owner. That's how it works in theory, anyway. In practice, ContentID appears to be most useful in taking down fair use content, trolling legitimate creators, and even silencing political speech, supposedly the most revered thing in this great Republic of ours. It's typical in these cases for the automation to be blamed, but that's a mistake. The real blame lies with Google for implementing such a flawed system, with the entertainment industry and trolls for abusing it, and with all of us for simply accepting it. Everyone, in other words, is to blame.
I came to that conclusion recently, when Hugh Hefner used Mario Bros. to show me how silly all of this is. The whole thing started when a Kotaku writer uploaded a video of some Mario Maker levels that play themselves.
Two days ago, I uploaded a video to YouTube. It featured some awesome automatic Mario Maker levels that basically play themselves. Today, I was dinged with a copyright notice for that same video. The claimant was none other than...Playboy? I’m serious. I didn’t get flagged by Nintendo. Rather, I got flagged by Hugh Hefner’s operation.
Playboy, obviously, does not own Mario. It did not create Mario Maker. It did not build the level on display in my video. And yet my video was still flagged. What gives?
What gave was that Playboy had uploaded a video featuring one of the same levels shown in the Kotaku video. Because these levels play by themselves, rather than being played by a human, the videos have the exact same content. So, faster than a Mario Brother running with the 'B' button mashed down, ContentID flagged Kotaku's video as infringing and sent out a notice. Other users likewise had videos of that Mario Maker level flagged in favor of Playboy, which I am very much certain doesn't own any of the IP surrounding Nintendo's centerpiece franchise. Most, like the Kotaku writer, submitted disputes which were resolved quickly. Playboy, for its part, has been active in getting all of the claims dismissed...
...which is entirely beside the goddamn point. ContentID was dinging uploaders for copyright violations in an automated fashion, with no checks, on content owned by an unrelated party. That doesn't make any sense. And, in some cases, there can be actual harm done.
When you get flagged, the claimant has a whole 30 days to review your dispute, during which your video typically stays up while also making money for the claimant. Sometimes, the claimant will even be able to block the video from being viewed entirely. Even if the dispute gets dismissed, it might mean waiting days, if not an entire month, for the motion to actually get through. In the meantime, any YouTuber who supports themselves with ads and just wanted to show off the level to their subscribers, or perhaps added some good commentary to the footage, will lose revenue (as well as gain an unnecessary headache).
Personally, I'd love to see "An unnecessary headache" as the epitaph on ContentID's gravestone.
A few weeks ago we noted that it appeared that Facebook was building its own ContentID system to try to take down videos copied from elsewhere... and voila, here it is. Facebook has now announced its new system, which is powered by Audible Magic -- the same company that powers every other such system that is not Google's ContentID. Audible Magic is the "default." It's basically the "buying IBM" of content/copyright filtering. And it tends to be pretty bad. Facebook notes that its videos are already run through Audible Magic and that it has basically done nothing. So they're "working with Audible Magic to enhance the way the system works."
We'll see what that means in practice, but I expect there will be plenty of false positives and complaints about people's perfectly legitimate videos getting taken down. But, that's what happens when you live in a world where people censor first and ask questions later. Even worse, it appears that some of the new tools will only be available to a special class of Facebook users:
To this end, we have been building new video matching technology that will be available to a subset of creators. This technology is tailored to our platform, and will allow these creators to identify matches of their videos on Facebook across Pages, profiles, groups, and geographies. Our matching tool will evaluate millions of video uploads quickly and accurately, and when matches are surfaced, publishers will be able to report them to us for removal.
We will soon begin testing the beta version of this matching technology with a small group of partners, including media companies, multi-channel networks and individual video creators.
It's clear why Facebook is doing this, but following Google down this path seems like a pretty weak solution compared to building something better that doesn't take a "censor first" approach to things.
Earlier today Techdirt writer Tim Geigner pointed me to a YouTube video that used Twitter user names to create a punnish version of the 80s hit "Tainted Love" retitled Tweeted Love. It's pretty amusing:
In checking out the YouTube account of the guy who created it, Jim Mortleman, a more recent video posted just a few days ago popped up, entitled Nerdpunna - Smells Like Tweet Spirit. This was the same style video, using Twitter usernames to create an absolutely hilarious version of the famous Nirvana song. It was so well done (perhaps because Kurt Cobain's lyrics are so unintelligible) that I couldn't believe it had only around 2,000 views. So I tweeted it, joking that people should check it out before it got taken down.
A bunch of people started retweeting and linking to it, with many of them commenting on how great the video was or how funny it was. Even people who aren't Nirvana fans were talking about it. A few examples:
And there were many more like that. In short: the damn thing is really funny and super well done. After realizing that his video was suddenly getting an influx of traffic, the creator of it, Jim Mortleman (who says that the videos are actually a group project in finding the profiles, which he then puts together in the video) tweeted me that he was pretty sure he was safe because he'd been alerted that UMG was "monetizing" his video -- which is one of the options in YouTube for copyright holders if they want to make money on someone using their work, rather than taking it down.
From his YouTube screen, it actually showed that Universal Music had blocked the video in one country while monetizing it elsewhere:
However, just a few hours later, as the video started getting more and more attention, views and tweets... apparently Universal changed its mind -- and if you now visit the page, this is what you see:
Mortleman says that within YouTube it's now officially blocked in all countries. This is a ContentID match, rather than a direct takedown, though the company clearly made the decision to switch it from monetizing it to taking it down -- so someone made a decision.
And it's a hellishly stupid decision. The video was fantastic and didn't take anything away from the song. It certainly wasn't a replacement for the song and, if anything, was likely to draw a lot more interest to the song and remind people of its existence. I'm not a huge fan of the song, but have been humming it to myself all afternoon because of that video (which I ended up watching a few times).
Also, this seems like a pretty clear case of fair use -- though I imagine some will disagree. The hilarious use of Twitter user names to create alternative lyrics to the song is quite transformative. No one was watching this video as a replacement for the original song, but because the video itself sort of celebrated the song with alternative lyrics made up entirely of Twitter profile names, where "Here we are now, entertain us" becomes "Huey Long Gnarl Emma Talus" (if you haven't seen the actual video... it's much funnier in the way it was presented). And now it's all gone and you can't see it.
All because of copyright law and UMG's total lack of a sense of humor.
Even if you think the fair use case is bunk and that the video is infringing and UMG is totally, 100% in the right to do what it did, I'm curious how this helps UMG in any way, shape or form. It doesn't help them get any more money, and it just makes people pissed off. How is that a smart business decision?
Update: Jim has now posted a silent version of the video so you can see what it looks like, though it's really not the same effect (though you can try to line up the audio with it to try to replicate the effect):
Hopefully you know who singer Dan Bull is by now. We've written about him many times. He's written and performed a bunch of songs about topics that we're interested in (and recently composed the awesome new theme song for the Techdirt Podcast, which you do listen to, right?). Dan has been able to build a career around giving away his music, and letting others do stuff with it. But he keeps running into ridiculous issues with YouTube's ContentID system. There was the time his video got silenced after another singer used the same sample he did, and then claimed the original work as his own. Or the time he got his video taken down because another rapper, Lord Finesse, was pissed off that Bull was criticizing Finesse's lawsuit against yet another rapper, Mac Miller. While YouTube has been a key place where Bull has built his audience, his run-ins with bogus claims and other problems even led him to write an entire diss track about ContentID.
And, wouldn't you know it, he's having yet more problems with it. As we've discussed, in the last few years, there's been a rise in a new breed of trolls, known as ContentID trolls, who claim to hold the copyright in music that they don't have copyright in, and then use ContentID to "monetize" other people using that work for themselves. There are a number of companies and middlemen that help them do this, including one called Horus Music, which has become the perfect tool for ContentID trolls. The trolls take someone else's work, sign up with Horus, upload that other person's music, claim it as their own, and then start making claims on other people's videos. Free money.
That's what just happened to Dan Bull -- who actively encourages people to use and share his own music (over which he claims no copyright restrictions). A fan of Dan's reached out to him, after a video he had made received a copyright claim, supposedly covering a song that the fan had used from Dan Bull. Bull reached out to Horus Music, telling them that its user, "DrewMCGoo72" was claiming copyright on other people's music, and asked the company to investigate the situation, and to explain "how this happened, and what exact steps will be taken to prevent such a thing from occurring again."
The company issued a weak apology, saying that the DrewMCGoo72 account had already been suspended but "this must have been missed." And then it told Dan (who encourages people to share his music): "It is a real shame that people feel that it is acceptable to steal someones music!" Except this isn't about "stealing music." This is about filing bogus copyright claims and claiming revenue from, or harming, individuals who used music that they knew to be free of copyright restrictions. Dan responded to Horus, noting that he wasn't satisfied with the company's response:
Horus Music's system has been exploited with the following results:
A) An anonymous stranger has walked away with revenue from fraudulently claiming my music as their own, facilitated by Horus Music
B) A child has received a copyright claim through Content ID from Horus Music and as a result has removed his 100% legitimate video out of fear of the consequences
C) I look like a hypocrite and a dick for telling kids they can use my music, and they then receive a copyright claim on their videos for using the very same music
You say you can only apologise - is an apology really all you are going to do?
Horus' only response was that since the kid took down his original video, the company can't do anything to release the claim "but I assume we aren't claiming it any longer."
It seems pretty clear that this is not the only time this has happened, since you can find other examples of Horus being used in this manner. This seems to raise a pretty serious question about how those companies are allowed to continue using the ContentID platform. After all, ContentID has a three strikes program for people who receive copyright violation claims. Why doesn't it have a similar three strikes program for those who abuse ContentID to claim copyright over projects they have no right to?
Either way, we'll leave you with Dan's song about ContentID, as it seems only fitting:
It's amazing the kind of trouble that Carl Malamud ends up in thanks to people not understanding copyright law. The latest is that he was alerted to the fact that YouTube had taken down a video that he had uploaded, due to a copyright claim from WGBH, a public television station in Boston. The video had nothing to do with WGBH at all. It's called "Energy -- The American Experience" and was created by the US Dept. of Energy in 1974 and is quite clearly in the public domain as a government creation (and in case you're doubting it, the federal government itself lists the video as "cleared for TV").
WGBH, on the other hand, has nothing whatsoever to do with that video. It appears that some clueless individual at WGBH went hunting for any videos having to do with the PBS show WGBH produces, called American Experience, and just assumed, based on the title, that the public domain video Malamud uploaded was infringing. Because that's the level of "investigation" that the censorious folks at WGBH apparently do when looking to issue takedown notices.
Malamud reached out to WGBH and apparently the folks there were most unhelpful. The station's general counsel refused to apologize and simply told Carl that since it was "unusual" for "American Experience" to be in the title, it was okay for them to issue a bogus DMCA notice. Another lawyer, Eric Brass, told Malamud that they wouldn't be able to do anything about it until next week.
Thankfully, someone at YouTube found out about all of this and restored the video so you can watch it:
While some may argue this is no big deal because by making noise about this, Malamud was able to get the video reinstated, that's ridiculous. WGBH is a public television station that claims in its mission statement that its "commitments" include:
Foster an informed and active citizenry
Make knowledge and the creative life of the arts, sciences, and humanities available to the widest possible public
Improve, for all people, access to public media
I'm curious how issuing bogus copyright takedowns on public domain material matches with any of those "commitments." Hell, why is such a public television station worried about so-called "copyright infringement" in the first place?
And, as Malamud notes, this little "accident" wasted the time of a bunch of people, and put his own YouTube channel at risk, since it initially counted as a "strike" against him. WGBH owes Malamud not just an apology, but an explanation for why this happened and what the station will do to prevent it from happening again.
YouTube star (among other things) Hank Green recently wrote an interesting post slamming Facebook for the way it treats video makers like himself. He has a few specific complaints, including that Facebook creates major incentives for people to upload videos directly to Facebook rather than just linking/embedding from other platforms (basically Facebook will bury your non-native videos) and that Facebook plays some questionable tricks in determining what counts as a "view" allowing it to claim more video views than YouTube when the truth is probably that it has about 1/5 as many views. But Hank's biggest complaint is that because of the incentives for native views, it's quite common for people to take other people's YouTube videos and upload them to Facebook themselves -- and that Facebook is not very responsive to takedown requests:
According to a recent report from Ogilvy and Tubular Labs, of the 1000 most popular Facebook videos of Q1 2015, 725 were stolen re-uploads. Just these 725 "freebooted" videos were responsible for around 17 BILLION views last quarter. This is not insignificant, it’s the vast majority of Facebook’s high volume traffic. And no wonder, when embedding a YouTube video on your company’s Facebook page is a sure way to see it die a sudden death, we shouldn’t be surprised when they rip it off YouTube and upload it natively. Facebook’s algorithms encourage this theft.
What is Facebook doing about it?
They’ll take the video down a couple days after you let them know. Y’know, once it’s received 99.9% of the views it will ever receive.
Leaving aside whether or not you think this is a big deal, what's really interesting is the first comment (highlighted by Fred von Lohmann) which suggests Facebook is gearing up to launch its own ContentID-like system. The comment is from Matt Pakes, a Facebook product manager for its video products. He responds to each of Green's complaints, putting a pro-Facebook spin on each of them (though, those responses appear to be a little questionable) and then indicates that the company is getting ready to launch something new, a la ContentID, but made special for Facebook:
Finally, we take intellectual property rights very seriously. We have used the Audible Magic system for years to help prevent unauthorized video content on Facebook. We also provide reporting tools for content owners to report possible copyright infringement. As video continues to grow rapidly on Facebook, we’re actively exploring further solutions to help IP owners identify and manage potential infringing content, tailored for our unique platform and ecosystem. This is a significant technical challenge at our scale, but we have a team working on it and expect to have more to share later this summer.
Of course, as Hank pointed out in his original article, the reason why some content creators actually like ContentID isn't so much the fact that you can pull down copied videos, but because it's created a revenue stream for them that goes back to the original creators. It's not at all clear how Facebook could even do that:
But even if they do have a system, it won’t function as well as Content ID. Content ID works so well largely because YouTube is good at monetizing content. So, instead of taking a video down, a copyright holder can claim the video and receive revenue from it. Content ID has claimed millions of videos and is responsible for over a billion dollars in revenue so copyright holders love it. But without a good system of monetization, Facebook can only remove videos, not send big checks to the owners of stolen content. For the copyright holder, interfacing with a profitless system is just a pain in the ass with no upside.
I guess we'll wait and see what comes out of Facebook, but perhaps people are going to start getting used to Facebook's equivalent of the YouTube frowny face for blocked videos.
PS: I noted that Pakes' response seems questionable on multiple levels, but I want to call out one big fat ridiculous claim, concerning why it pushes native uploaded videos much harder than YouTube videos:
With regard to the reach of video posts, the goal of Facebook’s News Feed is to show the right content to the right people at the right time. If you’re the type of person who likes to watch videos, you should be seeing more videos in your News Feed. If you tend to skip over videos, you will likely see less of them. Over years of developing and tuning News Feed, we know that clicking on a link to play video is not a great user experience, so people tend to interact slightly less with non-native video, and the posts get less engagement. Native video posts with auto-play tend to see better engagement, more watch time and higher view counts. It’s a nuanced but important point: native videos often do better than video links, but this is because people tend to prefer watching native videos over clicking on a link and waiting for something to load.
I find this difficult to believe. First, anyone who has used the Facebook video player and the YouTube video player knows that Facebook's video player is terrible. The quality is terrible and the whole experience is annoying. For whatever reason, YouTube's video player just tends to work better than nearly every other alternative (though in some cases Vimeo is nice too). Facebook's just feels clunky. And it's a bit ridiculous to argue that "clicking on a link to play video is not a great user experience." No one seems to have a problem with it elsewhere. And I see plenty of complaints about Facebook's annoying "autoplay" on videos, which would distort this data anyway, since Facebook counts "views" after 3 seconds.
Let me start out by saying that I think online harassment and bullying are a significant problem -- though also one that is often misrepresented and distorted. I worry about the very real consequences for those who are bullied, harassed and threatened online, in that it can often lead to silencing voices that need to be heard, or even causing some to not bother to participate at all for fear of the resulting bullying. That said, way too frequently, it seems that those who are speaking out about online bullying assume that the best way to deal with this is to push for censorship as the solution. This rarely works. Too frequently we see "cyberbullying" being used as a catchall for attacking speech people simply do not like. Even here at Techdirt, people who dislike our viewpoint will frequently claim that we "bullied" someone, merely for pointing out and discussing statements or arguments that we find questionable.
There are no easy answers to the question of how we create spaces where people feel safer speaking their minds -- though I think it's an important goal to strive for. But I fear the seemingly simple idea of "silence those accused of bullying" will have incredibly negative consequences (with almost none of the expected benefits). We already see many attempts to censor speech that people dislike online, with frequent cases of abusive copyright takedown notices or bogus claims of defamation. Giving people an additional tool to silence such speech will be abused widely, creating tremendous damage.
So, imagine what a total mess it would be if we had a ContentID for online bullying. And yet, it appears that the good folks at SRI are trying to build exactly that. Now, SRI certainly has led the way with many computing advancements, but it's not clear to me how this solution could possibly do anything other than create new headaches:
But what if you didn’t need humans to identify when online abuse was happening? If a computer was smart enough to spot cyberbullying as it happened, maybe it could be halted faster, without the emotional and financial costs that come with humans doing the job. At SRI International, the Silicon Valley incubator where Apple’s Siri digital assistant was born, researchers believe they’ve developed algorithms that come close to doing just that.
“Social networks are overwhelmed with these kinds of problems, and human curators can’t manage the load,” says Norman Winarsky, president of SRI Ventures. But SRI is developing an artificial intelligence with a deep understanding of how people communicate online that he says can help.
This is certainly going to sound quite appealing to those who push for anti-cyberbullying campaigns. But, at what cost? Again, there are legitimate concerns about people who are being harassed. But one person's cyberbullying could just be another person's aggressive debate tactics. Hell, I'd argue that abusing tools like ContentID or false defamation claims is a form of "cyberbullying" as well. Thus, it's quite possible that the same would be true of this new tool, which can be used to "bully" those the algorithm decides are bullying as well.
Determining copyright infringement is already much more difficult than people imagine -- which is why ContentID makes so many errors. You have to take into account context, fair use, de minimis use, parody, etc. That's not easy for a machine. But at least there are some direct rules about what truly is "copyright infringement." With "bullying" or "harassment," there is no clear legal definition to match up to and it's often very much in the eye of the beholder. As such, any tool that is used to "deal" with cyberbullying is going to create tremendous problems, often just from misunderstandings between multiple people. And that could create a real chilling effect on speech.
Perhaps instead of focusing so much technical know-how on "detecting" and trying to "block" cyberbullying, we should be spending more time looking for ways to positively reinforce good behavior online. We've built up this belief that the only way to encourage good behavior online is to punish bad behavior. But we've got enough evidence at this point showing how rarely this actually works, that it seems like perhaps it's time for a different approach. And a "ContentID for harassment" seems unlikely to help.
YouTube and the music collection society GEMA have been at war for many years. Five years ago, I was at Berlin Music Week and it was one of the major points of discussion. YouTube was blocking all music videos, since GEMA insisted that YouTube should pay rates on par with digital sales (iTunes) rates for each play. Musicians I met with in Germany were furious at GEMA's obsessive control over their own music -- with one musician even showing me how he had an official website that GEMA was aware of, and an "unofficial" website his band showed to fans, which offered up free music (something GEMA refused to allow). The various court rulings in the case have been a mixed bag with some finding YouTube liable for user uploads, and even saying that YouTube needs to put in place a keyword filter.
German courts also haven't been too happy with YouTube's custom message for (accurately) explaining why so much music is blocked in Germany. While YouTube and GEMA have tried negotiating a deal (as collection societies in basically every other country have done), in Germany it never seems to happen.
The latest ruling, in one of the key court cases, is an appeals court decision upholding the lower court's finding that YouTube is not liable for infringing uploads by users and doesn't have to proactively search for infringing content. This is good. But the court also appears to suggest that YouTube's ContentID is not enough -- and that it supports a sort of "notice and staydown" kind of system:
“However, if a service provider is notified of a clear violation of the law, it must not only remove the content immediately, but also take precautions which ensure that no further infringements will be possible.”
While that may appear reasonable at first glance, in practice it's a mess. The only way to even try to do that is to over-aggressively block any and all uses of that particular work -- which will undoubtedly lead to overblocking. Song playing in the background? Blocked. Parody video? Blocked. Algorithm not sure? Blocked.
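To see why "staydown" filtering tends to work this way, here's a minimal sketch (not YouTube's actual system, and the frame/threshold details are invented for illustration) of the kind of fingerprint matching such a filter has to do. The filter only sees how much of an upload matches a claimed work; it has no way to see context, parody, or whether a song is merely incidental background audio:

```python
# Toy model of "notice and staydown" fingerprint matching.
# A track is modeled as a sequence of fingerprint "frames" (plain
# strings here; a real system would derive them from audio features).

def fingerprint(frames):
    """Reduce a track to the set of frames it contains."""
    return set(frames)

def staydown_blocks(upload_frames, claimed_fp, threshold=0.1):
    """Block the upload if more than `threshold` of its frames match
    the claimed work. Overlap is the ONLY signal available -- the
    filter cannot judge fair use, parody, or background incidence."""
    fp = fingerprint(upload_frames)
    overlap = len(fp & claimed_fp) / max(len(fp), 1)
    return overlap > threshold

song = [f"song_frame_{i}" for i in range(100)]
claimed = fingerprint(song)

# A straight re-upload of the song: blocked, as intended.
reupload = song

# A long event video that's mostly original content, with the song
# playing briefly in the background: it still shares frames.
workshop = [f"talk_frame_{i}" for i in range(400)] + song[:60]

print(staydown_blocks(reupload, claimed))   # True: blocked, correctly
print(staydown_blocks(workshop, claimed))   # True: blocked anyway
```

Lowering the threshold to avoid missing re-uploads makes the background-music case worse, and raising it lets real copies through; there's no setting where overlap alone distinguishes infringement from legitimate use.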
A more detailed ruling is expected in a few weeks, but this seems like a mixed bag.
As you may have heard, DARPA, the wonderful government agency folks who helped bring us the precursors to the internet and self-driving cars, held a giant robotics competition this weekend, known as the DARPA Robotic Challenge, or DRC. It was full of amazing robots -- though everyone seems focused on the ones that fell over, despite the amazing advancements in robotics that were on display.
One bit of "robotics," whose best work is not on display, is the robotic nature of YouTube's ContentID copyright censorship. If you go to check out the six-hour YouTube video of the DRC Finals Workshop, you'll get to witness everything, but not hear a damn thing. Because, apparently, a copyright-covered song was playing somewhere in the background, YouTube muted the whole damn thing:
So, yup, rather than learning about the latest advancements from our soon to be robotic overlords, we'll just silence everything so someone's copyright isn't infringed because it was playing quietly in the background at a daylong event.