Jay-Z Claims Copyright On Audio Deepfake Of Him Reciting Hamlet

from the to-claim-copyright-or-not-to-claim-copyright dept

Andy Baio always digs up the absolute best stories. His latest involves layers upon layers of fascinating issues and legal questions. The key part, though, is that Jay-Z and his company Roc Nation were able to convince YouTube to remove two “audio deepfakes” by claiming both copyright infringement and “unlawfully using AI to impersonate our client’s voice.” Both of these are highly questionable claims. But let’s take a few steps back first.

We’ve discussed how there seems to be a bit of a moral panic around deepfakes, with the idea being that more and more advanced technology can be used to create faked video and audio that looks or sounds real — and that might be used to dupe people. So far, there’s little evidence of the technology actually being used to deceive anyone, and there’s plenty of reason to believe that society can adjust and adapt to any eventual attempts at using deepfakes to deceive.

Still, in part because of the media and politicians freaking out about the whole idea, a number of social media platforms have put in place fairly aggressive content moderation policies regarding deepfakes, so as to (hopefully) avoid the inevitable big media “exposé” about how they’re enabling nefarious activities by not pulling such faked videos down. But, as we’ve noted in some of those previous articles, the vast majority of deepfake content these days is used purely for entertainment/amusement purposes — not for nefarious reasons.

And that’s absolutely the case with the anonymous user Vocal Synthesis, who has been playing around with a variety of fun audio deepfakes — just using AI to synthesize the voice of various famous people saying things they wouldn’t normally say (or singing things they wouldn’t normally sing). The creator releases them as videos, but it’s just a static image, and even when they’re “singing” songs, it’s without any of the music — just the voice. So, here’s Bob Dylan singing Britney Spears’ “… Baby One More Time”:

And here’s Bill Clinton’s rendition of Sir Mix-A-Lot’s “Baby Got Back”:

Some other people have taken some of those audio deepfakes and put them to music, which is also fun. Here are six former Presidents singing N.W.A.’s “Fuck the Police”:
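For anyone curious about the mechanics behind clips like these, the sketch below shows roughly how a voice-cloning text-to-speech pipeline can be scripted with an off-the-shelf open-source library. To be clear, this is purely illustrative: Vocal Synthesis hasn’t disclosed what tools they use, and the model name and file paths here (Coqui TTS’s XTTS model, a folder of reference clips) are assumptions for the sake of the example, not a description of how these particular videos were made.

```python
# Purely illustrative: a minimal voice-cloning sketch using the open-source
# Coqui TTS library. The model name and file paths are assumptions; the
# actual tooling behind the Vocal Synthesis videos is undisclosed.
from TTS.api import TTS

# XTTS is a multilingual text-to-speech model that can clone a voice from a
# short reference recording of the target speaker.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="To be, or not to be, that is the question.",
    speaker_wav="reference_clips/speaker_sample.wav",  # hypothetical reference clip
    language="en",
    file_path="output/synthesized_soliloquy.wav",
)
```

Models like this typically need only a short stretch of clean reference audio, which goes some way toward explaining why public figures, with hours of interviews and isolated vocals floating around online, make such easy subjects.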

A few of the audio deepfakes use Jay-Z’s distinctive voice — and apparently Jay-Z or his lawyers got upset about this and issued takedown notices to YouTube on two of them. As I type this, those two videos (one of Jay-Z reciting the famed “To Be, Or Not To Be” soliloquy from Hamlet and another of him doing Billy Joel’s “We Didn’t Start the Fire”) are back up, with YouTube saying that the original takedown notices were “incomplete” and the videos had therefore been reinstated. But they were taken down originally, and it’s possible that more “complete” takedowns will be sent, so for the time being (as Andy Baio did) I’ll also point to the same content hosted on LBRY, a decentralized file storage system:

And here’s where things get odd. As Andy notes in his post (which is detailed and well worth reading), the takedown from Roc Nation made two separate claims: first, that the videos infringe on Jay-Z’s copyright, and second, that each video “unlawfully uses an AI to impersonate our client’s voice.” But what law is being broken here? If it were illegal to impersonate someone, a bunch of impressionists would be in jail. Andy goes through a detailed fair use analysis on the copyright question:

There’s a strong case for transformation with the Vocal Synthesis videos. None of the original work is used in any recognizable form; it’s not sampled in a traditional way, using an undisclosed set of vocal samples, stripped from their instrumentals and context, to generate an amalgam of the speaker.

And in most cases, it’s clearly designed as parody with an intent to entertain, not deceive. Making politicians rap, philosophers sing pop songs, or rappers recite Shakespeare pokes fun at those public personas in specific ways.

Vocal Synthesis is an anonymous and non-commercial project, not monetizing the channel with advertising and no clear financial benefit to the creator, and the impact on the market value of Jay-Z’s discography is non-existent.

We have previously talked about Conde Nast using a copyright claim to take down a deepfake of Kim Kardashian, which was highly questionable, but at least in that case the deepfake clearly made use of an original video from Conde Nast. Here, it’s not even clear what registered copyright Roc Nation could point to as being infringed.

Andy’s post also includes an interview with the (still anonymous) creator of these videos, and I suggest reading the whole thing, but here’s one short snippet that I found super interesting:

Mainly, I’m just making these videos for entertainment. Sometimes I just have an idea for a video that I really want to exist, and I know that if I don’t make it myself, no one else will.

On the more serious side, the other reason I made the channel was because I wanted to show that synthetic media doesn’t have to be exclusively made for malicious/evil purposes, and I think there’s currently massive amounts of untapped potential in terms of fun/entertaining uses of the technology. I think the scariness of deepfakes and synthetic media is being overblown by the media, and I’m not at all convinced that the net impact will be negative, so I hoped that my channel could be a counterexample to that narrative.

As noted, YouTube has currently put the videos back up, but I imagine we’ll see a lot more of this in the near future.

Oh, and I forgot to mention that Vocal Synthesis originally announced the takedown by creating an audio deepfake of Barack Obama and Donald Trump explaining the situation.

Companies: roc nation, youtube


Comments on “Jay-Z Claims Copyright On Audio Deepfake Of Him Reciting Hamlet”

13 Comments
This comment has been deemed insightful by the community.
Anonymous Anonymous Coward (profile) says:

Talking about impressions

It would be interesting if Vocal Synthesis has actually registered his products with the copyright office. Then, when Jay-Z and his lawyers go to sue him, they will be caught with their proverbial pants down over their DMCA claim’s assertion that they represented, or owned, the copyright on the content in question. Not that there would be any repercussions for that act, and it is likely that any embarrassment would roll off those lawyers like rain off a duck. Still, more than a few good memes might come out of it.

cpt kangarooski says:

The claim regarding impersonation should be a publicity rights claim. It’s been argued successfully before, most notably in Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988). That case involved a human impersonator, but it shouldn’t matter whether it’s done that way or with an AI. Of course, it’s a harder case when the impersonation isn’t being used to sell a product or anything.

This comment has been deemed insightful by the community.
Samuel Abram (profile) says:

Re: Re:

All of that raises the question of why Shawn Carter/Jay-Z/Roc Nation didn’t use the legal rubric of personality rights (which would be far more appropriate IMHO) rather than copyright infringement, considering that Vocal Synthesis made the audio and the "To Be Or Not To Be" soliloquy from Hamlet is in the public domain. Could it be that there is no DMCA or similar legislation for personality rights, or that YouTube has an itchy takedown trigger finger?

Anonymous Coward says:

Re: Re: Re:

Could it be that there is no DMCA or similar legislation for personality rights

Yes, of course that’s why. They’d actually have to go to court to get it taken down for non-copyright reasons. Someone could defend themselves, claiming parody and whatnot, and the trial would only bring more attention to it — whereas most DMCA takedowns just happen without any press attention, both because there are too many to report on and because who’s going to de-anonymize themselves, hire an American lawyer, etc., just to reinstate a silly video out of principle?

This comment has been deemed insightful by the community.
Bruce C. says:

Mutually exclusive...

It seems like the claims are contradictory at best: if it’s a fake, it can’t violate Jay-Z’s copyright, and if it’s a copyright violation, it can’t be a fake.

In my IANAL view, the copyright claim is probably garbage unless there’s an actual recording out there of Jay-Z reciting Hamlet. The best argument for a copyright claim would seem to be sampling of his voice, but individual words aren’t copyrightable. And obviously, Jay-Z doesn’t own the (non-existent) copyright to Hamlet.

For the other videos sampled above, maybe there’s a songwriter’s copyright to claim? But none of those involve Jay-Z.

Impersonation? Maybe. The embedded video titles don’t mention that they are fakes, so "right of publicity" laws could come into play, as the titles could be considered misleading.

Scary Devil Monastery (profile) says:

Re: Mutually exclusive...

"he best argument for a copyright claim would seem to be sampling of his voice, but individual words aren’t copyrightable. And obviously, Jay-Z doesn’t own the (non-existent) copyright to Hamlet."

There’s at least one case on the books in Germany where the band Kraftwerk managed to sue another band because a two-second stretch of industrial noise in the new band’s production was identical to one in a Kraftwerk song.

The US still has its "Blurred Lines" decision, which I don’t think has been completely overturned.

All it takes is for Jay-Z to obtain a court ruling that he owns the rights to his own voice spectrogram, and you’ll have a precedent on the books with very, VERY scary consequences.

ECA (profile) says:

That’s step 2... 1 more to go.

http://america.aljazeera.com/watch/shows/techknow/blog/2014/3/21/digitizing-actorsforfilmandtvafterdeathpresentsauniquechallenge.html

https://www.technologyreview.com/2018/10/16/139747/actors-are-digitally-preserving-themselves-to-continue-their-careers-beyond-the-grave/

https://www.digitaltrends.com/cool-tech/digital-domain-digitizes-actors-performers/

Digital watermarks have been around a long time, but it looks as if things are about to get more interesting.
They started putting them in most released videos, and now we’re going to need them for audio as well.
Then combine the two, and we’re in for some very interesting times.
There is a problem with digitally signing this data, though: data can be read and copied, watermarks included, and you can erase or copy them. Time-consuming, but possible.
Watermarks on analog media (depending on a few things) were not as easy to copy or remove. Now we have digital this, that, and the other. But is the idea that the corps create new formats so we have to keep up just to play all our movies, shows, and music? They are advancing this tech harder than we are. And much of it is DRM, needing an internet connection to make things work. DRM has failed so many times that even iTunes hates it.
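As a concrete (and purely hypothetical) illustration of that point about digital marks being easy to read and erase, here is a toy sketch of a least-significant-bit watermark on PCM audio samples. This is not any real watermarking scheme; it just demonstrates why a mark that lives entirely in the data can be read back, copied, or stripped by anyone who knows where it sits.

```python
# Toy illustration only: a least-significant-bit (LSB) "watermark" on raw PCM
# samples, showing why a purely digital mark is trivially readable and erasable.
import numpy as np

def embed_watermark(samples: np.ndarray, bits: list[int]) -> np.ndarray:
    """Overwrite the least significant bit of the first len(bits) samples."""
    marked = samples.copy()
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b
    return marked

def read_watermark(samples: np.ndarray, n_bits: int) -> list[int]:
    """Anyone who knows the scheme can read the mark straight back out."""
    return [int(s & 1) for s in samples[:n_bits]]

def erase_watermark(samples: np.ndarray, n_bits: int) -> np.ndarray:
    """...or strip it by zeroing those same bits."""
    cleaned = samples.copy()
    cleaned[:n_bits] &= ~1
    return cleaned

# Toy 16-bit PCM samples standing in for an audio file.
audio = np.array([1000, -2000, 1500, 300, -75, 42, 9999, -1234], dtype=np.int16)
mark = [1, 0, 1, 1]

marked = embed_watermark(audio, mark)
print(read_watermark(marked, len(mark)))                              # [1, 0, 1, 1]
print(read_watermark(erase_watermark(marked, len(mark)), len(mark)))  # [0, 0, 0, 0]
```

Real watermarking schemes spread the mark across the signal and try to survive re-encoding, but the tension the comment points at remains: anything embedded in distributable data can, in principle, be copied or attacked along with it.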
