Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved in making them. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Studies: Can Baby Yoda GIFs Defeat The DMCA Force? (2019)

from the copyright-on-the-dark-side dept

Summary: In the fall of 2019, Disney launched its Disney+ streaming service to instant acclaim. While it offered up access to the extensive Disney catalog (including all of its Marvel, Star Wars, and 21st Century Fox archives), the first big new hit for the service was a TV series set in the Star Wars universe called The Mandalorian, which featured a character regularly referred to as "Baby Yoda."

Baby Yoda was a clear hit online, and people quickly made animated GIFs of the character, helping spread more interest in The Mandalorian and the Disney+ service. However, soon after Vulture published a story composed entirely of Baby Yoda GIFs, it was discovered that Giphy, a company that has built a repository of GIFs, had taken all of the Baby Yoda GIFs down. This caused many to complain, blaming Disney and highlighting that such GIFs were clearly fair use.

Many people assumed that Disney was behind the takedown of the Baby Yoda GIFs. This may be a natural assumption since Disney, above and beyond almost any other company, has a decades-long reputation for aggressively enforcing its copyright. The Washington Post even wrote up an entire article scolding Disney for "not understanding fans."

That article noted that it was possible that Giphy pre-emptively decided to take down the images, but pointed out that this was, in some ways, even worse. This would mean that Disney's own reputation as an aggressive enforcer of copyrights would lead another company to take action even without an official DMCA takedown notice.

Giphy itself has always lived in something of a gray area regarding copyright, since many of its GIFs are drawn from popular culture, including TV shows and movies. While there is a strong argument that these are fair use, the company has claimed that most of its content is licensed, and has said that it does not rely on fair use.

Decisions to be made by Giphy:

  • Should the company rely on fair use to cover certain GIFs that are uploaded, or should it try to license everything?

  • Regarding uploaded GIFs, how aggressive should the company be in searching for and taking down content? Should it only do so after receiving a takedown notice, or should it proactively remove content?

  • For popular content, like Baby Yoda images, should Giphy reach out to the copyright holder (in this case, Disney) to either get permission or to work out a partnership?

Questions and policy implications to consider:

  • Popular culture content is frequently used in memes and GIFs. Is copyright law properly calibrated to allow this kind of activity?

  • Fair use is found in only a few countries (most notably, the US). How do differences in international copyright law impact the ability of a service like Giphy to exist?

  • Is Disney better off encouraging fans to spread GIFs, such as those of Baby Yoda, than exercising whatever copyright enforcement powers it has?

  • If Giphy took down the Baby Yoda images preemptively, does that indicate that fear of copyright litigation is holding back cultural sharing?

Resolution: Soon after the story went viral, Giphy issued an apology to Disney, and to Vulture, which had posted the original article full of Baby Yoda GIFs. The apology suggested that Giphy had made the decision to remove the GIFs without any takedown notice or other input from Disney.

“Last week, there was some confusion around certain content uploaded to Giphy and we temporarily removed these Gifs while we reviewed the situation,” said the image-hosting website in a statement.

“We apologise to both Disney and Vulture for any inconvenience, and we are happy to report that the Gifs are once again live on Giphy.”

Multiple articles about the incident noted that Disney declined to comment, leaving some reporters to wonder whether Disney had played a role it did not wish to discuss publicly. Either way, there are now many Baby Yoda GIFs on Giphy.

Originally posted on the Trust & Safety Foundation website.

Companies: disney, giphy


Comments on “Content Moderation Case Studies: Can Baby Yoda GIFs Defeat The DMCA Force? (2019)”

1 Comment
That Anonymous Coward (profile) says:

Part of the problem is there is no actual definition of fair use.

Lawyers for the copyright gatekeepers claim everything is a violation of their copyrights costing them trillions, but the bajillion gifs out there sort of prove that wrong.

No company has gone under because of gifs of their content.
One has to wonder if the fear is if we actually got & used our rights they might not have as much control as they want.

They like to claim 3 notes is the threshold for infringement in music when it comes to sampling & demand a lion's share of the profits, meaning fewer artists bother to use samples.
This cuts back on artists sharing their vision of things & leads to the awful rulings that the "feel" of music can belong to a dead artist's 14th cousin 3 times removed.

Gifs are made by fans, who want to share what they are enjoying with their friends. Stomping them out means you are pissing on fans & making it harder for new people to discover your content & become fans.

UMG targeted me for posting a short video that put part of the video for ‘Party Rock’ over the music for ‘Uptown Girl’ showing that they shared a beat structure. It wasn’t the entire song or video, it was maybe a minute in length.
It existing didn’t let people DL the whole song or video, but UMG freaked out.
That whole thing of nyms having to give real names to fight in court meant I accepted the takedown.
It wasn’t worth fighting back b/c courts favor copyright holders & seem to forget the public has a stake in this too.
Even winning would have been a Pyrrhic victory because there are no damages when a copyright holder makes an oops & they never have to pay my court costs for ignoring fair use.
UMG could just keep sending takedowns until the end of the world, & never face anything… where if I get enough takedowns I lose my account.

The system is out of whack & until the public's rights are spelled out & backed by the same sort of possible damages it won't change.
