from the tay-tay dept
I don’t know much about Taylor Swift, but I do know two things. First, she apparently has built a career out of making music about men with whom she’s had breakups, real or fictitious. Second, it sure seems like she spends nearly as much time gobbling up every type of intellectual property right she can and then using those rights to threaten everyone else. She trademarks all the things. She tosses defamation and copyright claims around to silence critics. She sues her own fans just for making Etsy fan products. Some of these attacks are on more solid legal ground than others, but there appears to be a shotgun approach to it all.
Which is why perhaps it only comes as a mild surprise that Swift once threatened to sue Microsoft. Over what, you ask? Why, over Microsoft’s racist chatbot, of course!
In the spring of 2016, Microsoft announced plans to bring a chatbot it had developed for the Chinese market to the US. The chatbot, XiaoIce, was designed to have conversations on social media with teenagers and young adults. Users developed a genuine affinity for it, and would spend a quarter of an hour a day unloading their hopes and fears to a friendly, yet non-judgmental ear.
The US version of the chatbot was to be called Tay. And that, according to Microsoft’s president, Brad Smith, is where Swift’s legal representatives got involved. “I was on vacation when I made the mistake of looking at my phone during dinner,” Smith writes in his forthcoming book, Tools and Weapons. “An email had just arrived from a Beverly Hills lawyer who introduced himself by telling me: ‘We represent Taylor Swift, on whose behalf this is directed to you.’
“He went on to state that ‘the name Tay, as I’m sure you must know, is closely associated with our client.’ No, I actually didn’t know, but the email nonetheless grabbed my attention. The lawyer went on to argue that the use of the name Tay created a false and misleading association between the popular singer and our chatbot, and that it violated federal and state laws,” Smith adds.
Note here that Swift sicced her lawyers on Microsoft before Tay evolved into its most infamous form. See, Tay was designed to learn from its interactions with humans so that it would appear and react in a more human-like way. This went exactly as anyone should have predicted, with Tay morphing into a solidly racist hate-machine that spat vitriol at nearly everyone who interacted with it.
But before that occurred, Swift had trademarked her nickname, "Tay," and then sent Microsoft a cease-and-desist notice claiming that the public would confuse its chatbot with the pop star herself. That's not how any of this works. Taylor Swift, to my knowledge, is not an AI chatbot, nor has she created one. Nothing in trademark law gives a pop singer control over what a technology company names its software.
It’s only by virtue of Microsoft’s good sense that we didn’t get to see an epic legal battle between the two.
Tay had been built to learn from the conversations it had, improving its speech by listening to what people said to it. Unfortunately, that meant that when what Smith describes as “a small group of American pranksters” began bombarding it with racist statements, Tay soon began repeating the exact same ideas at other interlocutors. “Bush did 9/11 and Hitler would have done a better job than the monkey we have now,” it tweeted. “WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT,” it added.
Within 18 hours, Microsoft disconnected the bot from the Tay Twitter account and withdrew it from the market. The event, Smith writes, provided a lesson "not just about cross-cultural norms but about the need for stronger AI safeguards."
Any chance we could make some room for safeguards against this insane ownership culture we have?