Twitter Bot 'Issues' Death Threat, Police Investigate
from the am-I-my-bot's-keeper? dept
We’ve seen a partial answer to the question: “what happens if my Silk Road shopping bot buys illegal drugs?” In that case, the local police shut down the art exhibit featuring the bot and seized the purchased drugs. What’s still unanswered is who — if anyone — is liable for the bot’s actions.
These questions are surfacing again thanks to a Twitter bot that somehow managed to tweet out a bomb threat.
This week, police in the Netherlands are dealing with a robot miscreant. Amsterdam-based developer Jeffry van der Goot reports on Twitter that he was questioned by police because a Twitter bot he owned made a death threat.
As van der Goot explained in his tweets (all of which can be viewed at the above link), he was contacted by an “internet detective” who had somehow managed to come across this bot’s tweet in his investigative work. (As opposed to being contacted by a concerned individual who had spotted the tweet.)
So, van der Goot had to explain how his bot worked. The bot (which was actually created by another person but “owned” by van der Goot) reassembles chunks of his past tweets, hopefully into something approaching coherence. On this occasion, it not only managed to put together a legitimate sentence, but also one threatening enough to attract the interest of local law enforcement.
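The bot’s actual source code isn’t public, but bots in this genre (the “_ebooks” style) commonly do their reassembling with a word-level Markov chain: record which words have followed which in the owner’s past tweets, then walk that table to generate new sentences. A minimal sketch, under that assumption (the corpus tweets here are invented examples):

```python
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to the list of words that have followed it in the corpus."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, start, max_words=20, rng=random):
    """Walk the chain from a start word, stopping at a dead end or the word cap."""
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

# Toy stand-in for an archive of past tweets.
tweets = [
    "i really want a sandwich today",
    "today i want to sleep",
    "a sandwich would be nice",
]
chain = build_chain(tweets)
print(generate(chain, "i"))
```

Because each next word is drawn at random from words that genuinely followed the current one, the output is locally plausible but globally unpredictable — which is exactly how a bot can stumble into a sentence its owner never wrote and never intended.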
The explanation didn’t manage to completely convince the police of the bot’s non-nefariousness. They ordered van der Goot to shut down the account and remove the “threatening” tweet. But it was at least convincing enough that van der Goot isn’t facing charges for “issuing” a threat composed of unrelated tweets. The investigator could have easily decided that van der Goot’s explanation was nothing more than a cover story for tweets he composed and issued personally, using a bot account to disguise their origin.
The shutdown of the account was most likely for law enforcement’s peace of mind — preventing the very occasionally evil bot from cobbling together algorithmically-derived threats sometime in the future. It’s the feeling of having “done something” about an incident that seems alarming at first, but decidedly more banal and non-threatening by the end of the investigation.
The answer to the question of who is held responsible when algorithms “go bad” appears to be — in this case — the person who “owns” the bot. Van der Goot didn’t create the bot, nor did he alter its algorithm, but he was ultimately ordered to kill it off. This order was presumably issued in the vague interest of public safety — even though there’s no way van der Goot could have stacked the deck in favor of bot-crafted threats without raising considerable suspicion in the Twitter account his bot drew from.
There will be more of this in the future and the answers will continue to be unsatisfactory. Criminal activity is usually tied to intent, but with algorithms sifting through data detritus and occasionally latching onto something illegal, that lynchpin of criminal justice seems likely to be the first consideration removed. That doesn’t bode well for the bot crafters of the world, whose creations may occasionally return truly unpredictable results. Law enforcement officers seem to have problems wrapping their minds around lawlessness unmoored from the anchoring intent. In van der Goot’s case, it resulted in only the largely symbolic sacrifice of his bot. For others, it could turn out much worse.
Filed Under: autonomous computing, bots, death threats, investigation, jeffry van der goot, police, tweets
Comments on “Twitter Bot 'Issues' Death Threat, Police Investigate”
It came from the internet, so it is actually Google’s fault.
I think this is already settled
The principle of civil asset forfeiture historically derives from what to do when somebody’s ox gores a neighbor.
The ox (the property) is judged guilty rather than its owner. The authorities seize and dispose of/punish/deal with the ox.
A bot seems no different than an ox.
Re: I think this is already settled
“Okay, here’s my bot on a floppy disk. Do with it what you will.”
Re: I think this is already settled
Bots replicate a tad faster than oxen. However, I think the rest of the law carries over well — the bot is judged guilty, and the owner and/or other influencers can be found guilty of other things such as negligence or tampering should they unleash a bot in a manner that they should have reasonable expectation will result in something illegal.
Where this gets even trickier is when you move things over into the physical world — what about self-driving cars? If one harms a person, do we destroy the car? Give it to the victim?
Re: Re: I think this is already settled
Again, it’s already covered thanks to the magic of asset forfeiture laws.
As Techdirt has pointed out a few times, police already have countless legal actions against assets, rather than the assets’ owners. Like (actual case): United States v. Article Consisting of 50,000 Cardboard Boxes More or Less, Each Containing One Pair of Clacker Balls. And of course against people’s homes, cars, bank accounts etc.
The US Government sues the item of property, not the person; the owner is effectively a third-party claimant. This does away with any annoying “presumption of innocence” and other rights.
And since it’s well established that police can seize assets based on dubious suspicions or common-sense advice from a bank, there’s no need to wait until a car harms a person. Fast car? Obviously it’s meant for speeding.
Re: I think this is already settled
I think this is largely unsettled. If your ox gored a person and you were lax in its control, you were responsible for damages, just as you would be if your dog bit someone. This is far more nebulous and rife with unanswered legal questions.
Can you be threatened or damaged by a non-entity? A program is the brainchild of the developer. If that child breaks the law, is the parent responsible for its crimes? If the end user has to surrender their copy of the bot, can they just download another? The new iteration would be completely innocent of its brother’s crime. Can the user or programmer ever be said to be responsible for actions of a program that are essentially random?
So, what did it say?
“That doesn’t bode well for the bot crafters of the world”
Actually, the bot author made out fairly well here. It was the poor operator running the bot who got in trouble with law enforcement.
Alternate Title
Alternate Title: Police Raid and Kill Unarmed Robot.
In america…
Two words: True Threat
Two more words: Prior Restraint
Re: Re:
Does prior restraint apply when the defense is “I had no control over that speech”? How can the First Amendment be implicated when the speech in question is disavowed?
Bots don’t have free speech rights. People do. I don’t think you can simultaneously claim that shutting down the bot is prior restraint, AND that the user had no control over what was said.
(Ignoring that this was in the Netherlands, of course, where the First Amendment doesn’t apply. Also ignoring that he was apparently asked – not ordered – to shut down the account.)
Re: Re: Re:
“Bots don’t have free speech rights. People do.”
If corporations can have free speech rights, then why not bots? There’s not a huge amount of difference between the two, really.
Re: Re: Re: Re:
It’s actually very similar, yes. In either case, it’s not the bots or corporations that really have the rights, but the human owners and operators which are just using the bot or corporation to speak.
If a bot was programmed to randomly tweet from a list of political messages that the owner agreed with, the bot would undoubtedly be protected speech. Not because the bot itself really has any rights, but because the person operating the bot has the right to use the bot to further his speech.
Of course it's the owner's responsibility
Just like a dog attack. You didn’t create the dog – you bought it from a store. You didn’t train the dog – you paid someone else to do that. But when it rips the face off a toddler, you’re the one to pay any damages and put the dog down.
Re: Of course it's the owner's responsibility
Whether or not you trained the attack dog yourself, when it attacks, it’s doing exactly what you intended it to do. That’s a bit different than what this bot did.
Also, are you really equating bodily harm with a twitter message?
Re: Re: Of course it's the owner's responsibility
Sticks and stones may break my bones, but bot tweets are repugnant and must be stopped at all costs.
Re: Re: Of course it's the owner's responsibility
I have a sweet dog that I got at a shelter. He loves us to death.
At no point did they tell me that it was part pit bull.
In any event, he doesn’t like people with tattoos or that smoke. Since his previous owner was locked up on drug charges, I’m going to guess that he was sometimes abused by people with tattoos that smoke.
Him attacking people is NEVER what I intend for him to do. But, nevertheless, he will attack anyone he perceives as being “evil”.
And yet, if he attacks someone, I am still responsible, even though the shelter lied to me about his breeding.
Re: Re: Re: Of course it's the owner's responsibility
Breeding does not make a dog into an attack dog. Training does that. Or abuse, which can act as training.
Re: Re: Of course it's the owner's responsibility
Apparently the police did.
Re: Re: Of course it's the owner's responsibility
I see nothing wrong with the analogy. I made a similar one in a previous article about bot liability (although in mine, the dog only harmed chickens.) In this case, the bot made a threat to harm someone, so comparing it to actual harm is not out of line.
Whoa. You think that dogs never attack when their owners don’t want them to? He also didn’t say “an attack dog”, he said “a dog attack”. That’s like calling the bot here a “threat bot” instead of calling what happened a “bot threat.” Changing the word order here matters.
Re: Re: Re: Of course it's the owner's responsibility
“In this case, the bot made a threat to harm someone”
No, it did not. To make a threat requires intent. The bot had no such intent, it was just stringing random phrases together. It was certainly not a threat.
Re: Re: Re:2 Of course it's the owner's responsibility
Well, OK, the bot had no intent. But if it wasn’t clear that the bot was a bot, then people wouldn’t KNOW that and it would be reasonable for them to feel threatened.
Re: Re: Re:3 Of course it's the owner's responsibility
I suppose that it might be reasonable for someone to feel threatened — it’s hard to tell, since I can’t find the actual “threatening” tweet. But that someone felt threatened shouldn’t be (and isn’t in the US) the sole point that determines if something is a threat or not.
Re: Re: Re:4 Of course it's the owner's responsibility
You might think that, but that’s not how LEOs think today. Now, they go by “better safe than sorry.” Yeah, he got off, but he’s likely out of a job now. Be careful out there.
Re: Of course it's the owner's responsibility
I see the point you’re driving at, but it’s a very flawed analogy in this case.
The words created and tweeted by the bot are only a threat coming from someone capable of carrying out that threat. A twitter bot cannot manufacture and place a bomb according to its threat, so the words are meaningless in that context.
So, by your analogy, it’s not that the dog attacked someone, it’s that someone interpreted the way it barked as being an imminent threat despite the fact that it was secured in a place where it could not attack. It might have scared the toddler, but that’s all the harm it was capable of doing.
Re: Re: Of course it's the owner's responsibility
That’s not QUITE the case. If I mail a white powder to an enemy, it doesn’t matter that it’s not anthrax and I have no idea how to obtain anthrax. It’s still a threat, because the person on the other end doesn’t know that.
It’s like if the dog is behaving like it’s about to attack but it’s behind an invisible fence. The passerby would have every reason to be concerned because they don’t know that the dog can’t escape the yard.
So the question becomes: how obvious was it that this was a bot?
Re: Re: Re: Of course it's the owner's responsibility
“If I mail a white powder to an enemy, it doesn’t matter that it’s not anthrax and I have no idea how to obtain anthrax.”
Still a crappy analogy. You would have had to deliberately put white powder in a box, mail it knowing that white powder is suspicious, deliberately address it to a specific person, etc. This is nothing like that — it’s merely words, and randomly generated ones at that, it seems.
“So the question becomes: how obvious was it that this was a bot?”
I don’t know, since the account had been deleted and I can’t investigate it. Regardless, I’m not saying it should not have been investigated, only that these analogies are hideously bad.
So if we have an infinite number of bots we should get the complete works of William Shakespeare and 50 Shades of Gray.
Re: Re:
Infinitely easier than evolution. Start it up and let us know when it finishes…
Re: Re:
and Skynet. Never forget we are just a bot tweet away from Skynet.
That is a major reason why there is an Unclassified and a Classified network in the military. Plug your Unclassified thumbdrive into the Classified network and you could unleash a bot not only able to create bomb threats, but also to carry them out, with ICBM nukes. No need for a super intelligent A.I.
P.S. Don’t trust the silicon diode, and we should be OK.
Re: Re:
So if we have an infinite number of bots we should get the complete works of William Shakespeare and 50 Shades of Gray.
Nah. It would just calculate for 7.5 million years and then spit out an answer of 42.
Re: Re:
So if we have an infinite number of bots we should get the complete works of William Shakespeare and 50 Shades of Gray.
Might get the lyrics to 50 Cent at worst
And in other cases, say cases where something bad can really happen (as in negatively impact other people’s lives), like stock exchanges… the answer seems to be “nobody”…
Totally disagree with you!
I totally disagree with you. First, bots make threatening tweets, then they get access to nukes, the human race becomes hunted by these bots and the next thing you know, we’re sending people back in time to stop those tweets from ever happening!
That hero of a detective may have just stopped Sky.Net before it ever gained sentience!
Capital punishment
So, is deleting the bot the functional equivalent of capital punishment?
Re: Capital punishment
Yes, but with all the benefits of the Humanoid Cylons:
(1) Clones (more than one copy)
(2) Reincarnation (backups of originals that have expanded their learning databases).
Forget Skynet....
…when a hacker puts something like this into (insert any institution here)’s system and it starts sending out such threatening messages, then I’ll be worried. And, as already mentioned, the system’s owner will still be responsible.
If the police decide to harass him over this because they fear for their safety, I suspect he will be arrested for resisting arrest and accidentally fall down the stairs.
I want to know what the threat was that got the police interested.
Random Words
If random numbers can be illegal, why not random words?
Nobody wants to reprint the bot's bomb tweet? Cowards!
Has been asked a few times in these comments -> Where is the offending tweet? The link to the fusion.net story DOES NOT include a reprint of the tweet in question.
Why are we all being cowards for not reposting the tweet as part of a critical discussion of this phenomenon?
Re: Nobody wants to reprint the bot's bomb tweet? Cowards!
From the linked story:
“He is not identifying the bot and says he has deleted it, per the request of the police”
So, the tweet is no longer publicly visible and the author is not telling anyone which account was used. Unless someone happened to take a screenshot when it was up, it will be hard to get one – although if this did go to court it would presumably become public knowledge at that point.
Nobody’s being a “coward”, they’re just running with the information available. I’m sure that if/when the data becomes available it will be reported on.
google cars!
So if my Google car kills my neighbour’s kid/pet/grandma…
do I have to kill it?
How exactly do the police expect to have it killed? Only at a bureaucratic, expensive, government-approved robot-recycling facility?
Are these fees covered by the insurance?
Or by Google?
Do I get my money back from Google?
Or do I just get a new car from Google (with the new firmware)?
Do all the cars that share the same firmware as my car have to be recalled too?
Bots Ain't Folks
The case law discussion shows how outdated the precedents involving the status of bots and related apps are. And what if the bot was open source? Who could be sanctioned for malware or theft outcomes in that case?