
Stop Expecting Tech Companies To Provide ‘Consequences’ For Criminal Behavior; That’s Not Their Job

from the stop-blaming-the-tools dept

Whose job is it to provide consequences when someone breaks the law?

It seems like this issue shouldn’t be that complicated. We expect law enforcement to deal with it when someone breaks the law. Not private individuals or organizations. Because that’s vigilantism.

Yet, on the internet, over and over again, we keep seeing people set the expectation that tech companies need to provide the consequences, even when those who actually violate the law already face legal consequences.

None of this is to say that tech companies shouldn’t be focused on trying to minimize the misuse of their products. They have trust & safety teams for a reason. They know that if they don’t, they will face all sorts of reasonable backlash: advertisers and users leaving, negative media coverage, and more. But demanding that they face legal consequences, while ignoring the legal consequences facing the actual users who violated the law… is weird.

For years, one of the cases that we kept hearing about as an example of why Section 230 was bad and needed to be done away with was Herrick v. Grindr. In that case, a person who was stalked and harassed sued Grindr for supposedly enabling such harassment and stalking.

What’s left out of the discussion is that the guy who stalked Herrick was arrested and ended up pleading guilty to criminal contempt, identity theft, falsely reporting an incident, and stalking. He was then sentenced to over a year in prison. Indeed, it appears he was arrested a few weeks before the lawsuit was filed against Grindr.

So, someone broke the law and faced the legal consequences. Yet some people are still much more focused on blaming the tech companies for not somehow “dealing” with these situations. Hell, much of the story around the Herrick case was about how there were no other remedies that he could find, even as the person who wronged him was, for good reason, in prison.

We’re now seeing a similar sort of thing with a new case you might have heard about recently. A few weeks ago, a high school athletic director, Dazhon Darien, was arrested in Baltimore after using some AI tools to mimic the principal at Pikesville High School, Eric Eiswert. Now Darien may need to use his AI tools to conjure up a lawyer.

A Maryland high school athletic director is facing criminal charges after police say he used artificial intelligence to duplicate the voice of Pikesville High School Principal Eric Eiswert, leading the community to believe Eiswert said racist and antisemitic things about teachers and students.

“We now have conclusive evidence that the recording was not authentic,” Baltimore County Police Chief Robert McCullough told reporters during a news conference Thursday. “It’s been determined the recording was generated through the use of artificial intelligence technology.”

Dazhon Darien, 31, was arrested Thursday on charges of stalking, theft, disruption of school operations and retaliation against a witness after a monthslong investigation from the Baltimore County Police Department.

This received plenty of attention as an example of the kind of thing people are worried about regarding “deepfakes” and whatnot: where someone is accused of doing something they didn’t do by faking proof via AI tools.

However, every time this comes up, the person seems to be caught. And, in this case, they’ve been arrested and could face some pretty serious consequences including prison time and a conviction on their record.

And yet, in that very same article, NPR quotes professor Hany Farid complaining about the lack of consequences.

After following this story, Farid is left with the question: “What is going to be the consequence of this?”

[….]

Farid said there remains, generally, a lackluster response from regulators reluctant to put checks and balances on tech companies that develop these tools or to establish laws that properly punish wrongdoers and protect people.

“I don’t understand at what point we’re going to wake up as a country and say, like, why are we allowing this? Where are our regulators?”

I guess “getting arrested and facing being sentenced to prison” aren’t consequences? I mean, sure, maybe it doesn’t have the same ring to it as “big tech bad!!” but, really, how could anyone say with a straight face that there are no consequences here? How could anyone in the media print that without noting what the focus of the very story is?

It already breaks the law and is a criminal matter, and we let law enforcement handle those. If there were no consequences, and we were allowing this as a society, Darien would not have been arrested and would not be facing a trial next month.

I understand that there’s anger from some corners that this happened in the first place, but this is the nature of society. Some things break the law, and we treat them accordingly. Wishing to live in a world in which no one could ever break the law, or in which companies were somehow magically responsible for guaranteeing no one would ever misuse their products is not a good outcome. It would lead to a horrific mess of mostly useless tools, ruined by the small group of people who might misuse them.

We have a system to deal with criminals. We can use it. We shouldn’t be deputizing tech companies, which are problematic enough already, to take on Minority Report “pre-crime” style policing as well.

I understand that this is kinda Farid’s thing. Last year we highlighted him blaming Apple for CSAM online. Farid constantly wants to blame tech for the fact that some people will misuse the tech. And, I guess that gets him quoted in the media, but it’s pretty silly and disconnected from reality.

Yes, tech companies can put in place some safeguards, but people will always find some ways around them. If we’re talking about criminal behavior, the way to deal with it is through the criminal justice system. Not magically making tech companies go into excess surveillance mode to make sure no one is ever bad.



Comments on “Stop Expecting Tech Companies To Provide ‘Consequences’ For Criminal Behavior; That’s Not Their Job”

Of course we can’t expect private companies to enforce the law. But we can expect them not to abet criminals.

— Anonymous

This comment has been deemed insightful by the community.
TKnarr (profile) says:

We not only don’t usually expect private entities to enforce penalties for violating the law, we usually make it illegal to do so. The term is “vigilantism”. That ought to be the tech companies’ response to demands that they “do something”: “We’re not vigilantes. We aren’t the Pinkertons. We won’t provide a platform for what are unarguably criminal activities, but deciding whether someone has broken the law or not is the job of law enforcement and the courts.”

Anonymous Coward says:

Hany Farid is a professor at the University of California, Berkeley with a joint appointment in electrical engineering & computer sciences and the School of Information. He is also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering, Vision Science Program, and is a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.

Notably lacking from those credentials is a degree in criminal science, law enforcement, or public policy. Perhaps if the professor was more proficient in the area he was speaking of, he would have opinions that should be given more weight.

As it is, his best credential is “he answered the phone”.

Boba Fatt (profile) says:

Re: expert opinion

Paraphrasing something I heard long long ago:

Well, I have a degree in Computer Science, which makes me an expert in that and related fields, such as those beginning with C, or S, or any other letter of the alphabet.

Anonymous Coward says:

No Kidding.

That mentality is (unfortunately) at ALL levels of Society, with the “look at THAT!! YOU! must! do! … something!!” (aka Moral Panic) Then those who called out for said action get to stand back, seemingly ‘so innocent’ while the feeble-minded rush in. Thus leading to that well-known cumulative moral panic of “look what THEY! DID! I better do something!!” rinse/repeat ad nauseam. are we done yet!? >:[

Koby (profile) says:

Not Worth The Hassle

Of course, the same thing happened when people realized that Bitcoin was being used to purchase drugs. Perhaps that’s why Satoshi Nakamoto used the alias.

Anonymous Coward says:

Re:

omg! What did ppl do when they realized government-issued currency is used to purchase drugs? Why is no one talking about this?!

Anonymouse says:

I don’t think it’s as easy a question as you make it. I mean, what’s the legitimate use for creating an AI image and voice of another living person without their permission? I’m sure somebody creative will come up with something, but it seems like that’s a use for AI that’s really hard to justify.

And, we know that those tools are being used to cause harm. We know that scammers are using AI-generated voices to get money out of worried relatives. We know that teenagers are using AI to generate fake nudes of their teenage classmates and then distributing them. We don’t currently have any idea how often either of those things happens–and I think they’re both really rare right now–but at some point we have to weigh those harms against the potential/actual good those tools are doing. Even at this early stage, asking if this type of AI tool should be allowed at all seems like a legitimate conversation. It’s not unreasonable to ask a company that sells something that can only be used illegally to stop selling that thing. They may still be able to do it (e.g., the radar detector), but that doesn’t mean we can’t at least ask.

Clockwork-Muse says:

Re:

Your comment immediately falls into the same trap as most of the problems around copyright bots, namely:
– The services have no way to tell that a particular use is unauthorized –

The first problem you’re going to run into is that the sites have no way to know that the images/voice they’ve generated are not those of the person using the site.
Even if it is of someone else, it could be authorized for any number of legitimate purposes (eg, friends joking around).
And even when not authorized, some uses may still be legitimate (eg, satire, as the various videos of world leaders doing various things).

Note that in the cases of your examples (scams, nudes), the legitimate/big-tech services are trying to prevent most of those use cases, for exactly the reasons listed in the article – it’s mostly the open-source tools that are being used for things like this… which you can’t bake protections into, because they can be removed just as easily. And trying to prevent publishing of a general tool like that would run into the first amendment immediately (you might be able to get certain specific models banned in some cases, and content might be illegal for a couple of reasons in a court of law, but not the base tools).

Also, note that radar detectors are legal in some states….

Rocky says:

Re:

I mean, what’s the legitimate use for creating an AI image and voice of another living person without their permission? I’m sure somebody creative will come up with something, but it seems like that’s a use for AI that’s really hard to justify.

I don’t consider myself very creative but the first thing I thought of was satire.

We don’t currently have any idea how often either of those things happens–and I think they’re both really rare right now–but at some point we have to weigh those harms against the potential/actual good those tools are doing.

Crowbar. Gun. Computer. Knife. Car. Telephone. Camera. Pen.

Take your pick of the tools above; every one of them is used for illegal activities. Now, try to explain to people why any of the tools above should never be sold because of the possibility they are used to perpetrate a crime.

It’s not unreasonable to ask a company that sells something that can only be used illegally to stop selling that thing.

Are you actually putting forth the argument that these tools cannot be used legally at all?

Anonymous Coward says:

Re:

Would you like to amputate your arms and legs?

After all, if used incorrectly, they can kill people.

And you need those, or at least some sort of passable prosthesis, to live comfortably…

Anonymous Coward says:

Re:

that doesn’t mean we can’t at least ask

You can ask.

But at some point you either have to take “No” for an answer, or recognize that relentlessly begging for a miracle isn’t going to actually make it happen.

Anonymous Coward says:

Re: Re:

Basically this.

If law enforcement’s response to things like encryption and AI is to scream at tech companies to “nerd harder”, then they have very little grounds for complaint when the tech companies eventually get fed up with their shit and respond by asking them to “police harder”.

bhull242 (profile) says:

Re:

It’s not unreasonable to ask a company that sells something that can only be used illegally to stop selling that thing.

That you can’t think of any legal, non-immoral way to use tools like these shows a severe lack of imagination on your part. You can use them for parodies, to bring life to long-dead figures, make jokes with your friends, and those are just the ideas I came up with on the spot.

Stephen T. Stone (profile) says:

Re: Re:

You can use them for parodies, to bring life to long-dead figures, make jokes with your friends

And look how well that turned out for the guys who did that George Carlin AI special.

Anonymous Coward says:

Re: Re: Re:

What I noticed about that is how Carlin’s estate sued for copyright infringement. Infringement of what? The material written by the ‘AI’ (actually a human) was completely original.

Anonymous Coward says:

Re: Re:

That you can’t think of any legal, non-immoral way to use tools like these shows a severe lack of imagination on your part.

That or criminality.

Anonymous Coward says:

Re:

I mean, what’s the legitimate use for creating an AI image and voice of another living person without their permission?

Oh gee, I wasn’t aware that parody stopped being a thing. 🤦‍♂️

Anonymous Coward says:

Better to have not addressed the law at all

There’s a version of this story that separates the question of criminal and regulatory, as Cathy manages in the linked Grindr story (emphasis mine):

There is no question that the ex-boyfriend’s behavior was terrible, frightening, inexcusable, and, if not already illegal under New York law, deserving to be. But only to the extent that such a law would punish just the culprit (in this case the ex-boyfriend who created the fake profile).

Two quotes from the NPR article

Baltimore County Executive John Olszewski said during Thursday’s press conference that this case highlights the need “to make some adaptions to bring the law up to date with the technology that was being used.”

Farid said there remains, generally, a lackluster response from regulators reluctant to put checks and balances on tech companies that develop these tools or to establish laws that properly punish wrongdoers and protect people.

Letting “regulators” and “checks and balances on tech companies” overwhelm the question of how the legal regime handles a case with similar facts is an unfortunate decision, as is throwing up a shruggie while apparently deciding that the prosecution of Darien should settle things.

stalking, theft, disruption of school operations and retaliation against a witness

The stalking misdemeanor is the only one that would apply to a general citizen being targeted.

Farid’s an ass who is either willfully or earnestly incapable of comprehending how obviously terrible governments are at tech regulation even when they’re not (hi, FCC!) captured by the groups they’re supposed to be regulating. While I’d prefer an article that focused on the current legal environment for victims of this sort of harassment, it’s your blog, Mike, and thus your prerogative to center the piece on the regulatory aspect. And there are bones of a solid structure for that here.

Instead, the piece is peppered with comments like

This received plenty of attention as an example of the kind of thing people are worried about regarding “deepfakes” and whatnot: where someone is accused of doing something they didn’t do by faking proof via AI tools.

However, every time this comes up, the person seems to be caught.

And

I guess “getting arrested and facing being sentenced to prison” aren’t consequences? I mean, sure, maybe it doesn’t have the same ring to it as “big tech bad!!” but, really, how could anyone say with a straight face that there are no consequences here? How could anyone in the media print that without noting what the focus of the very story is?

It already breaks the law and is a criminal matter, and we let law enforcement handle those. If there were no consequences, and we were allowing this as a society, Darien would not have been arrested and would not be facing a trial next month.

I understand that there’s anger from some corners that this happened in the first place, but this is the nature of society. Some things break the law, and we treat them accordingly.

Which gloss over legitimate questions about the state of the legal regime (and takes an uncharacteristically charitable stance towards law enforcement). To borrow a phrase, maybe it doesn’t have the same ring to it as “idiot academic begs government to force private companies to police pre-crime”, but, really, how could anyone believe this stalking statute indisputably covers the breadth of AI Deepfake criminality?

(a) In this section:
(1) “stalking” means a malicious course of conduct that includes approaching or pursuing another where:
(i) the person intends to place or knows or reasonably should have known the conduct would place another in reasonable fear:
1.
A. of serious bodily injury;
B. of an assault in any degree;
C. of rape or sexual offense as defined by §§ 3-303 through 3-308 of this title or attempted rape or sexual offense in any degree;
D. of false imprisonment; or
E. of death; or
2. that a third person likely will suffer any of the acts listed in item 1 of this item; or
(ii) the person intends to cause or knows or reasonably should have known that the conduct would cause serious emotional distress to another; and
(2) “stalking” includes conduct described in item (1) of this subsection that occurs:
(i) in person;
(ii) by electronic communication, as defined in § 3-805 of this subtitle; or
(iii) through the use of a device that can pinpoint or track the location of another without the person’s knowledge or consent.
(b) The provisions of this section do not apply to conduct that is:
(1) performed to ensure compliance with a court order;
(2) performed to carry out a specific lawful commercial purpose; or
(3) authorized, required, or protected by local, State, or federal law.
(c) A person may not engage in stalking.
(d) A person who violates this section is guilty of a misdemeanor and on conviction is subject to imprisonment not exceeding 5 years or a fine not exceeding $5,000 or both.
(e) A sentence imposed under this section may be separate from and consecutive to or concurrent with a sentence for any other crime based on the acts establishing a violation of this section.

As a Maryland citizen, how confident should I feel that this sort of harassing impersonation is covered by that statute? What if I’m not a high status, respected, straight white man like the victim here and the man harassing me isn’t a black man already under investigation for other crimes? What if we’re not literally in the same county as where The Wire was set?

There are ample, credible arguments for not trying to craft laws to specifically address these behaviors (Techdirt is littered with more than enough stories to predict how governments and police departments would use those laws to suppress dissent and penalize protected speech), but this piece does not even attempt to make them. Which makes the aforementioned comments a shame.

tl;dr: Far be it from me or any reader to insist you cover my preferred angle/frame of a story… but the work you’ve done here leads me to expect you to do better than this when you decide to.

Anonymous Coward says:

Re:

While your actual concern is pretty legitimate, when you actually get around to stating it, that’s not a subject for corporate regulation. That’s something that should be addressed by addition to law (it doesn’t have to be idiotically specific, like “duh, cellphone distraction” when distraction has always covered all the things), but apparently someone should have made the credible arguments against that instead of reporting on the tech and free speech aspects (and yes, idiot professor says stupid things that affect the national conversation).

Maybe you could have just given us the ample credible arguments about crafting addenda to laws. That is likely also valuable discourse.

That Anonymous Coward (profile) says:

Let’s throw in the Athletic Director who crafted a series of voice fakes & shared them with himself & 2 other teachers, one of whom immediately forwarded it to a student to make it go viral.

Of course the person impersonated is still off the job as they try to find out if bad guy really really made the deep fakes or if it was a real recording where he insulted all sorts of students.

Shades of that poor guy Kash Hill covered in Canada where 1 crazy woman made his life hell & even with a court tossing her ass in jail people still don’t want to be around her target because in their minds there is a chance that crazy woman who was out to ruin him might have maybe told the truth & after someone rings the pedobell we have to just treat the target as evil… just in case.

Arianity says:

Whose job is it to provide consequences when someone breaks the law?

That depends on the consequence. We expect the legal system to enforce legal consequences. But we expect a platform to provide consequences like banning/moderation, and/or prompt reporting to law enforcement.

Yes, tech companies can put in place some safeguards, but people will always find some ways around them

The question is whether those safeguards are sufficient. You can very reasonably make the argument that in many cases, tech companies are far too lax about it. It does not need to be perfection.

For example:

In that case, a person who was stalked and harassed sued Grindr for supposedly enabling such harassment and stalking.

It seems like there was a very easy solution for Grindr here, such as disabling his location after he was harassed the first few times. To quote from a Wired article about the incident:

Herrick reported the fake profile to Grindr, but the impersonations only multiplied

Herrick’s civil complaint against the company states that despite contacting Grindr more than 50 times, Grindr hasn’t offered a single response beyond auto-replies saying that it’s looking into the profiles he’s reported.

Even after a judge signed an injunctive relief order Friday to force Grindr to stop the impersonating profiles, they persist

Sure seems like Grindr fucked up here, and didn’t take it seriously. No offense, but that’s absolutely ridiculous. Similarly:

Herrick contrasts Grindr’s alleged lack of direct communication or action on the spoofed accounts to the behavior of a lesser-known gay dating app, Scruff. When profiles impersonating Herrick began to appear on Scruff, he filed an abuse complaint with the company that led to the offending account being banned within 24 hours, according to Herrick’s complaint against Grindr.

But demanding that they face legal consequences, while ignoring the legal consequences facing the actual users who violated the law… is weird.

If they’re negligent in responding to things like harassment, they should face legal consequences. Their liability is a separate question from the user’s consequences. Whether Grindr is liable/negligent or not is not dependent on whether the user faces consequences for their behavior or not. It’s entirely possible that the user did go to jail/suffered consequences, and also that Grindr was negligent.

There’s no good argument for why Grindr couldn’t have acted to try to stop the abuse earlier. It may not have been perfect, yes, but that’s not an excuse not to act, either.

We expect law enforcement to deal with it when someone breaks the law. Not private individuals or organizations. Because that’s vigilantism.

We expect law enforcement to enforce legal consequences. We expect platforms to moderate their platforms (which can include a consequence of banning someone and/or reporting them to law enforcement promptly), as you yourself mentioned. That is not vigilantism. Both law enforcement and platforms have different roles here, and they’re not replacements for each other.

None of this is to say that tech companies shouldn’t be focused on trying to minimize the misuse of their products.

You seem to have picked a really bad example to make that point.

Not magically making tech companies go into excess surveillance mode to make sure no one is ever bad.

What would be excessive about addressing the Grindr example?

That Anonymous Coward (profile) says:

Re:

Don’t let any of the facts get in your way…

Of the alleged 50 times he contacted Grindr how many were just him & his friends reporting the fake accounts?

Did he ever have the police contact Grindr?
Did he instead just scream that Grindr secretly used his GPS coordinates to send random men to meet him at work for the secks??? (Yes, yes he did when his crazy ex knew where & when he worked… sometimes when you hear hoofbeats its just a horse & not a zebra.)

There are 1000 ways to whine & complain about Grindr that do not actually inform Grindr of the situation… this dude found 999 of them.

As I noted in the coverage of this, I chat with lots of porn guys and they are forever being reported as fake while creeps who stole their pics never get their accounts shutdown even as the actual real verifiable porn star is reporting it & their account is shuttered.

Tech companies lack the ability to demand records, investigate claims (in any real way) and mine got!? we have this whole legal system thingy who can.

Let us not also gloss over the history of him & the ex both making fake Grindr profiles for each other and being childish assholes in the relationship.

Some asshole used Grindr to torment his ex, rather than do anything in the legal system he complained about the guys turning up on his door step (notice how there is nothing about how he responded to those dude before they got mad and yelled & “tore down” his little you’ve been catfished sign.) Dude & his friends mass reported the account. Long time later he tries to get a restraining order but a lack of mention of the ex-bf ending up in jail.

Half of the claims were Grindr did things that made his torment worse using super secret tech to monitor him and send him unwanted hookups.
When you remove the Grindr was setting up these meetings on their own and using his GPS to direct the men to him, gee its not really anything Grindr did.

Dude & Crazy Ex play games on Grindr
Dude & Crazy Ex break up
Crazy Ex uses Grindr to annoy Dude
Dude expects Grindr to solve his domestic problems
Dude claims aliens stalked him via the app using secret hidden features
Dude has randoms show up on his doorstep & somehow these men get so angry they yell (but of course Dude never ever did anything rude to make the situation worse)

Dude spent more time crafting his lolsuit than actually trying to use the law to make his Crazy Ex stop… but Grindr is the bad guy.

Got it.

Anonymous Coward says:

Re: Re:

This comment was just a heaping helping of victim-blaming and ignoring that a judge signed an injunctive relief to get Grindr to stop the impersonating accounts but Grindr continued to do nothing.

Anonymous Coward says:

Something I've noticed…

Techdirt: “Man uses hammer to kill puppies.”

Techdirt commenters: “Lock him up!”

Techdirt: “Man uses tech to stalk other people.”

Techdirt commenters: “Lock him up!”

Techdirt: “Unknown individual gets legitimate content pulled by sending false DMCA notices.”

Techdirt commenters: “Destroy copyright!”

Guys, copyright’s a tool. It’s as open to abuse as any other tool. Instead of giving maximalists grist for their mill in calling for it to be abolished, why not vote for politicians based on their understanding of IP (you can ask them questions, you know), and subsequently get them to draft bills to punish abuses of copyright, trademark, etc.?

Rocky says:

Re:

Copyright isn’t actually a tool, it doesn’t produce anything useful or tangible when you wield it because it is just a collection of words that can do nothing on their own. Laws are only tools in the metaphorical sense, they are normally used to curb, steer or encourage behavior in a desired direction which in extension can produce something tangible or useful.

And when it comes to copyright and the criticism it receives from many people, you very conveniently ignored the fact that the same people also criticize those who wield it to stop other people from expressing themselves. Why is that?

You did manage to construct quite a strawman though, but since you ignored the above it all falls apart.

Anonymous Coward says:

Re: Re:

A hammer isn’t actually a tool, it doesn’t produce anything useful or tangible when you wield it because it is just a conglomeration of metal and plastic that can do nothing on its own. Your point?

Anonymous Coward says:

Re: Re:

Copyright isn’t actually a tool, it doesn’t produce anything useful or tangible when you wield it because it is just a collection of words that can do nothing on their own.

Oh, so a hammer isn’t actually a tool because it can’t do anything on its own? You did manage to construct quite a strawman though, but since you ignored the facts (probably because you find them inconvenient, just like maximalists do), it all falls apart.

Anonymous Coward says:

Re:

Techdirt: “Man uses tech to stalk other people.”

Techdirt commenters: “Lock him up!”

And pass a privacy law so that would-be stalkers can’t so easily buy so much information about their victims on the open market; and stop companies from collecting most of that data in the first place. But maybe you missed all of those comments.

It’s as open to abuse as any other tool.

If that were true, then stories of men killing puppies with hammers would be as common as stories of DMCA abuse.

The latter are so common that popular YouTube creators need only say that a video was removed because of copyright, and the whole audience immediately understands that the claim was an abuse. This happens every day. If the words “using a hammer” were synonymous with “killing a puppy” in most people’s minds, then maybe you would have been right.

On the other hand, how “open to abuse” a particular tool is only considers the means, ignoring motive and opportunity as significant factors. Abuse of DMCA takedowns occurs frequently because of financial motivations, and because the law gives companies the opportunity to do this with almost no consequence. Neither is true for hammers, so there is no comparable need for better laws with regard to hammers.

Anonymous Coward says:

Re: Re:

And pass a privacy law so that would-be stalkers can’t so easily buy so much information about their victims on the open market; and stop companies from collecting most of that data in the first place. But maybe you missed all of those comments.

In a lot of places (most notably the EU and the UK), such laws already exist, and in the States, Congress is focusing on individual companies (like TikTok) rather than the actual problem. But maybe you ignored those facts.

Anonymous Coward says:

Re:

We’re really going to have to go through this again with you, aren’t we, Bayside Advisory simp?

You truly are copyright law’s best and brightest.

Anonymous Coward says:

Re: Re:

…Bayside Advisory simp…

Every accusation a confession, albeit subversively. I never said that the way copyright is currently abused is a good thing, and the fact that you read that into what I actually wrote shows a low level of reading comprehension on your part as well.

Anonymous Coward says:

Re:

Not this apologist nonsense again.

Everything is grist for the maximalist mill to you. Not paying more than the asking price for content is grist for the mill. Borrowing a book or game from a friend is grist for the mill. Mocking Prenda Law is grist for the mill. Is there anything we do that doesn’t make your lip tremble and voice stammer calling us “minimalists”?

Copyright and its pearl-clutching shills like you get a bad rap because you consistently choose to ignore the bad actors among your midst, who insist on foisting anti-consumer consequences and lawsuits on everyone else because you think you’re not sufficiently enriched.

Anonymous Coward says:

Re: Re:

I’d rather get on my knees and deepthroat Prenda Law than enable pirates like you, minimalist.

The only reason why Prenda got away with what they did was because YOU created the market for copyright lawyers.

Anonymous Coward says:

Re: Re: Re:

Predictable as ever, BDAC.

You’ll do anything but admit that the way copyright law is structured inherently incentivizes, promotes, and encourages abuse of the system at the expense of the average person.

Don’t let the 150k statutory damages hit you on the way out.

Anonymous Coward says:

Re: Re:

…apologist nonsense…

Every accusation a confession, albeit subversively. I never said that the way copyright is currently abused is a good thing, and the fact that you read that into what I actually wrote shows a low level of reading comprehension on your part as well.
