"We know it when we see it" is what comes after "It's impossible to define".
That's a poor excuse for a massive increase in surveillance.
If they operate in America, they have to follow the laws there.
"Is it really a small group of users? Reference(s)?"

There are six references in the link at the very beginning of the article.
"Oversight mostly by parents and teachers, not the law."

Yes, Masnick is also arguing against doing this through law.
"I am interpreting what you have said from this and your prior posts as, “The media portrays social media as a big problem, but mountains of evidence show that it’s not, indeed, a problem.”"

Your interpretations aren't really on Masnick when he has stated in multiple articles on this topic that the issue is overblown, and that evidence (the evidence cited in the link I mentioned) shows that social media, overall, are a net benefit for young people. He has never said that it's not a problem for anyone.
"You seem to refute the notion that social media can be harmful"

He doesn't.
I'm sure the incumbent ISPs will update their infrastructure any day now...
Given the regulars in the comments section, I didn't expect to find myself on all three top 10 lists. A surprise, to be sure, but a welcome one.
"It’s like you literally want chaos, anarchy, abuse and predation online."

And it's like you literally want governments to destroy privacy and security on the internet.
"I think a problem is that when a Russian robocaller tries to persuade the recipient to vote for Trump[...]"

And your source for this actually happening is...?
"you have to ask yourself why the sudden explosion in numbers"

Generally, when society looks like it might be more accepting of something, more people are willing to demonstrate how they actually feel, though I know that accepting minorities is a foreign concept to you.
"along with the precipitous decline in birth rates."

It's not precipitous; the US birth rate has been trending downward for decades.
*sigh* One can only hope...
"The underlying factor of all of this is the skyrocketing cost of making so-so content these days."

That phrasing makes it sound like a natural evolution, which isn't really the case. The reason content costs have exploded over just the past five years is the increased reliance on VFX with no control and no supervision: "We'll fix it in post". Tightly controlled use of VFX reduces costs significantly. That's how a movie like Godzilla Minus One was made for less than $15 million.
"Sure, piracy’s an option. Until your ISP catches you doing it and kills your service."

It's not the ISP's job to enforce copyright law. They make money off you; they have no incentive whatsoever to proactively do anything about it.
Having to continually cancel and re-sign up to various services to watch the stuff you're interested in is not much of a boon. Piracy still wins.
"...hallucinated"

Nobody mentally competent, ever. (Sorry, Toom)
Hey man, don't shoot The Messenger!
"You’re linking studies done by Facebook? How dumb can you be?"

Two studies by Oxford University, one by Pew Research, one by the US Surgeon General, one by the APA and one published in the Journal of Pediatrics. Where's the Facebook study?
"And studies that just survey actual teens? Like they would know whether it is harming them."

How do you think you find out how teens are doing, if not by asking them? Since you seem so knowledgeable on the subject.
"The owners of the AI system are the ones liable for copyright violation because it’s their system that is producing the copyrighted work based on copyrighted work that they have incorporated."

That's like blaming the developers of BitTorrent for the users downloading pirated software. It makes no sense.
"The owners of the AI system are the ones making the copies; the user provided a prompt, and the AI system reproduced a copyrighted work."

Even if that actually happened, the user is the one who made it happen. The owners created a tool that could.
"When a device makes illegal copies of copyrighted material that its owners have placed within it, the owners are liable."

A user using a tool in a particular way doesn't make the creators of the tool liable for the crime. By that logic, car companies would be responsible for mass killings, but they're not. The drivers are. It should be the same with AI tools.
"This is deranged, anti-police hate speech deployed in service of a popular narrative on TechDirt that law enforcement is bad and society would be better off under anarchy."

That's one hell of a strawman.
No, you're confusing him with the trolls.