I don't think the nastiness is warranted, and it does not serve to advance the point. It seems like you're upset because you'd like me to see that the goals are related. I certainly agree that they are, as I said to AC above.
It makes sense to me that the goals behind the "defund the police" slogan align with the project Tim describes. I think, however, that the slogan (and the comment above) focuses on a negative and less important objective--defunding police departments--rather than the more difficult, positive work of finding better alternatives. The positive goals, I think, also have broader appeal, and carry an attitude of hope and community development. We've got an encouraging example of this in Denver.
Hill's comparison with "traditional media" is especially thoughtless and unintentionally ironic. He, like hundreds of other opinion writers, is allowed to express uninformed and disingenuous opinions, and his publishers are allowed to print papers containing them, without fear of liability.
Like the return of "war on terrorism" rhetoric, this is a very unfortunate response that is reminiscent of the worst abuses of the Bush administration. Clearly Thompson and other lawmakers, like most of us, were shocked by last week's events and are struggling to come up with a response that seems adequate; without any clear way to solve the underlying problems, though, it seems that they've resorted to louder table-pounding. (Re-labeling criminals as terrorists may not carry much legal weight, but it does make it clear to your constituents that you think that this Really Bad Thing was actually a Really, Really Bad Thing.) Unfortunately, this posturing is not free, in the sense that it further consolidates the use of extra-legal, due-process-free weapons like no-fly lists as mainstream punishments.
But it's kind of a weird thing to focus on "the children" when the companies trafficking in location data -- as bad as they are -- are not actually tracking kids unless you, the parent, give them a phone with location sharing turned on.
I guess the point here is that mobile devices should be very clear about whether location data is being shared, but this is still somewhat disingenuous. Apparently it's the user's responsibility to (A) be aware of these location data abuses, and (B) figure out how to disable location sharing? Maybe so, but this doesn't give the location-sharing industry a pass. "If they didn't want us to collect it, they would have secured it" is a bankrupt ethical position.
This is a familiar stance from some of Mike's other articles on privacy--while he is very right to stress the importance of giving users control over data sharing, there's also an annoying tendency to suggest that users who aren't proactive about privacy are at fault.
I'd apply Hanlon's Razor and say that this is the result of Musk's promise being reviewed with horror by some very shortsighted corporate policy types. "All rights reserved" appears in so many idiotic places that it's hard not to believe that many people think of "intellectual property" in security terms--if even a small component is left uncopyrighted or untrademarked, the whole system is "vulnerable". Or something.
And, in this case, there's still the question of whether it would have been done at all (Rep. Adam Schiff's decision to call public attention to the report) if it wasn't politically expedient.
Exactly right. In 2013, would any member of Congress have been willing to go up against the Obama administration to unearth a complaint on FISA warrant violations--never exactly a political hot topic--by some Ed Snowden?
Trusting in “official channels”, it seems, means trusting in political expedience. And thus reform is critically needed.
How much of this confusion is due to the vague, unfamiliar term involved? ‘API’ is jargon-y and suggests difficult-to-understand technology, while the actual idea involved is fairly simple--the key term is ‘interface’, i.e. a notion defining how two things interact; that is, a language (to borrow a perspective from Sussman & Abelson’s Structure And Interpretation Of Computer Programs). It may be a rather nuts-and-bolts language of procedure calls or structures understood by the communicating parties, or it may be an actual interpreted language. Like most languages, an interface is usually specified in English or some other natural language; even when expressed in a formal (e.g. programming) language, a language is not ‘executable’ in any meaningful sense.
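To make the "interface as language" point concrete, here's a minimal sketch in Python (the class and method names are invented for illustration, not taken from any real API). The abstract class is the "API": nothing but an agreed vocabulary of calls. It isn't executable by itself; only the independent implementations that "speak" it are.

```python
from abc import ABC, abstractmethod

# The "API" is only this agreed vocabulary: method names and signatures.
# Like a grammar for a natural language, it defines how the two parties
# may interact, but contains nothing executable on its own.
class Stack(ABC):
    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

# Any number of independent implementations can speak the same interface.
class ListStack(Stack):
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = ListStack()
s.push(1)
s.push(2)
print(s.pop())  # → 2
```

A caller written against `Stack` works unchanged with any implementation, which is exactly why the interface itself is "at most a method of operation" rather than a creative work in the usual sense.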
While documents describing languages can be copyrighted, it’s reasonably clear (in the US, at least) that languages themselves cannot be copyrighted. Indeed, if the issue were presented in those terms, most people would probably find it somewhat ludicrous to claim otherwise. At the very least, it would be clear that, since languages are at most “methods of operation” (as Mike writes), patent law would be the correct field for this issue. But we’re stuck with ‘API’, and the endless misunderstandings created by jargon.
Most people are going to read this story and emphasize the "government employees watching porn!" aspect, but I'm not sure why the content they were wasting time on is important. How is this fundamentally different than, say, spending working hours on Facebook or reddit? Similar observations go for all forms of "safe for work" censorship, which seem to mainly be an outlet for moralizing, rather than serious attempts to curb time-wasting.
Hopefully Techdirt readers (and writers) agree that what's important here isn't the juicy content, but the inefficiency and bureaucratic wagon-circling done to hide it.
It's worth pointing out that Futurama, like many, many other media that make fun of copyright maximalism, is itself all-rights-reserved, copyrighted material. The same goes for the books mentioned by Doctorow's post: despite the jokes, the authors (including Doctorow) still allowed their books to be published under the standard restrictive copyright terms of the publishing industry.
While inserting pranks in copyright statements is funny, it does nothing to fix the problem. If the author isn't making every effort to publish their work under open licenses, it's smug posturing.
... makes me wonder if you have any kids of your own. Because... seriously?
Ah, such a valuable observation! The reasoning of the article is immediately demolished by this challenge.
But in the real world, things are a lot more complicated...
Again! The richness and freshness of these insights!
I for one am very happy that parental monitoring tools exist. Without them, we very easily might not have found out that...
A bad thing might have happened, anecdotally! It was stopped by parental monitoring--we claim! Although the commenter makes no argument as to why this justifies the extreme parental surveillance detailed by Mike, clearly it is so justified--things like this anecdote might almost happen again! And Life360, et al, will, uh, stop that!
The next time I wonder whether the occasional crime justifies facial recognition, location data sharing, or worldwide dragnet signals collection, I'll recall the overwhelming argument made above--surveillance, generally speaking, is a good thing, since bad things might happen otherwise. It might not sound like much, but since it's from a parent, it must have deep wisdom that I can’t fully fathom.
Here's the question: how does society benefit from letting people run around spreading hate? Is there any benefit at all?
This is not ‘the question’. It is a deliberately one-sided rhetorical framing of the issue of censorship, and your comment amounts to a content-free endorsement of broad censorship.
In this case, the person being censored is repellent and was sharing this material for repellent reasons, so it's easy to think there is no downside to punishing him. But how does this affect people who post “terrorist material” for the historical record? And does it create an abusable precedent for persecuting anyone who posts “offensive” content? Pretending that the answer is “obviously not” is extremely myopic--consider China’s treatment of any material related to the Tiananmen Square massacre.
You bluster and frame the issue in black-and-white: It's about stopping people from “spreading hate” (Popehat's Trope One). You ignore the difficult-in-general questions of defining “hateful” content, evaluating the speaker's reasons for posting the content, etc., and deceptively pretend these problems don't exist.
People who have no interest in “spreading hate” have suffered and continue at this moment to suffer under laws purporting to protect people from “dangerous” content. You ignore this--which is abhorrent--and have the gall to ask “why shouldn't we censor?”
I'm not promoting terrorism. There's a world of difference. The slope isn't that slippery.
Ah, it's fortunate that we're dealing with such a clearly-defined, black-and-white accusation like "promoting terrorism". It's incredibly unlikely that a charge like that would ever be excessively extended or used against politically-unpopular people. /s
A second complaint from users may derive from data collection … it may affect the kind of content she encounters, which … may serve … to "radicalize" her, anger her, or otherwise disturb her.
This is the complaint about data collection? Not the use of collected data (possibly from private communications) to profile speakers, and the dissemination of that data to domestic and foreign governments? The chilling effect created by the nowhere-to-hide paradigm of mass data collection is a major threat to speech and should be far more disturbing than chance exposure to unpleasant content.
But this exaggerated emphasis on “bad speech” leads me to question the drift of “ecosystem” metaphors. When concerns about the emotional impact of speech are raised to the same level of importance as government censorship, the “ecosystem” language makes it far too easy to argue for the suppression of unpleasant speech—after all, if speech is an ecosystem, shouldn’t “harmful” and “viral” elements be excluded from our habitat?
While I agree that the binary government ⇔ citizen model is too simple, we should be wary of biological metaphors that (among other things) suggest it’s reasonable to suppress upsetting speech. Our traditional, simplistic model nevertheless includes a commitment to the belief that, while we should all enjoy free speech, free speech is not always enjoyable, and that intellectual maturity is essential to living in a free society. Any “ecosystem” model that lacks such a commitment is, IMHO, doomed to be abused by the powerful and hypersensitive.
Thank GOD this website is speaking up for the big guy.
And you, John Smith, since apparently you have the freedom to comment here, um, on the Internet.
The internet is a PURGE where normal laws don't apply, where people have no right to defend their reputation, or their copyright
(1) “Normal” law applies to the Internet—try committing fraud and see how far “I used a network connection to do it!” gets you as a defense. (2) See (1): libel laws are frequently used to remove content from the Internet. (3) Ever hear of content being taken down following a copyright claim? I know, it happens so infrequently…
Perhaps in your next comment you might try to respond to the article rather than spewing frequently-debunked talking points.
After a valiant struggle, Mason Wheeler tackles his strawman, Future of Freedom style.
The Tahrir Square protests and the subsequent Arab Spring were absolutely an “upsurge in democracy”, regardless of whether you approve of the governments that these movements elected.