...at which point Cloudflare would go bankrupt and someone else would replace them. You realize that Cloudflare isn't a government, right? Seems like a lot of people in this thread talking about free speech rights don't seem to get that part. They aren't locking people up at gunpoint, they're just refusing to speak things that they don't want to speak. They aren't taking money at gunpoint to keep themselves in business either; if you don't want to support them, then don't.
So, are you going to head down to your nearest Nazi meeting and help them hand out flyers? Are you going to stand on the street corner with a megaphone screaming their message for them? Because you seem to be saying that Cloudflare should do exactly that. There's a difference between actively censoring a message and just not helping to spread it further.
No, the reason it exists is because there is no single entity that can be trusted for all eternity with power over what ideas people may express. Every time that power has existed, it has been abused. Your suggestion empowers Nazis by giving them one more reason to defend their speech, while disempowering the rest of us by giving us one more reason not to fight back. Stopping that speech IS the correct response, it's just not a response that we can trust the government to carry out responsibly.
Very true, and I think it's also worth pointing out that piercing that illusion is not a binary condition. Someone who seriously intends to hijack a plane is probably researching those scanners and screening policies more than the average citizen, and a large terrorist organization probably wouldn't mind if the TSA still caught 90% of their attempts -- they can just send twenty people. You think this whistleblower changes anything after they've been seeing people sneak knives and hacksaws and loaded firearms and thermite and igniters through these kinds of screenings for YEARS already? By the time "everyone knows" it's useless, the few people who matter likely suspected it was sufficiently useless for quite a long time. The reason we haven't seen another successful hijacking recently is because nobody with any notable level of resources has tried. Or if they have, their plans did not reach the point of actually getting to an airport.
It doesn't matter how they FEEL about it, it WILL slow them down. Then they have to either pay for more guards (and they never like paying for things) or slowly dismantle one part of the system to keep another part running (as described in this article) -- and that may include dismantling the scanners if they decide that the opt-outs are the least important thing to be dealing with.
The other thing to keep in mind is that these "best practices" are being written and required by the same government who has proven multiple times that it is unable to keep this kind of filth off THEIR OWN networks. So either they can't obey their own best practices, or those best practices don't actually do anything to help.
I figured they wanted a jury verdict because nearly every living American despises the cable companies and would be eager to exploit this opportunity for revenge. I'm sure the judge hates Cox just as much, but he's got a bigger obligation to remain "professional". Granted, most of us despise the RIAA too, but they have less name recognition.
So...I've got some friends on Cox, who get disconnected every couple months, and they call up customer service and give some story about "Piracy? What's that? Secure Wifi? I don't know what any of that means!"...and then they get reconnected. Meanwhile, there's people who download CONSTANTLY on Verizon and have never heard a word about it. So I assume Verizon's policy is essentially "Come back when you've got a court order"?
It says here that a large part of the problem is that Cox did not obey their own policy for dealing with repeat infringers. Sounds like they might not have lost if their policy was "That's not our problem". They chose to become an enforcement agency, and they got sued for failing to do that job well enough. That's what you get when you try to help a group of crooks like the RIAA...
"the SF legislature has already amended its ban to allow city use of smartphones with biometric security features"
"Municipal agencies are once again allowed to procure devices that utilize facial recognition tech as long as they're "critically necessary" and there are no other alternatives."
Considering that there are MANY ways to lock and unlock a phone, and facial recognition is certainly not the most secure option...did they add the "critically necessary and no other alternatives" exception and then add a SECOND exception for the "smart""phone" "security" garbage?
Typical of both government and corporations these days though...ram through a half-baked measure with insufficient analysis, then ram through some further amendments once you realize how thoroughly you've screwed yourself. Take a principled stance, until it gets marginally inconvenient, and then those principles get thrown right out the window....
And the desk, being part of the police department, surely takes priority over any of those other plebs!
...compared to America, where I pay a few hundred a month, in addition to what my employer pays, for private health insurance that I've never once been able to use? I call a doctor and say I need a flu test or I need to get an infection looked at, and they tell me "next available appointment is in eight months." Utterly useless. So I pay a few thousand a year for insurance purely so I don't go bankrupt if I'm in some catastrophic accident, and then I pay a few hundred again out of pocket any time I have any actual healthcare needs because that's the only way to get something treated without waiting in line for a year...instead of seeing an actual doctor I'm ordering blood tests online and getting meds prescribed by some guy in a call center in Georgia. A six figure income still won't get you halfway decent healthcare in this country...
Unfortunately, the telecom companies have already had great success in banning that pre-emptively...
No, nobody can explain it because Techdirt has no policy governing when or why comments get hidden. The mob didn't like it, and that's the only thing that matters here.
Much like cops don't want people to find out through social media when they shoot someone, and politicians don't want people to find out through social media if they're caught taking a bribe, and college students don't want (certain) people to find out through social media if they're drunk and stupid at a party, and criminals don't want anyone posting the surveillance footage of them... And none of those are illegal either. That is not a sufficient basis to prohibit something.
"But if it was posted on social media - and this goes for anything, that removes the person who is photographed from any decision on who/what/where that picture is disseminated to. Another person viewing the picture doesn't have the facts - just what the person posting it wants to portray (good or bad)"
That would seem to cover literally any photo that isn't a selfie or a completely human-free landscape. And even selfies if there's anyone visible in the background, or if you're doing a selfie with someone else. If that's the line you use to determine what is legal, you'd be turning nearly everyone with a camera into a criminal.
There are different kinds of moderation, and plenty of sites DO use mid-stream moderation that requires access. Facebook, for example. I like the idea posted by AC below where they suggest moderation vs filtering, although those words already have different uses so they probably aren't the best choice. I'd call it something like "policy moderation" vs "user moderation".

Policy moderation is like Facebook: you set a bunch of rules about what is and is not allowed, you let users file reports of specific content, and then you have hired moderators who review that content and determine whether it is actually in violation. Some sites also use immediate policy moderation, where your post will be reviewed by a human to see if it complies before it is ever visible. Some sites use a mix, with automated filters that determine whether a comment should be held for human review. But all of those options require administrators at the company to be able to review the posted content. So either the company needs to be able to decrypt everything, or at the very least they need to insert code that takes the decrypted message from the user and passes it back to the company unencrypted. Either way they're getting unencrypted access. And obviously you can't count on any automatic filtering on the client end -- for example, if automatic filtering can flag a comment as requiring human review, the client can easily prevent that code from running on their end. You can use that to prevent things from being viewed, but not from being posted and distributed.

For "user moderation", you just count downvotes and hide anything with enough downvotes. That could be done without the company having direct access to the decrypted content. But it doesn't let you set any kind of consistent rules, and it can often get abused, especially in larger communities.
Things will get flagged because people just don't like the opinion expressed or the person expressing it...and there's not much you can do to prevent that.
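To make the "user moderation" idea above concrete, here's a minimal sketch of the mechanism: the server only tracks downvote counts against opaque comment IDs and hides anything past a threshold, so it never needs to read (or decrypt) the comment body. The class name, method names, and threshold value are all illustrative, not from any real system.

```python
# Minimal "user moderation" sketch: hide comments by downvote count alone.
# The server sees only opaque comment IDs and vote tallies -- it never
# needs access to the (possibly end-to-end encrypted) content itself.

HIDE_THRESHOLD = 5  # illustrative value; real sites would tune this


class UserModeration:
    def __init__(self, threshold=HIDE_THRESHOLD):
        self.threshold = threshold
        self.downvotes = {}  # comment_id -> number of downvotes received

    def downvote(self, comment_id):
        # Record one downvote against an opaque comment ID.
        self.downvotes[comment_id] = self.downvotes.get(comment_id, 0) + 1

    def is_hidden(self, comment_id):
        # A comment is hidden once it crosses the threshold. Note the
        # weakness described above: nothing here checks any "rules" --
        # a coordinated mob can hide anything it dislikes.
        return self.downvotes.get(comment_id, 0) >= self.threshold
```

This also illustrates why the approach is abusable: the hide decision is purely a vote tally, with no notion of whether the comment actually broke any rule.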
Telecom is already working -- fairly successfully -- to prevent that as well. Some states already have laws on the books prohibiting public broadband. Just search "municipal broadband" right here on Techdirt and you'll find plenty of stories about it.
The problem is that they AREN'T really doing their jobs, and they are not really supporting the agency's mission either. They're doing a much lazier task that just externally looks similar to doing their job. And they've probably been doing that long enough that even those at the top don't remember what their actual job was supposed to be. Actually doing their job would require balancing the enforcement actions against the privacy and civil rights concerns, and obeying the spirit of the law rather than finding loopholes. They clearly aren't bothering with that part.
All true more or less, but HIPAA is a bit more than just not publishing the information. The first part is that you shouldn't even have the information unless you absolutely need it. You're a doctor, you see your neighbor coming into your clinic, but if they aren't your patient you are not allowed to go look up their file, regardless of what you do (or don't do) with that information. The second part is that you don't share that information yourself, which ought to be obvious enough. And the third part is that you actively protect that information. Encrypt it, shred it, lock the shredded documents in a padlocked trash can, don't let random people walk around the office, don't take a photo of the office holiday party just on the risk that it might have some information visible in the background. Just because you aren't the one sharing it doesn't mean you aren't the one responsible for it being shared.

So, if Joe Blow the paparazzi is standing in the clinic's parking lot photographing everyone who comes in...there's a fairly strong argument that clinic security/staff needs to go tell him to leave or have him arrested for trespassing. It's a bit less clear if he's standing on a public street with a big zoom lens, although if they want to play it safe they may want to consider putting up a barrier or awning or something. HIPAA does require you to take reasonable precautions against incidental disclosure, not just intentional leaks.

They can't arrest him for trying to take pictures, because that's not illegal. They do have to try to prevent him from taking pictures of anything confidential anyway, as they have a duty to protect that information. If they're spending so much time trying to block him that they aren't able to actually help the person, then I could see an argument that his actions were illegal interference.
But in order to reach that point you probably need some reasonable belief that he's actually trying to photograph confidential medical information, rather than just photographing public employees at work in a public place -- there's no duty to protect against the latter.
The cops are legally allowed to stand there and watch while someone stabs you to death -- this has been upheld by the courts more than once -- but you expect a court to rule that they are required to intervene when the perp is one of their own? Never gonna happen...
"The tricky part, of course, is defining what is "harm." There can be negative actions that do not bring about harm. There can be positive actions that do bring about harm. Harm can include mental harm, such as fear or loathing (and not just in Las Vegas). And actions that could be harmful in the presence of one person could be perfectly benign in the presence of another. Not to mention that I might think someone else is being harmed even though they do not feel that way. Where lies the boundary between actual harm, and merely disagreement?"

That is indeed the difficulty...and I would add one other consideration: in addition to people being harmed even though they don't feel that way, there are plenty of people who will argue that they are being harmed even when they are not. Although...I suppose it could also be argued that the mere fact that you are considering taking some action in response implies that you are being harmed in some way. Is being annoyed "harm"? Certainly not physically, but mentally? How do we draw that line?

I think a better question would be: "Is it harming me more than it is helping them?" But even that is tricky, and I think it ought to be weighted so that it is closer to "Would it harm a reasonable person more than it is helping them?" If you work the night shift, you're the one outside of the average, and it is more reasonable for you to invest in earplugs than to ask all of your neighbors not to mow their lawns during the day, regardless of how unnecessary that lawn mowing might ultimately be. Unless they're running that mower for hours every day.

But then there must be a strong component of "what is typical in this society"...and unfortunately then you get into race/class/ethnicity issues, as what is normal for you may not be normal for the family next door...and what is no big deal to you might be a significant harm to them.
There is no rule which can be applied, there is no algorithm which can determine the solution...what is required is a good dose of compassion and empathy. You need to understand both sides of the issue, not only your own.