from the NSA:-Home-Version dept
Well, we finally received some surveillance reform with the passage of the USA Freedom Act, which, even with its built-in six-month waiting period, is still more surveillance reform than we've seen in the past thirty years. So, of course, the intelligence "community" is seeking to counterbalance its "losses" with gains from the private sector. Self-spying will have to replace government spying if we're expected to run a secure nation.
Social media sites such as Twitter and YouTube would be required to report videos and other content posted by suspected terrorists to federal authorities under legislation approved this past week by the Senate Intelligence Committee.

But there is a silver lining, although it makes absolutely no sense.
The measure, contained in the 2016 intelligence authorization, which still has to be voted on by the full Senate, is an effort to help intelligence and law enforcement officials detect threats from the Islamic State and other terrorist groups.
It would not require companies to monitor their sites if they do not already do so, said a committee aide…

So… to better secure the nation, companies that already do this thing would be forced to continue doing this thing, even though they've had no problem doing so voluntarily. Those who don't wish to do this won't be forced to do it. The only change then would be the "reporting" aspect, which I imagine is also already in place for most of those voluntarily removing terrorist-related content.
The Senate Intelligence Committee is wasting tax dollars on redundancy, and not the good kind of redundancy that keeps government entities from permanently destroying public records. This is the bad kind of redundancy that is "fighting terrorism" by telling companies to do the thing they already do, unless they don't, in which case, never mind.
The government isn't out of stupid, though.
Although officials are generally pleased to see such accounts taken down, they also worry that threats might go unnoticed.

On one hand, the government complains that leaving the content up could result in "radicalization" of the few citizens who haven't already been swept up by the FBI's Radicalization Program. On the other, it complains that taking the content down makes it harder to keep an eye on those radicalizing potential terrorists. Its solution is to act like the Internet's Recycle Bin: toss your terrorist posts here so we can browse them before deletion.
“In our discussions with parts of the executive branch, they said there have been cases where there have been posts of one sort or another taken down” that might have been useful to know about, the aide said.
Service providers and tech companies are calling it a violation of users' privacy and say that the additional monitoring, along with the extra step it adds to the takedown process, will be technically difficult. National security experts, however, aren't nearly as concerned about privacy violations or technical hurdles. National security is the priority. Everything else is just extraneous noise.
“In a core set of cases, when companies are made aware [of terrorist content], there is real value to security, and potentially even to the companies’ reputation,” said Michael Leiter, a former director of the National Counterterrorism Center, now an executive vice president with Leidos, a national security contractor. “Rules like this always implicate complex First Amendment and corporate interests. But ultimately this is a higher-tech version of ‘See something, say something.’ And in that sense, I believe that there is value.”

The technical problems are skirted completely and the tiny nod towards citizens' privacy is swallowed up by "see something, say something" and "value." Intelligence at any cost -- especially if the majority of the cost is absorbed by civil liberties and the private sector.
This casual dismissal of concerns is unsightly. Here's a Senate Intelligence Committee aide also lowballing the cost to people's rights and tech companies' bottom lines:
The committee aide said the measure presents “a pretty low burden” to companies, who would have to report only activity that has been reported to them. “We have heard from federal law enforcement that it would be useful to have this kind of information,” he said.

Basically, it's the same non-argument Michael Leiter makes: "value" and "use" to government agencies is really the only thing that matters. These other concerns aren't even worthy of a thoughtful response.
It's highly discouraging to see that the same mentality prevails despite nearly two years of damaging (to the intelligence community's public reputation, not its actual capabilities) leaks. These reps of the intel world can't even be bothered to sincerely address the public's concerns. All they can think about is how "useful" this would be to them.
And so, they've put together a half-assed law (in response to a Facebook-"enabled" terrorist attack) that can't even muster the courage of its very minimal convictions. It's almost as though the intelligence community said, "This would be kind of nice to have. Why don't you guys see if you can get that for us?" If the community truly felt this information was "valuable" and "useful," the proposed law would demand that all companies comply, rather than limiting it to those who voluntarily police their platforms. But it doesn't. It just asks some companies to do what they already do, and to add the government to their "reported posts" mailing list. It's nothing more than an attempt to create informal government informants, with the added bonus of turning voluntary actions into mandatory requirements.