"Short of pulling out the battery (notably not an option in some phones), there seems to be little anyone can do to prevent the device from being tracked and/or used as a listening device."
It's not that hopeless. As pointed out in some of the previous comments, a Faraday cage or bag is sufficient to prevent remote activation of your cell phone. These are now being made commercially and will probably become more common. If you don't care about style, you can just use a mylar bag. There are two caveats to keep in mind: 1) Not just any bag made from metallized film will do; I have tested anti-static bags that don't work. 2) Make sure it is fully closed and stays that way in your pocket or purse.
Your bag is easily tested: just call your phone while it's in the bag. The test is better if it is done in a place that shows the maximum bars for service. For foolproof testing, stand next to a cell tower for your carrier and do the same thing.
This avoids having to worry about whether the radio circuitry is really turned off, finding a phone with a removable battery, secret secondary batteries, or hidden RFID chips.
If some phone manufacturers are being coy about denying the ability to remotely activate a turned-off phone, it might be because they have allowed the phone to be configured to listen while "off". It is conceivable to me (though I'm not convinced) that manufacturers, along with carriers, might allow a phone to be set into a pseudo-off mode in response to a wiretap order, in conformance with CALEA. Regardless, this can still be defeated with a Faraday bag.
I don't really see this as a reason to stop reading Slashdot. Remember that GCHQ was targeting a subset of Belgacom IT staff, not all Slashdot readers. The Slashdot site itself was not compromised or even touched. If they targeted you, it would be through whatever sites you were currently using. Your best defense is to maximize security on your own computer or smartphone; simply avoiding Slashdot will not make any difference.
There is some hopeful information in the Spiegel article:
"The injection attempts are known internally as "shots," and they have apparently been relatively successful, especially the LinkedIn version. "For LinkedIn the success rate per shot is looking to be greater than 50 percent," states a 2012 document."
Reading between the lines: this shows that they had less success targeting Slashdot than LinkedIn. That probably has to do with the kind of user who frequents Slashdot. Even among IT professionals, I would speculate that those who frequent Slashdot are more sophisticated about computer security. They are the kind who would ensure their work computers are updated frequently and would also update the software on their own computers or smartphones often. They are more likely to use less vulnerable browsers, or to restrict or limit the scope of scripts within the browser. A successful QI attack requires not only a vulnerability in the browser but also one in the underlying OS to make sure the computer stays compromised. Do not ignore the major point here: these attacks were not always successful.
I am guessing here because it has been a few years since I've traveled by plane, but I think there is a separation of function among TSA agents. The agent the mother talked to was not the one assigned to deal with an initial positive test. I don't think these agents are allowed to talk to each other much, as that would distract from the primary function of each and possibly distort the mother's explanations in the retelling (correct me if I'm wrong here). I do think the more important issue is that the agent doing the testing should be very well informed about the potential for false positives and should have the knowledge needed to ferret out false positives from true ones without undue delays for a passenger. TSA agents should not have to depend on passengers to explain potential problems. In fact, they should treat such volunteered explanations only as behavioral information about the passenger. If the passenger is presenting new facts to the TSA agent, that shows a failure in the training the agents received.
Terrorists might well be willing to kill or maim other people's young children, but I think they would balk at doing so to their own. History has shown a multitude of child abuse, but I do believe the Children's Crusade is largely a myth. http://en.wikipedia.org/wiki/Children%27s_Crusade
I won't defend the shooter, but the possibility of targeted violence motivated by frustration with ridiculously strict government policy should have been part of the security equation the TSA took into account. This could be interpreted as blaming the victim, but here you have one of the worst incidents (post 9/11) ever to affect an airport, and it was specifically the TSA's security policy that can be pointed to as the proximate cause.
There is a semi-rational reason for not posting a list of materials that cause false positives. A real terrorist group could do dry runs, checking the consistency of TSA's ability to sniff out bomb materials from airport to airport, or between different agents. This is only semi-rational because one can get that information elsewhere. As part of the concept of layered security, TSA still feels it must not make things easy for terrorists by posting such a list, even though that makes the process more difficult for everyone else. What is really important, though, is that TSA agents be well versed in which items can cause a false positive. Either they are not well versed, or the established procedure is to treat a false positive exactly the same as a true positive. This is where they fail, in my view.
What should happen is that, upon the first indication of potential bomb materials, the agent should ask a couple of questions to determine the likelihood of a false positive. The agent should then use observations about the travelers, including behavior, to determine whether secondary screening is warranted. In this context, I would say secondary screening should not have been necessary. I don't think there has ever been an incident where a terrorist was willing to sacrifice their own young child for the cause. The agents should just make sure the parent(s) traveling with the child had full control over all luggage, so that nothing could have been slipped in unknown to them.
One of the worst aspects of blind devotion to a security process is that the TSA can't even have a public discussion about the wisdom of its procedures, as that would, again, tip off potential terrorists. So, in response to complaints, they whip out their boilerplate "it's just policy" form response and wait till the controversy blows over.
The linked article about traumatic experiences and memory is incorrect. I haven't read anything by the psychologist whose research the article covers, so it might be that the author, and not the psychologist, got things a little wrong. Research in neurophysiology shows that stressful events are more memorable because stress hormones increase the ability of the hippocampus to lay down new memories. That doesn't mean those memories are particularly accurate, and this is where the confusion might have originated.
The six-month test in the Foothill Division of the LAPD was a blind test. The police received maps every day, but they didn't know whether a given map was generated by PredPol or by the LAPD crime analyst. Unfortunately, PredPol has obfuscated the results in their "proven accuracy" chart. It is a positive result for their software, but not as positive as the chart leads you to believe. The accuracy part didn't need the blind setup, but it will be useful for the efficacy study. They will have to wait years for such a study to yield accurate results; there is no way around that. Still, that doesn't mean the software isn't useful. It is way overpriced, though. I am thinking about writing my own implementation and selling it at only $10K a pop, one-time sale, no SaaS here.
A final point on predictive policing: a number of media articles treat this as an application of "Big Data". It is not big data. Mohler says that he needs 1,200 to 2,000 crime data points to generate prediction boxes with a reasonable amount of accuracy. That is a typical sample size for almost any statistical analysis and is nowhere near what is considered big data. The reason murder is not included is that cities don't usually have enough data points for murder (God, what an awful euphemism that is, as I write it).
I dug into this and found a useful video of a lecture by George Mohler, who is the chief scientist for PredPol. http://vimeo.com/50315082 Fair warning: this video is rather technical and assumes familiarity with the subject.
He talks about this chart at 30 minutes into the video. The graph compares the LAPD crime analyst's hot-spot generation with that generated by PredPol's algorithm, which uses a semi-parametric self-exciting point process. The x-axis represents the number (or possibly the percentage of total crimes on a particular day) of crimes that happen within all hot spots. Their repeated claim that PredPol's algorithm is twice as accurate as an analyst is really only valid when generating 20 hot spots. At higher numbers of hot spots this ratio falls, but PredPol is still better in all cases. This is data for a six-month period in the Foothill Division of the LAPD.
The video eliminates the secrecy of all this. Mohler, in fact, points out that you can take the same equations he lists, along with crime data, and write a program to generate hot spots (prediction boxes) yourself. One critical point is that the algorithm differs from traditional hot-spot generation in that it is predictive rather than just reflecting past crime activity.
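To show just how unmysterious this is, here is a minimal sketch of the self-exciting point-process idea: estimated intensity = background rate + decaying "aftershock" contributions from past crimes, with the highest-intensity 500 ft cells becoming the prediction boxes. All parameter values and the toy data are made up for illustration; the real model fits these parameters to the crime data, and none of this is PredPol's actual code.

```python
import math

def intensity(cell_x, cell_y, now, events,
              mu=0.1, theta=0.5, omega=0.2, sigma=250.0):
    """Estimated crime rate at a point (feet) at time `now` (days).

    events: list of (t, x, y) past crimes.
    mu: background rate; theta/omega: triggering weight and temporal decay;
    sigma: spatial spread of the triggering kernel (feet).
    All values here are illustrative, not fitted.
    """
    rate = mu
    for (t, x, y) in events:
        if t >= now:
            continue
        dt = now - t
        d2 = (cell_x - x) ** 2 + (cell_y - y) ** 2
        # exponential decay in time, Gaussian spread in space
        rate += (theta * omega * math.exp(-omega * dt)
                 * math.exp(-d2 / (2 * sigma ** 2))
                 / (2 * math.pi * sigma ** 2))
    return rate

def top_boxes(events, now, grid_ft=500, span_ft=5000, k=3):
    """Rank 500 ft x 500 ft cells by intensity; return the top k cells."""
    cells = []
    for i in range(span_ft // grid_ft):
        for j in range(span_ft // grid_ft):
            cx = i * grid_ft + grid_ft / 2
            cy = j * grid_ft + grid_ft / 2
            cells.append((intensity(cx, cy, now, events), (i, j)))
    cells.sort(reverse=True)
    return [cell for (_, cell) in cells[:k]]

# Toy data: a cluster of recent burglaries near cell (1, 1)
crimes = [(1.0, 700, 700), (2.0, 750, 650), (3.0, 720, 780)]
print(top_boxes(crimes, now=4.0))  # the recent cluster ranks first
```

The interesting modeling work (and what the PhDs actually get paid for) is fitting mu, theta, omega, and sigma from data rather than guessing them, but the scoring-and-ranking core really is this small.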
I am guessing the first chart's x-axis ranges from 0% to 90% predictive accuracy. I think they cut off the upper 10% because they wouldn't have the data for that part. The curve would likely be asymptotic toward 100% accuracy as the boxes increase from 100 to the number of boxes covering an entire city. Anyway, including more boxes would stretch police resources too much and contradict the whole point of hot spotting. A better description of that graph would be to say that PredPol's software is 20%-25% more accurate at predicting the locality of crime than a crime analyst using statistical methods that are not specified.
I am not following your logic. One square mile can contain roughly 100 prediction boxes (500 ft x 500 ft). At about 500 square miles, LA has a total of roughly 50,000 potential prediction boxes. If all were used, they would indeed trivially account for 100% of crime. The whole point of predictive policing, and hot-spot policing in general, is that a small portion of the total area is selected for additional periodic patrols.
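A quick sanity check of that box arithmetic (my own calculation; the comment's figures are round numbers):

```python
# 500 ft x 500 ft prediction boxes over LA's ~500 square miles
FT_PER_MILE = 5280
box_area = 500 * 500                        # ft^2 per prediction box
sq_mile = FT_PER_MILE ** 2                  # ft^2 per square mile
boxes_per_sq_mile = sq_mile / box_area      # ~111.5, i.e. "roughly 100"
total_boxes = round(boxes_per_sq_mile * 500)  # ~55,757, i.e. "roughly 50,000"
print(round(boxes_per_sq_mile), total_boxes)
```

The exact figure is closer to 111 boxes per square mile and about 55,000 citywide; rounding down to 100 per square mile gives the 50,000 figure. Either way, the conclusion stands: patrolling all of them is equivalent to patrolling the whole city.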
Re: Re: Re: In addition to the reasons given above...
I don't see why you need to use the margin of error to affect the shape of the box. The margin of error can be a separate value while the shape is held to a square. A patrol just has to cover the whole box while traveling streets that don't necessarily align with the box anyway. Including territory just outside the box doesn't invalidate the conformance of a patrol to the Koper Curve Principle. The only difference made by variability in the structures contained in the box is how the patrols take place (i.e., by vehicle, on foot, or by boat).
If you look at the partial map of Santa Cruz on their website, it includes one prediction box that is 300 feet from police headquarters. Knowing Santa Cruz, that seems conceivable, but it is an odd result.
There is an important aspect of predictive policing that is not mentioned in any of the press releases or press articles. I am guessing it isn't mentioned because law enforcement is afraid that if it were common knowledge, the effectiveness would be diminished. Unfortunately, leaving it out skews our understanding of predictive policing and leads to false ideas, like the notion that the software can predict a crime at a specific spot and a specific time. If this missing aspect, the Koper Curve Principle, were explained, it would probably lead to greater public acceptance and less skepticism.
The Koper Curve Principle is used in association with any type of hot-spot patrolling. It basically says that periodic, highly visible patrols of an area, 12-16 minutes in length, maximize the use of police resources. Crime will be reduced in that area as a result of criminals noticing what seems to be an increase in police presence. This effect will differ depending on the crime and will obviously have little effect on crimes of passion. At any rate, predictive policing is not a stakeout within a prediction box. The following is a recent study about the effectiveness of various police practices. http://www.jjay.cuny.edu/Telep_Weisburd.pdf
Why is this software package any better than a veteran cop's intuition or any other statistical analysis of crime data? The advantage over intuition is that it avoids cultural or emotional biases, adapts to changes more quickly than a human would, and can be more easily communicated to officers who are not veterans of a particular police department. Since the data proving effectiveness is not there yet, we can't know whether this secret algorithm is any better than some other mix of statistical analysis. This secrecy is the snake-oil part. If the multitude of PhDs had truly figured out a unique statistical analysis, they would have patented it, and secrecy would not be part of the package. All I can see is that they update the model daily with yesterday's statistics, the algorithm applies Bayesian inference to this new data, and the results are fed into GIS software to produce convenient maps. This is something that is not too hard to reproduce. Don't dismiss the idea that predictive policing might be effective, but do be skeptical that PredPol has the only true answer. PredPol's hard-sell approach certainly makes me more skeptical of them.
At some point an active shooter drill will run into some kid who has martial arts experience. The fake shooter will be maimed and the resulting paranoia about potential lawsuits will kick in and end this insanity all across the nation.
As for the simulated drug searches, this makes sense because the police are preparing the students for future, non-simulated, random drug searches. The child being bitten teaches all the other kids they must remain still.