by Mike Masnick
Fri, Dec 2nd 2016 4:47pm
by Tim Cushing
Tue, Nov 22nd 2016 2:42pm
Twitter Says Its API Can't Be Used For Surveillance, But What Does It Think The FBI's Going To Do With It?
from the spies-in-sports-coats-or-just-LEOs? dept
Dataminr, the company whose Twitter firehose access has become something of a cause célèbre on both sides of the privacy fence, is back in the news. After being told it couldn't sell this access to government agencies for surveillance purposes, Dataminr had to disconnect the CIA from its 500 million tweets-per-day faucet.
Twitter was pretty specific about what this buffed-up API could and could not be used for. The CIA's surveillance efforts were on the "Don't" list. This rejection of the CIA's access was linked to existing Twitter policies -- policies often enforced inconsistently or belatedly. What the CIA had access to was public tweets from public accounts -- something accessible to anyone on the web, albeit with a better front-end for managing the flow and an API roughly 100x more robust than those made available to the general public.
The FBI will soon be able to search a vast repository of public tweets in real time for hints about potential terrorist attacks and other public-safety crises.
The bureau awarded a sole-source contract to Dataminr, a company that allows customers to churn through Twitter's "firehose," which includes more than 500 million 140-character messages posted daily. Twitter's public API only gives users access to about 1 percent of tweets, according to a FedBizOpps posting.
Now, the question is not whether the FBI should have access to publicly available tweets. It always will have that access, with or without Dataminr's assistance. The question is whether Twitter believes the FBI's use of this access falls outside the sort of surveillance it disagrees with.
In the context of its Dataminr access, I'm sure the FBI would have preferred to be thought of as a law enforcement agency. Divorced from the API-access context, it has done much in recent years to place itself on the same level as the CIA. It honestly feels it should be given more foreign intelligence gathering powers -- more so than the CIA, which has traditionally handled only foreign-facing operations.
Likewise with the NSA. The NSA's bulk collection orders under Section 215 were obtained in the FBI's name, with the data going directly to the NSA and the intelligence agency "tipping" an unspecified amount of the haul back to the FBI for further examination.
What the FBI is going to engage in with this access will be a form of surveillance, albeit one with very few privacy implications. Twitter has yet to speak up about the recently-awarded contract. It may never do so. It may believe the FBI is primarily engaged in law enforcement, even though the agency rebranded in the midst of the Snowden leaks, emerging as the "national security" agency it apparently felt it always should have been.
The statement issued by Twitter suggests it's only the "surveillance" that bothers it, not so much what each government agency seeking access feels its core mission is. The policy says "government or intelligence agenc[ies]" will be forbidden from purchasing access for surveillance purposes, and the FBI certainly can't deny it's a government agency.
It also shouldn't matter which hat the FBI wears when attaching the hose. Twitter yanked Geofeedia's API access after discovering it was selling access to law enforcement agencies all over the US for the purposes of tracking First Amendment-protected activity. Its policies also list "track" and "investigate" as problematic uses of its API -- two things the FBI does often.
Given the agency's long history of engaging in surveillance of protected political activity, it's not much of a stretch to believe the FBI will use Dataminr's tools for the same ends. Then again, Dataminr or no Dataminr, the tweets it's seeking to analyze are already out there where anyone can see them. All the agency is really buying is a hose and a funnel.
by Tim Cushing
Thu, Nov 17th 2016 2:45pm
from the everything-in-its-right-place dept
So much for encryption turning phones into inscrutable blocks of plastic, metal, and glass. The Intercept is reporting that Apple is doing some of law enforcement's work for it, routing call records to users' iCloud storage.
Russian digital forensics firm Elcomsoft has found that Apple’s mobile devices automatically send a user’s call history to the company’s servers if iCloud is enabled — but the data gets uploaded in many instances without user choice or notification.
“You only need to have iCloud itself enabled” for the data to be sent, said Vladimir Katalov, CEO of Elcomsoft.
The logs surreptitiously uploaded to Apple contain a list of all calls made and received on an iOS device, complete with phone numbers, dates and times, and duration. They also include missed and bypassed calls. Elcomsoft said Apple retains the data in a user’s iCloud account for up to four months, providing a boon to law enforcement who may not be able to obtain the data either from the user’s carrier, who may retain the data for only a short period, or from the user’s device, if it’s encrypted with an unbreakable passcode.
Plain vanilla call records aren't that difficult to obtain. They've long been considered third-party records and can be obtained without a warrant. The Intercept quotes a former FBI agent as saying this is a "boon" for law enforcement because the four-month retention period is longer than most service providers'.
That doesn't seem to be correct at all. The EFF's Nate Cardozo points out that most service providers retain call logs for at least a year, with some retaining records for as long as a decade. Kim Zetter, who wrote the piece for The Intercept, believes it might be a misunderstanding. Providers may retain content (messages, etc.) for a shorter time frame than the four months of records Apple automatically uploads, but former agent Robert Osgood (quoted in The Intercept's piece) clearly states he's referring to call logs.
The concerning part of this isn't the normal call logs. Those are retained for years by carriers and can be obtained with a subpoena or a pen register/trap and trace order (for "real-time" data). There are two aspects of this automatic collection that should worry iPhone users.
First, it's not solely limited to calls placed directly through carriers.
FaceTime, which is used to make audio and video calls on iOS devices, also syncs call history to iCloud automatically, according to Elcomsoft. The company believes syncing of both regular calls and FaceTime call logs goes back to at least iOS 8.2, which Apple released in March 2015.
And beginning with Apple’s latest operating system, iOS 10, incoming missed calls that are made through third-party VoIP applications like Skype, WhatsApp, and Viber, and that use Apple CallKit to make the calls, also get logged to the cloud, Katalov said.
Trying to route around service providers to limit easily-obtainable records of your call activity is somewhat pointless on Apple devices. It all gets captured and can be obtained directly from the company. Presumably this information would still fall under the Third Party Doctrine, meaning law enforcement most likely won't have to present a warrant to collect this data from Apple.
The other concerning part of this collection is that Apple does it without informing customers that it's doing it. It does list several forms of data it syncs to users' iCloud accounts, but never states that it's collecting call records. Kate Cox of The Consumerist digs into the iCloud fine print.
Under the header “Privacy and security,” Apple writes:
Apple takes data security and the privacy of your personal information very seriously, and iCloud features are designed with your privacy in mind. All your iCloud content — like photos, documents, and contacts — is encrypted when sent over the Internet and, in most cases, when stored on our servers. If we use third-party vendors to store your information, we encrypt it and never give them the keys. And security enhancements like two-factor authentication help to ensure that the important information in your account can only be accessed by you, and only with your devices.
And the full list of features Apple mentions on the site includes backup for “important stuff like photos and videos”; Notes; iTunes and Apple Music; Mail, Calendar, Contacts, and Reminders; Safari browser history and passwords; Safari password keychain; and Find my [Device]. Nowhere is “call history data” mentioned.
Apple's explanation for this hidden syncing is "convenience": "history syncing" allows users to "return calls from any device." That's fine, but it doesn't explain why Apple doesn't list call data among the data it syncs to iCloud, or why it doesn't give users an easy way to exclude call data from this process.
Not that users of other devices should feel superior. Android and Windows phones do the same thing and give users no easy way to disable call tracking.
But it does drill another hole in the "going dark" theory. Tons of information from locked phones is being synced to cloud storage that manufacturers hold the keys to. And, in the case of Apple, content from end-to-end encrypted iMessages could be no more than a warrant away from law enforcement's possession.
by Tim Cushing
Wed, Nov 9th 2016 4:46pm
Data-Driven Policing Still Problematic; Now Being Used By Government Agencies For Revenue Generation
from the defined-by-the-data-you-generate,-rather-than-the-person-you-are dept
Data, even lots of it, can be useful. But it also leads to erroneous conclusions and questionable correlations. Ever been baffled by the content of a "targeted" ad? Just imagine the fun you'll have when "lol 'targeted' ad" is replaced with nearly incessant "interactions" with law enforcement.
The American Civil Liberties Union, citing reports that the Chicago Police Department used a computer analysis to create a “heat list” that unfairly associated innocent people with criminal behavior, has warned about the dangers of the police using big data. Even companies that make money doing this sort of work warn that it comes with civil rights risks.
“We’re heading to a world where every trash can has an identifier. Even I get shocked at the comprehensiveness of what data providers sell,” said Courtney Bowman, who leads the privacy and civil liberties practice at Palantir Technologies, a company in Palo Alto, Calif., that sells data analysis tools. He has lectured on the hazards of predictive policing and the need to prove in court that predictive models follow understandable logic and do not reinforce stereotypes.
When even the companies gathering the data are concerned about the implications, there's a problem. (One issue being: why don't they stop?) Anything that can be obtained (preferably in bulk) without a warrant will be. And it gets funneled into predictive policing software that attempts to mold disparate info into a usable whole. Lost in the shuffle are the individuals now represented by data points and algorithms. A data point located in the "wrong" neighborhood could result in surveillance backed by nothing resembling reasonable, articulable suspicion.
It's not all bad, though. There are uses for aggregate data that don't create privacy concerns or fears of ever more biased policing. As the New York Times article points out, the collected data frees up resources to deal with more serious crime by contributing to traffic management and reducing the amount of data entry needed to complete routine paperwork.
On the other hand, the desire to obtain any data available without a warrant is resulting in some very twisted uses of third-party records. In places like Chicago, the data-driven "wrong side of the tracks" can result in many innocent people being treated as inherently suspect. In Seattle, government agencies are hoovering up third-party records to maximize rent-seeking.
The county's animal services recently sent out loads of threatening letters to pet-owning residents, warning them that failing to get their pets properly licensed could lead to $250 fines. The county was going to extract money from them either way.
But how did the county know who owned pets if they weren't licensed? It turns out it got its mitts on direct mail lists from stores that track customer purchasing habits through membership cards and the like. For stores and the private retail environment, these are tools for marketing goods consumers may want or need. In the hands of government, the same lists become a lot more sinister. A woman who no longer owned a pet received one of these threatening letters and wondered what was going on.
The plan: compare these third-party mailing lists to pet registrations and send threatening letters to anyone on List A but not on List B. Sure, the county claims it won't be doing any follow-up enforcement -- like in-person visits from animal control officers with their hands out -- but the damage has already been done. People who no longer have pets are being hit with letters and plenty of unregistered pet owners will never even know the county is digging through third-party data in hopes of sniffing them out.
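The matching logic described above amounts to a simple set difference: everyone on the retailer mailing list who doesn't appear in the license registry gets a letter. A minimal sketch (the names here are hypothetical; the county's actual matching almost certainly keys on names and addresses, with all the false positives that implies):

```python
# Hypothetical sketch of the county's list-matching scheme:
# flag anyone on List A (retail mailing list) but not on List B (pet licenses).
mailing_list = {"alice@example.com", "bob@example.com", "carol@example.com"}
licensed_owners = {"bob@example.com"}

# Set difference: purchased pet supplies, but holds no license on record.
letter_targets = mailing_list - licensed_owners
print(sorted(letter_targets))  # → ['alice@example.com', 'carol@example.com']
```

Note that nothing in this comparison distinguishes a current pet owner from someone who bought a bag of dog food as a gift, or whose pet has since died — which is exactly how the woman in the story ended up with a threatening letter.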
Once such government behavior becomes viewed as acceptable -- or not troublesome enough to result in losable lawsuits or massive public backlash -- it becomes the new normal. Today, the government comes for your unregistered pets. Tomorrow, it could be your children.
How about a threatening letter from Child Protective Services noting that your grocery purchases suggest you are not feeding your kids with foods the government deems the most healthy, and if you don't change your behavior, you may have a little visit? It's not an absurd idea, given we're seeing food nannies in the school system meddling with lunches parents are providing to their kids.
The solution would appear to be to prevent retailers from gathering so much data about their customers. But it isn't. Retailers can send as much garbage mail as they like in hopes of more sales, but all they can do is hint and beg. The government, on the other hand, has plenty of enforcement options to make unsolicited direct mail campaigns much more effective in separating people from their money. Or their pets. Or their kids. Or whatever.
by Tim Cushing
Fri, Oct 21st 2016 3:21am
from the it's-not-about-accountability,-it's-about-control dept
FBI Director James Comey didn't dig into his bag of "Ferguson Effect" rhetorical devices during his comments to a law enforcement conference on Sunday, but he came close. Under that theory, the possibility of being held accountable by citizens and their recording devices has apparently been holding officers back from enforcing laws, making arrests, or otherwise earning their paychecks.
The problem now is a lack of data, Comey claims. Law enforcement has lost control of the narrative, he stated, as if a one-sided portrayal of every police use of excessive/deadly force was somehow beneficial to the nation.
Dramatic videos of deadly law enforcement encounters and the absence of reliable data about how often police use force contribute to a regrettable narrative that "biased police are killing black men at epidemic rates," FBI Director James Comey said Sunday.
That story line has formed amid a lack of comprehensive, national data about how many citizens are killed or injured at the hands of police officers.
Thanks to the DOJ and FBI's active disinterest in collecting this data (until just recently), the "narrative" is no longer law enforcement's to control. Comey at least admits the FBI -- which was charged with collecting this data but somehow believed voluntary reporting would result in a comprehensive dataset -- is partly to blame.
We do not know whether the number of black, brown or white people being shot by police is up because we have not collected data.
The problem with Comey's comments is that he apparently believes data on excessive force and killings by police officers will be ultimately exculpatory.
We need to show people what American law enforcement is really like, because if they see what we see, the chasm will close.
But the data collected by the public of its own initiative shows exactly what Comey claims it doesn't: that law enforcement officers are killing black men at "epidemic rates." Worse, Comey believes data collected and disseminated well after the fact will somehow be able to defuse immediate reactions to released video of officers killing or abusing citizens.
Videos of fatal police encounters that capture the public's attention and are shared broadly across the internet can fuel the perception that "something terrible is being done by the police," even if the data aren't there to back it up.
Given the audience, Comey probably didn't feel comfortable pitching the truth: that policing in America is every bit as bad as it's portrayed to be. Comey thinks data will give law enforcement control over the narrative, but that seems to be his only concern. The culture of American policing needs to change before the data start matching law enforcement's narrative.
Almost without fail, DOJ investigations of law enforcement agencies find two things: routine use of excessive force and biased policing. These aren't anomalies or "bad apples." This is how policing in America works.
As for the narrative, law enforcement still largely controls it. The corpse of the recently killed is barely on the way to the city morgue before law enforcement officials are dumping criminal records and officers' "feared for their safety" claims into the hands of reporters. No amount of pointing to stats is going to change the fact that far too many interactions are needlessly escalated by responding officers, or that biased police tactics are generating far too many interactions in the first place.
While it's good to know the FBI is finally going to push for better data collection on police use of force, the fact that it did nothing for nearly two decades counts against any goodwill it might hope to generate by finally doing its job. Unfortunately for those hoping this might lead to better policing, Jim Comey has made it clear it's really about controlling the narrative and pushing the American public to view law enforcement the way Comey feels they should be viewed: as good people in tough jobs who rarely, if ever, screw up. We'll just have to see what sort of spin is applied when Comey realizes the numbers aren't going to add up to his preconceptions.
by Tim Cushing
Fri, Sep 16th 2016 1:02pm
Senator John McCain Uses Cybersecurity Hearing To Try To Shame Twitter For Not Selling Data To The CIA
from the NO-ONE-CARES dept
John McCain -- fighting for the government's right to get all up in your everything -- has decided to embrace the "grumpy" part of his "grumpy old legislator" personality.
Back in July, McCain expressed his displeasure with Apple declining his invitation to show up and get yelled at/field false accusations at his hearing on encryption. He dourly noted that he was "seeking the widest variety of input," but his invited guests included Manhattan DA Cy Vance, a former Bush-era Homeland Security advisor, and former NSA deputy director Chris Inglis. Not having Apple to kick around peeved McCain, who finished off the "discussion" with subpoena threats.
Another encryption hearing hosted by McCain devolved into the senator ranting about something no one cares about but him: a tech company not immediately prostrating itself in front of an intelligence agency. Here's Marcy Wheeler's summation of McCain's "contribution" to the discussion.
His tertiary point seems to have been to attack Apple and Twitter for making efforts to protect their customers. After getting a witness to comment about Twitter’s long-term refusal to let Dataminr sell Twitter data to the CIA, he suggested perhaps the response should be to “expose” the company.
"Expose" how? This was "exposed" already, with the aftershocks of the exposure being "so what?" and "who cares?" Twitter simply enforced a pre-existing policy, pointing out to a third-party data mining company that it wasn't allowed to sell Twitter data to the government for surveillance use. This blocked the CIA from drinking from the Dataminr/Twitter firehose, which made the CIA sad and Twitter look stalwart and -- generally speaking -- didn't prevent the government from using any number of other methods to scoop up public tweets for surveillance purposes.
It also made McCain mad, and he's still smarting over it three months later. So, Wheeler has decided to help McCain out by publicizing Twitter's decision to hold a third-party social media data miner to the terms of its agreement. Two more headlines have been added to her post, both breaking news that was broken months ago and did little to appreciably nudge surveillance/outrage needles in any direction.
But it's still a big deal to McCain. He spent a little over two minutes (starting about 46:50 in the recording posted here) crafting his molehill into a mountain before cajoling NSA director Michael Rogers into answering what should have been a hypothetical question. While Admiral Rogers uncomfortably admitted he "didn't understand" why Twitter would enforce a pre-existing policy, McCain was unable to get anyone in the room to say anything on the record about "exposing" Twitter for its apparently nefarious decision to enforce the rules of Dataminr's agreement.
Wheeler has a better question:
Of course, you might ask why McCain is demanding that our tech companies make money off of surveillance of you. And why he considers Twitter such an exception.
by Tim Cushing
Wed, Aug 10th 2016 2:41pm
DOJ Finally Going To Force Law Enforcement Agencies To Hand Over Info On People Killed By Police Officers
from the better-late-than-never dept
At long last, the federal government is getting serious about tracking the use of deadly force by law enforcement officers.
For most of the last two decades, the DOJ has been collecting this information from local law enforcement agencies, but only on a voluntary basis. As a result, the federal numbers have nearly no relation to the real numbers -- which have been compiled by a handful of private actors, including The Guardian, a UK-based journalistic entity.
Last June, legislators introduced a bill (that promptly went nowhere) that would have replaced voluntary reporting with mandatory reporting. The FBI expressed its concern about the government's inability to collect accurate information on citizens killed by police officers, offering on multiple occasions to replace its voluntary system with a better voluntary system.
The Guardian is reporting that the voluntary system is finally being replaced with something that will create actual accountability.
Police departments will be required to give the US justice department full details of deadly incidents involving their officers each quarter, under a new government system for counting killings by police that was influenced by the Guardian.
Announcing a new program for documenting all “arrest-related deaths”, federal officials said they would actively work to confirm fatal cases seen in media reports and other open sources rather than wait for departments to report them voluntarily.
This still lets local PDs off the hook in terms of immediate self-reporting. But that's probably ok, as there's nothing in the reporting of deaths at the hands of police officers that encourages urgency or transparency from law enforcement agencies. With the feds independently verifying reported deaths -- i.e., those reported by journalists -- delays between reports and their addition to the federal numbers will be decreased dramatically.
Law enforcement agencies aren't completely off the hook, however. They'll still be required to report in-custody deaths to the Justice Department. The difference is that the DOJ will no longer wait around for agencies to self-report. Local agencies heavily reliant on federal funding will probably be the agencies filling out these reports the fastest.
In their Federal Register article, officials cited their authority under the death in custody reporting act – a law that states local departments must report all deaths in custody to the justice department or lose 10% of their federal funding. The law has been largely ignored since being reauthorized in December 2014.
The other change of note is that this will no longer be a year-end tabulation after all the self-reporting is completed. Agencies can fill out one form for 2016's total deaths, but going forward will be required to hand these in quarterly.
Agencies will also be responsible for collecting a lot of data they've never had to previously. Details about the deadly incident will need to be provided, along with demographic data on the deceased. Coroners and medical examiners serving law enforcement agencies will also need to turn over information to the government and will be asked to confirm local news reports on officer-involved deaths.
This is a huge step forward for a federal agency that has long relied on voluntary reporting from compliant law enforcement agencies to tabulate the use of deadly force by officers. It's a sign that the federal government finally realizes the good people in law enforcement can't be relied on to voluntarily hand over data on incidents that make them look less than perfect. Targeting federal funding is a smart move because that's the sort of money that gets spent on surveillance tools and 1033 acquisitions that agencies normally couldn't afford without it.
The real test will come when it's implemented, as it often takes more than federal mandates to alter entrenched cultures where accountability and transparency are considered weaknesses.
by Glyn Moody
Mon, Aug 8th 2016 11:23pm
Medical Researchers Want Up To Five Years Exclusivity For Clinical Trial Data Derived From Volunteers
from the papers-before-patients dept
A year ago, we wrote about how TPP's requirement for "data exclusivity" risked undermining one of science's fundamental principles: that facts cannot be owned. Data exclusivity is just the latest attempt by Big Pharma to extend its monopoly over drugs, whether using patents or other means. To a certain extent, you might expect that: after all, companies are designed to maximize profits, and if it means more people suffer or die along the way, well, that's regrettable but sort of beside the point. However, it's surprising to see a group of medical researchers writing in the prestigious New England Journal of Medicine (NEJM) calling for just the same kind of data exclusivity. The post is in response to an earlier NEJM article by the International Committee of Medical Journal Editors (ICMJE), entitled "Sharing Clinical Trial Data":
As a condition of consideration for publication of a clinical trial report in our member journals, the ICMJE proposes to require authors to share with others the deidentified individual-patient data (IPD) underlying the results presented in the article (including tables, figures, and appendices or supplementary material) no later than 6 months after publication.
Reasonable enough, you might think. But in the new commentary from the International Consortium of Investigators for Fairness in Trial Data Sharing -- which doesn't seem to have any online presence currently -- a group of "282 investigators in 33 countries" (pdf) beg to differ:
Although we believe there are potential benefits to sharing data (e.g., occasional new discoveries), we believe there are also risks (e.g., misleading or inaccurate analyses and analyses aimed at unfairly discrediting or undermining the original publication) and opportunity costs (e.g., the ICMJE proposal would have enormous direct costs and would probably divert resources, both financial and human, from the actual conduct of trials).
It's rather telling that the new discoveries that arise from research are dismissed as "occasional," while the rather weird concern about "discrediting or undermining the original publication" is put forward as if it were a major problem in the field. The International Consortium of Investigators for Fairness in Trial Data Sharing has a few suggestions for what should be done instead of the ICMJE proposal:
The timeline for providing deidentified individual patient data should allow a minimum of 2 years after the first publication of the results and an additional 6 months for every year required to complete the study, up to a maximum of 5 years.
Five years' data exclusivity takes us into TPP territory. And then there's this:
Persons who were not involved in an investigator-initiated trial but want access to the data should financially compensate the original investigators for their efforts and investments in the trial and the costs of making the data available.
You may have thought research was about winning new knowledge and willingly sharing it with your peers, but in fact it's about money. Actually, what is most shocking about the International Consortium of Investigators for Fairness in Trial Data Sharing's opinion piece is not what it says, but what it doesn't say. The whole thrust of the piece is what a tough life researchers have:
To complete an RCT [randomized, controlled trial], investigators must develop a protocol, obtain funding, overcome regulatory and bureaucratic challenges, recruit and follow participants, undertake analyses, and publish the results. This process takes several years, and for large clinical trials it can sometimes take a decade or longer. Adequate incentives for researchers to invest the substantial time and effort required to conduct RCTs and to publish the results in a timely fashion are important.
But at least it's a noble struggle, you might think, since this is all done for the patients' benefit. Or maybe not:
A key motivation for investigators to conduct RCTs is the ability to publish not only the primary trial report, but also major secondary articles based on the trial data.
It's all about those career-enhancing publications, apparently. But it's not just the patients who are missing from the International Consortium of Investigators for Fairness in Trial Data Sharing's worldview. They are also ignoring an absolutely indispensable aspect of clinical trials. It's so important that the ICMJE's article begins by acknowledging it in the first sentence:
The International Committee of Medical Journal Editors (ICMJE) believes that there is an ethical obligation to responsibly share data generated by interventional clinical trials because participants have put themselves at risk.
This is why the data that results from those clinical trials must be shared as soon as possible: because members of the public who volunteer to take part in them have literally risked their lives in order to benefit others. The idea that this data should be hoarded by the researchers for up to five years just so that they can squeeze out a few more articles that look good on their CV is profoundly insulting to the participants and their unsung selflessness.
by Mike Masnick
Mon, Aug 8th 2016 3:38am
from the short-sighted-in-the-extreme dept
Over in Australia, they've apparently got some other ideas in mind. Late last year, the Australian Bureau of Statistics announced that for this year's census it would, for the first time, retain all the names and addresses it collected. This has raised some pretty serious concerns, and prompted some fairly weak reassurances from the government. Prime Minister Malcolm Turnbull has announced that no one should worry because the government always protects people's privacy. No, really.
Mr Turnbull said on Wednesday the organisation "always protects people's privacy".

"The security of their personal details is absolute and that is protected by law and by practice," he said.

"That is a given."

Anyone claiming that the security of any system "is absolute" has no fucking clue about security. There is no such thing as absolute security, and saying so probably does more to entice hackers to try to break in than anything else. The comments from the ABS's chief statistician are not any more comforting. When asked about security, he went with a Trumpian response of "we have the best security features."

"The ABS has the best security features," he said.

"We've never had a privacy breach with Census information and we do secure the information somewhat differently … These days we can keep names separate from address and separate from other Census content, in three separate computer systems and never brought together."

When asked if he believed this year's Census had been handled poorly, Mr Kalisch responded that "we're well ahead of where we thought we would be".

Making matters even worse, over the weekend, it was revealed that the ABS actually had plans to crossmatch people's data with other government services, and do other things with it -- which is exactly what a large part of the concerns were about.

“Retention of personal identifiers could improve the value of census data through data integration and linking, which would enable new products,’’ the document, released under freedom of information laws, stated.

The same document notes that there may be some "public backlash" to all of this "which would need to be carefully managed."
So far, they're not doing a very good job managing anything. The privacy and security concerns are growing rapidly, and people are speaking out on why they're willing to face fines and punishment by refusing to fill out the census -- even those who strongly support the idea of the census. This post from the former Deputy Privacy Commissioner, Anna Johnston, is well worth a read:

The definition of ‘census’ is “an official count”. I actually want to stand up and be counted. But only counted; not named or profiled or data-matched or data-linked, or anything else. The privacy risks of doing anything else are just too great.

I have thought about just refusing to provide my name. But even if I don’t give my name, if the ABS is determined to link my Census data with other datasets, there would be enough other information in my Census answers (sex, age, home address, previous home address, work address) to let them proceed regardless. It won’t be enough to protect my privacy.

There's a lot more in Johnston's post that is worth reading, including just how ridiculous the privacy promises are, and even an analogy of how the ABS is acting "like a very, very bad boyfriend" who "keeps on breaking promises, pushing boundaries and disappointing you."
As for the security assurances, beyond just being ludicrous in claiming "absolute" security, there are already some pretty serious concerns. First of all, can you really claim that your security is "absolute" when you're storing passwords in plaintext? I don't think so -- but that's apparently what the ABS is doing with census passwords.
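For anyone wondering why plaintext password storage is disqualifying, the standard alternative is to store only a salted, slow-to-compute hash, so that even a full database leak doesn't expose the passwords themselves. Here's a minimal sketch using Python's standard library (the function names and iteration count are illustrative assumptions, not anything the ABS actually uses):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to hinder brute-force attacks

def hash_password(password: str, salt: bytes = None):
    """Derive a salted PBKDF2 hash; only (salt, digest) is ever stored."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash and compare in constant time; the plaintext is never kept."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("census-login-secret")
print(verify_password("census-login-secret", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))          # False
```

The point isn't this particular recipe -- it's that "best security features" and storing recoverable plaintext passwords are mutually exclusive claims.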
On top of that, some are already finding that their older computers are apparently unable to handle the census site. If the goal is to collect information on everyone, perhaps you should design a simple system that doesn't require a modern computer. Given all of this, is it any wonder that some expect the census data to be "inevitably leaked"?
And, now, because of this mess, plenty of people say they're simply not going to respond to the census. And while the Australian government may try to crack down on such behavior, in the end, it's going to call the accuracy of the census into question. So in their quest to expand the power of the census, the ABS may have accomplished the exact opposite.

* Special thanks to Australian journalist/privacy activist Asher Wolf for helping me go through some of the details on this story.
by Mike Masnick
Mon, Jul 25th 2016 2:42pm
[Updated] Wikileaks Leak Of Turkish Emails Reveals Private Details; Raises Ethical Questions; Or Not...
from the whoo-boy dept
The files were originally obtained by Phineas Fisher, who was the source. As far as I can tell, Fisher did not intend to dump all of the files publicly, and has not indicated that he meant to give any of them to Wikileaks to publish. However, Wikileaks received a partial set of the documents and decided to publish them.

Of course, in the meantime, there's been a lot of nastiness, with Wikileaks and its supporters unfairly claiming that Zeynep Tufekci was an agent for the Erdogan government -- which is insane if you know her at all. As Best notes in his piece, it's entirely reasonable that Tufekci assumed Wikileaks was responsible for the files. She only accused them, accurately, of promoting the files, not uploading or hosting them -- and they did, in fact, tweet a link to the files as well as post it to Facebook. And while Wikileaks may be on the defensive about other claims about its leaks, it didn't need to attack her credibility in the process.
Following the WikiLeaks release of the partial set, Fisher decided to release his set. Since the files came from a known source (Fisher has been responsible for many high profile hacks, including the hack on the Hacking Team), I used the torrent file that the files were released through to create a bittorrent instance on the Internet Archive’s server. The server proceeded to download the torrent and create the item that was linked to by WikiLeaks.
After the personal information was discovered, the AKP files were removed from the Internet Archive’s server.
Although I wasn’t aware that it was included in the release at the time, I accept my responsibility in distributing the personal information. The explanation as to how it happened is not an excuse for the fact that it did happen.
Update 2: In response to our update, Zeynep Tufekci has sent over the following quote, noting that she still has concerns about how Wikileaks handled this:
"Wikileaks has never clarified that the emails it hosts are almost entirely mundane emails of ordinary citizens and revealed nothing of public interest after days of intense combing (though there were privacy violations there as well), and it has never apologized for the fact that the databases that it repeatedly, and via multiple channels, pointed to its millions of followers as full data of "our AKP emails" (they weren't) and "more" actually contained private and sensitive information of tens of millions of people in Turkey, including more than 20 million women. I never claimed that they hosted; I was agnostic on that point so none of the substantive discussions revolves around who hosted them. However, I'm glad the person who uploaded them has come forward to apologize, and learn from this. I hope the broader hacker community also reflects on this, and realizes that rushing, jumping on news cycles, dumping data indiscriminately, uploading stuff you do not know, working in a language you do not understand with no local contacts, and then accusing your critics of being government shills without the slightest attempt at research is not okay."

And... original article below.
Last week, we (like many others) reported on the news that Turkey was blocking access to Wikileaks, after the site released approximately 300,000 emails, supposedly from the Turkish government. We've long been defenders of Wikileaks as a media organization, and its right to publish various leaks that it gets. However, Zeynep Tufekci, who has long been a vocal critic of the Turkish government (and deeply engaged in issues involving the internet as a platform for speech) is noting that the leak wasn't quite what Wikileaks claimed it was -- and, in fact appears to have revealed a ton of private info on Turkish citizens.
Yes -- this "leak" actually contains spreadsheets of private, sensitive information of what appears to be every female voter in 79 out of 81 provinces in Turkey, including their home addresses and other private information, sometimes including their cellphone numbers. If these women are members of Erdogan's ruling Justice and Development Party (known as the AKP), the dumped files also contain their Turkish citizenship ID, which increases the risk to them as the ID is used in practicing a range of basic rights and accessing services. I've gone through the files myself. The Istanbul file alone contains more than a million women's private information, and there are 79 files, with most including information of many hundreds of thousands of women.

What's not in the leak, apparently, is anything really about Erdogan's government:
According to the collective searching capacity of long-term activists and journalists in Turkey, none of the "Erdogan emails" appear to be emails actually from Erdogan or his inner circle. Nobody seems to be able to find a smoking gun exposing people in positions of power and responsibility. This doesn't rule out something eventually emerging, but there have been several days of extensive searching.

At the very least, this does raise some ethical questions. In the past, Wikileaks has (contrary to what some believe!) actually been pretty good about redacting and hiding truly sensitive information that isn't particularly newsworthy. It's possible that this is just a slip-up. Or it's possible that Wikileaks got lazy. Or it's possible that the organization doesn't care enough to go through what it gets in some cases. [Update: Or, see the update above, where we discover it was a third party that uploaded this data, which then got associated with the Wikileaks release after Wikileaks tweeted a link to it.]
I still think that the organization has every right to release what it gets, but it should also be open to criticism and people raising ethics questions about what it has chosen to release. The fact that it appears to have failed to consider some of the questions in this case, and then possibly overplayed the story of what was in this release, is certainly concerning, and harms Wikileaks' credibility. [Update: so, this was a mistake, though it's unfortunate that Wikileaks then lashed out at Tufekci and others, making additional baseless claims. Yes, it was wrongly accused, but that's no reason to wrongly accuse others as well.]