Posted on Techdirt - 8 February 2016 @ 3:22am
Because citizens are localized but their data isn't, things aren't going to get any less weird as time progresses. Or any less legally troublesome. Ellen Nakashima and Andrea Petersen of the Washington Post have seen a copy of a draft negotiating document between UK and US representatives that would allow MI5 (and presumably other agencies) to access data and communications held on US servers.
The transatlantic allies have quietly begun negotiations this month on an agreement that would enable the British government to serve wiretap orders directly on U.S. communication firms for live intercepts in criminal and national security investigations involving its own citizens. Britain would also be able to serve orders to obtain stored data, such as emails.
UK agencies would still be locked out of obtaining information or data on US persons and it would take legislation to actually make this access a reality, but it's apparently being considered, as UK officials feel this issue is standing in the way of investigations/counterterrorism efforts.
As it stands now, UK agencies must make formal diplomatic requests which rely on a Mutual Legal Assistance Treaty -- a process that can take months. That's not good enough, apparently. Everyone wants instant access, including UK agencies, and a strong streak of entitlement (the same entitlement guiding FBI director James Comey's one-sided "debate" on encryption) runs through the arguments for this expansion of the UK's legal powers.
“Why should they have to do that?” said the administration official. “Why can’t they investigate crimes in the U.K., involving U.K. nationals under their own laws, regardless of the fact that the data happens to be on a server overseas?”
Why indeed? Why comply with existing laws or territorial restrictions? After all, the FBI is working toward the same end, pushing for the right to hack servers located anywhere in the world when pursuing criminals.
Several issues need to be addressed before UK agencies can be granted permission to demand communications and data from US companies. For one thing, a warrant issued in the UK is not exactly the same thing as a warrant issued in the US. The legal standards may be similar, but they're still a long way from identical.
The negotiating text was silent on the legal standard the British government must meet to obtain a wiretap order or a search warrant for stored data. Its system does not require a judge to approve search and wiretap warrants for surveillance based on probable cause, as is done in the United States. Instead, the home secretary, who oversees police and internal affairs, approves the warrant if that cabinet member finds that it is “necessary” for national security or to prevent serious crime and that it is “proportionate” to the intrusion.
Note the "silence" on the differences between the legal standards. It appears no one involved in this discussion is interested in digging into these disparities.
A second administration official said that U.S. officials have concluded that Britain “already [has] strong substantive and procedural protections for privacy.” He added: “They may not be word for word exactly what ours are, but they are equivalent in the sense of being robust protections.”
As a result, he said, Britain’s legal standards are not at issue in the talks. “We are not weighing into legal process standards in the U.K., no more than we would want the U.K. to weigh in on what our orders look like,” he said.
That's great. Both countries won't examine each other's legal standards because they don't want to upset the reciprocity implicit in the draft agreement. The UK can ask for stuff from US companies and vice versa, with neither country playing by the other country's rules. In between all of this are citizens of each respective country, whose data and communications might be subjected to varying legal standards -- not based on where the data is held, but who's asking for it.
Of course, the alternatives are just as problematic. If an agreement like this fails to cohere, overseas governments will likely demand data and communications generated by their citizens be stored locally, where they would be subject only to local standards.
Then there's the question of what information these agencies already have access to, thanks to the surveillance partnership between the NSA and GCHQ. Although neither agency is supposed to be focused on domestic surveillance (though both participate to some extent), the NSA is allowed to "tip" domestic data to the FBI for law enforcement purposes. Presumably, GCHQ can do the same with MI5. The tipped info may not be as comprehensive as what could be obtained by approaching a provider directly, but it's certainly more than the black hole the current situation is being portrayed as. (Especially considering GCHQ already has permission to break into any computer system located anywhere in the world...)
No matter what conclusion the parties come to, legislation addressing it is likely still several months away, if it ever coheres at all. Congress -- despite its occasional lapses into terrorist-related idiocy -- is likely not interested in subjecting US companies to foreign laws, no matter the stated reason for doing so. But if it doesn't oblige the UK (and others who will jump on the all-access bandwagon), it's safe to assume the British government will move towards forcing US companies to set up local servers and segregating communications and data by country of origin.
29 Comments | Leave a Comment..
Posted on Techdirt - 5 February 2016 @ 12:46pm
The state of Maryland's defense of the Baltimore PD's warrantless use of Stingray devices continues, taking the form of a series of motions unofficially titled Things People Should Know About Their Cell Phones.
The last brief it filed in this criminal prosecution claimed "everyone knows" phones generate location data, therefore there's no expectation of privacy in this information. As commenters pointed out, people may know lots of stuff about records they're generating, but that doesn't mean law enforcement should have warrantless access to those records.
Everyone Knows… That my Doctors generate medical data about patients, so how about we get their medical records on public display without warrants!
With no expectation of privacy, there's no need for a warrant. And with no warrant requirement, there's no chance of having evidence tossed. That's a win Maryland needs, considering the Baltimore PD alone has deployed IMSI catchers several thousand times without obtaining warrants. Everything runs through pen register orders, which both lower the burden of proof and (in many cases) obscure the technology actually being used.
Now, it's back with its response to the defendant's motion to dismiss and it's again claiming People Know Stuff, therefore no expectation of privacy. (h/t Brad Heath)
After dismissing the defendant's arguments about police use of location tracking devices as "dystopian fantasies," the state argues it's time for the accused (not just this one, but any others facing prosecutions predicated on warrantless use of cell phone tracking devices) to stop pretending they don't know how much data their phones are coughing up.
While cell phones are ubiquitous, they all come with "off" switches. If a cell phone is turned on, it is receiving signals from cell towers, and sending signals back out to cell towers. The cell site simulator used in this case took advantage of that fact in order to locate Andrews's phone. Because Andrews chose to keep his cell phone on, he was voluntarily sharing the location of his cell phone with third parties. Under the doctrine set forth by the Supreme Court in Smith, supra, he cannot claim a Fourth Amendment privacy right in this case.
The "Smith" the state refers to is 1979's Smith v. Maryland, which law enforcement loves to use in cell phone surveillance cases, because:
a) it's incredibly outdated, and
b) it provides a very broad and favorable reading of the Third Party Doctrine
as it relates to phone usage.
The state says it's the defendant's own fault he was located. After all, he had a choice. And he chose badly.
Andrews complains that the police "invaded" a "constitutionally protected area," and therefore this search triggered Fourth Amendment protections under United States v. Karo, 468 U.S. 705 (1984) and Kyllo v. United States, 533 U.S. 27 (2001). But in Karo, the suspect was unaware that he had brought a police transponder into his home, and in Kyllo, the suspect was unable to prevent grow-lights (or his body) from emitting heat. Andrews, by contrast, was quite aware that he was bringing his own cell phone into the house. And he was quite capable of turning it off.
The government's argument is technically solid when used in conjunction with these precedent-setting decisions (Smith's outdated view of phones notwithstanding), but it becomes completely disingenuous when it describes the "sharing" of identifying phone data.
Just as the telephone company in Smith used transmitted phone numbers in a way quite distinct from the way in which the police used them, so, too, Andrews's cell service provider used the ID number broadcast by his cell phone in ways quite distinct from the way in which the police used it. The way in which the information was used does not alter the "expectation of privacy" in the information itself. Smith controls here. Andrews's addition of the adjective "exact" to the noun "location" does not alter that fact. The issue is not whether Andrews was aware that the police could find the location of his cell phone to within 20 yards. The issue is whether Andrews can claim an objectively reasonable expectation of privacy in information which he was voluntarily broadcasting to third parties at all times. Under Smith, the answer is no.
There is no Fourth Amendment right to evade a valid arrest warrant. Andrews was wanted on multiple counts of attempted murder. A life "on the lam" may require some inconveniences, such as not staying in one's home, and turning one's cell phone off when not in use. There is no constitutional right to avoid being arrested for one's crimes, and nothing unreasonable about the police using the same information that Andrews was sharing with the rest of the world to apprehend him.
The "rest of the world?" Really? Andrews may have been able to talk his cell phone provider into turning over a copy of all the data his phone had generated, but it's not as though the general public has access to this information, expectation of privacy or no. Just because law enforcement can access this information with warrants or (more likely) pen register orders does not make it information "shared" with "the rest of the world." It is not shared indiscriminately, and it's only because cell providers are legally compelled to cooperate with law enforcement (CALEA, etc.) that cops can obtain this information with a pen register order, rather than a warrant.
And, in this case, the information was not obtained with a court order. There may be a court order on record that would give the impression the BPD would approach a telco for phone records, but the actual collection of Andrews' location info was done with a Hailstorm cell tower spoofer. The state claims the request specified the use of a cell tower spoofer, but there's no indication the presiding judge had any idea how much information these devices can obtain. A pen register order refers to a targeted phone number. A cell tower simulator gathers information from everyone in the area.
This isn't just a fight over this particular prosecution. This is the state safeguarding its thousands of Stingray deployments. If it's going to be able to keep those prosecutions from falling apart -- now that the BPD's devices are an open secret -- it needs the court to agree there's no expectation of privacy in cell phone location data. And in order to do that, it apparently needs the court to believe everyone using a cell phone is sharing all sorts of information with "the rest of the world."
Read More | 48 Comments | Leave a Comment..
Posted on Techdirt - 5 February 2016 @ 11:39am
More sexting stupidity, this time in Michigan.
A Three Rivers, Michigan, teenager is both the victim and perpetrator of a sex crime. He might land on the sex offender registry, and face criminal charges, all because he took an inappropriate photo—of himself.
The boy is unnamed in local news reports, which note that he is under 15 years of age. He allegedly took a nude photo of himself on a girl’s cell phone. That girl sent the picture to another girl, who sent it to another. Preliminary charges are pending for all three—the boy was charged with manufacturing child porn, and the girls with distributing it. A prosecutor is still weighing whether to pursue the charges.
Hopefully, the prosecutor will realize that pursuing the suggested charges could ruin a few teens' lives. The police detective working the case seems to want to destroy these kids' lives… for the good of other teens, or something.
Police Detective Mike Mohney told WBST.com that sexting is a serious crime because it leads to “bullying,” and “real severe things like people committing suicide or violent crimes against others because they're so embarrassed about it.”
As Reason's Robby Soave points out, Detective Mohney is a walking contradiction. Apparently, it's never occurred to him that bringing child porn charges against these young teens might result in bullying and suicide. Nothing makes the future look dim and hopeless like a long stint on the sex offender registry. Nothing destroys someone's reputation faster than being listed alongside criminals who manufactured actual child porn, rather than just took a photo of their own adolescent body.
For that matter, the preliminary charges make this teen's decision to photograph his own body and send it to another teen a far worse crime than if he'd simply showed up at the girl's house, stripped off his clothes and proceeded to engage in sexual activity with her.
Taking off his clothes at her house would have been nothing more than indecent exposure, a misdemeanor. More importantly, unless the person has been convicted of other sex-related crimes, there's no sex offender registration tied to the charge.
Even if he'd pursued sexual contact with the other teen, it still would have been a better outcome than being branded a child pornographer. Michigan has no "Romeo and Juliet" law, so any contact between teens -- no matter their closeness in age -- could trigger statutory rape charges. (Obviously, if the sexual activity was not consensual, this would be actual rape, but there's no reason to believe a [possibly] unsolicited naked photo rises to the level of aggravated sexual assault.)
If the activity was consensual, the worst charge would be statutory rape, which does not require sex offender registration for teens:
[P]eople who are convicted of criminal sexual conduct based on consensual sexual conduct with children over the age of 13 who are not more than four years older than their victims are not required to register.
And, if the sexual contact contained no penetration, no criminal charges would be brought at all:
[A] 17-year-old who engages in consensual petting with a 14-year-old could not be prosecuted for a crime. However, if the parties engaged in oral sex, the 17-year-old could face prosecution.
So, this so-very-concerned detective has taken a digital photo -- taken by a teen of his own body -- and turned it into something worse than actual in-person nudity and/or sexual contact. That's a pretty fucked up way to show concern for sexting teens. Treating photos taken by minors and distributed to other minors as child porn is the worst possible way to handle a situation that, in all reality, should be left to the discretion of the teens' parents.
67 Comments | Leave a Comment..
Posted on Techdirt - 5 February 2016 @ 10:32am
Nothing pushes a negative review of your product out of the public eye faster than a lawsuit, am I right? That's the line of thinking Enigma Software has chosen to entertain. It recently filed a lawsuit against BleepingComputer, alleging that its 2014 "review" (actually a forum post detailing Enigma's SpyHunter history as "rogue" software and the deceptive business practices the company has deployed) is defamatory.
What would seem to be a mixture of opinion and fact-based assumptions (backed by links to other sources) is portrayed by Enigma as a malicious attempt by BleepingComputer to damage its reputation so the site can push readers to affiliate partners and advertisers.
Enigma Software claims in its lawsuit that BleepingComputer has the negative SpyHunter review because it takes part in an affiliate advertising program which grants BleepingComputer a commission for redirecting users to Malwarebyte’s site. The Enigma Software Group claims, “Bleeping not only has unlawfully benefited from its smear campaign to the detriment of ESG, it has damaged the reputation of ESG by refusing to take down its false and misleading statements which have been reposted numerous times on other anti-spyware related forums and websites.”
Other computer security sites have already leapt to BleepingComputer's defense. Malwarebytes has donated $5,000 to the site's legal fees and points out that BleepingComputer is not some fly-by-night operation that solely acts as a funnel to preferred vendors.
The content is provided by the volunteer efforts of security professionals and the more than 700,000 registered users who ask and answer all questions presented on the site. To summarize, Bleeping Computer is a valuable resource in the efforts to help users live in a malware free world.
Over at CSO's Salted Hash, Steve Ragan points out the reputation Enigma claims BleepingComputer is destroying has already been severely damaged by the company's own actions over the years.
[T]he lawsuit says, "Bleeping has a direct financial interest in driving traffic and sales to Malwarebytes and driving traffic and sales away from ESG."
While that claim is true at face value, the affiliate programs used by Bleeping Computer help keep the website online and they use affiliate links for a number of vendors, not just Malwarebytes.
Also, most of the comments that are critical of Enigma Software and SpyHunter exist because the company has gained a bad reputation over the years due to spam, as well as questionable detection rates.
Ragan then runs down Enigma's history, including the high number of refunds it's had to hand out to maintain its A+ BBB rating
, as well as the years it spent being blacklisted as a security risk by respected anti-virus firms.
He also notes, as BleepingComputer did in its disputed forum post, that SpyHunter has never been classified as malware or targeted for removal by competing anti-virus products, but that's apparently largely due to Enigma's past litigious efforts, rather than Enigma dropping the more questionable "features" of its product -- like automatic renewals, suspicious scan results and its "pay-to-clean" pricing. (The scan is free. The removal requires a six-month subscription, which will be automatically renewed by Enigma in perpetuity unless otherwise instructed.)
The lawsuit is already off on the wrong foot, what with it clearly being filed solely to shut down criticism. While Enigma may find New York's lack of a universal anti-SLAPP statute useful (the current version only protects speech related to the discussion of public permits, and even then, it only protects certain people [bloggers, non-traditional journalists] from SLAPP lawsuits brought by government entities), it's now facing Marc Randazza, who has taken up BleepingComputer's defense.
Adding to this is the fact that the specific statements Enigma claims are false and defamatory aren't even directly quoted from the posted review. They're rephrased to put words in the mouth of the forum moderator who posted it. This low-level deception might have made sense if Enigma hadn't included a screenshot of the post it's misquoting as an exhibit in the filing.
Here are Enigma's claims, followed by the actual wording used by BleepingComputer.
In these posts, Bleeping makes the following assertions falsely and without any reasonable basis to believe that the statements were true when made:
ES: That SpyHunter 4 or ESG engage in "deceptive advertising which violates several consumer protection laws in many states";
[The "quoted" statement does not actually appear in this post, or in any of the ones following it in the thread.]
ES: That SpyHunter 4 or ESG has a "history of employing aggressive and deceptive advertising";
BC: SpyHunter by Enigma Software Group USA, LLC is a program that was previously listed as a rogue product on the Rogue/Suspect Anti-Spyware Products List because of the company's history of employing aggressive and deceptive advertising.
[This claim is backed up by a footnote linking to an outside source that reinforces BC's claim.]
ES: That SpyHunter 4 is a "rogue product";
BC: SpyHunter by Enigma Software Group USA, LLC is a program that was previously listed as a rogue product on the Rogue/Suspect Anti-Spyware Products List…
BC: SpyHunter is not classified as malware or rogue security software and other antivirus and antimalware vendors do not target it for removal.
ES: That SpyHunter 4 or ESG have not cooperated in submitting their program for testing "most likely due to the program's ineffectiveness and high rate of false positives?";
[Again, this "quoted" phrase does not appear in the post, or in any of the moderator's posts in the same thread. The moderator notes it has not been tested by other AV firms to determine its effectiveness, but does not make any related claim about false positives or ineffectiveness. The closest thing to it is this sentence, which is clearly an opinion.]
In my opinion SpyHunter is a dubious program with a high rate of false positives.
[This is backed up by a link to supporting information from an outside source.]
ES: That SpyHunter 4 or ESG engage in deceptive pricing;
BC: While there are mixed reviews for SpyHunter, some good and some bad, my main concern is the reports by customers of deceptive pricing, continued demands for payment after requesting a refund, lack of adequate customer support, removal (uninstall) problems and various other issues with their computer as a result of using this product. For example, some users are not aware that when purchasing SpyHunter, they have agreed to a subscription service with an automatic renewal policy.
[Again, these statements are supported by links to information sources. The addition of "my main concern" clearly shows the moderator is making a statement of opinion based on available information. And the connecting phrase "reports by customers" makes it clear he's making an inference based on statements by others.]
ES: That most users of SpyHunter 4 "are not aware that when purchasing SpyHunter, they have agreed to a subscription service with an automatic renewal policy"; and
[See the above quote and note, again, that multiple links in the review direct readers to outside sites backing up this statement, like the numerous complaints about this practice found at ComplaintsBoard and the Better Business Bureau.]
ES: That SpyHunter 4 is "malware" or "rogue security software" despite not being classified as such by security vendors.
BC: SpyHunter by Enigma Software Group USA, LLC is a program that was previously listed as a rogue product…
BC: SpyHunter is not classified as malware or rogue security software and other antivirus and antimalware vendors do not target it for removal.
[These two directly contradict the assertion being made by Enigma in its lawsuit. The author of the post never states that SpyHunter is "malware" or "rogue security software."]
Enigma doesn't have much of a case. But it has just enough of one to be troublesome. It's forced others to bend to its will in the past by aggressively litigating, and it can drain BleepingComputer of time, energy and money just by forcing it to defend itself from ridiculous claims.
Read More | 18 Comments | Leave a Comment..
Posted on Techdirt - 5 February 2016 @ 6:23am
According to a recently-filed lawsuit, the media is apparently every bit as "helpful" as law enforcement when it comes to the responsible, logical handling of teens and sexting. Confusing "hurting" with "helping," Colorado's KOAA allegedly exposed not only the name of a teen involved in a sexting incident, but also the part that puts the "sex" in "sexting."
The station, KOAA TV, aired footage of the boy’s erect penis during a news report that was put together after his father’s girlfriend approached producers about an alleged blackmail attempt, according to a complaint filed Friday in U.S. District Court.
Producers were told on Feb. 24 by the woman that someone had tried to blackmail the teen, now 16, using sexually explicit material. That same day they arrived at the family house in Pueblo, Colorado to investigate the claims and interview the boy’s father, Elijah Holden. While on assignment, the suit alleges that the news team collected screenshots from the teen’s Facebook page, as well as images from the YouTube page where the blackmail video had been uploaded, to be used in their coverage.
The plaintiff and his father both asked that the name “be kept confidential through any report presented by Defendant KOAA,” attorney Matthew Schneider said in the filing.
Since law enforcement largely seems to feel sexting = child porn, the station should have found itself under investigation for distributing child porn. Instead, the only negative result of its allegedly terrible editorial practices so far is Holden's lawsuit.
Holden is seeking damages related to the outing of his name and sexual organs, with damages sought clearing the $1 million mark. In its defense, the station had this to say:
“Through a series of stories during the last several years, KOAA has informed its viewers about the dangers of sexting and cell phone security,” KOAA president and general manager Evan Pappas said in a statement to Courthouse News, where the suit was first reported on Tuesday this week. “At the specific request of the victim’s father, we ran a story two years ago about his son being blackmailed over a cellphone video.”
Well, I guess nothing better illustrates the dangers of sexting more than irresponsibly splashing a minor's name and penis all over the TV screen. Of course, considering these were tied to blackmail allegations by an adult, it would seem more -- much more -- discretion would have been in order. Instead, the TV station went the other way, displaying the name of the minor involved over a screen cap of his penis and topped it off by dragging his social circle into the mess.
The station claims the allegations are unsubstantiated, but there's really no excuse for using a minor's name -- even if the guardian gave permission to the news outlet to do so. But going past that, how does the station hope to explain its use of an explicit photo of a minor in a publicly-broadcast news report? According to the lawsuit, something that could be considered child pornography somehow made its way past internal censors and ended up on the evening news.
Defendant KOAA aired the thumbnail image of the YouTube video depicting Plaintiff's erect penis and his name as a part of the story shown on February 24th 2014.
While journalists have played an important part in exposing ridiculous prosecutions of sexting teens, there's no denying the lurid nature of the subject matter is also beneficial to the entities covering the stories. The implicit suggestion that YOUNG NAKED TEENS lie just beyond the next commercial break attracts additional viewers. This additional motivator might explain the apparent lack of discretion on the part of KOAA.
As of now, what we have is a news agency that claims it broadcasts these stories to educate the public on the dangers of sexting while apparently feeling compelled to drive that point home through its own actions.
Read More | 51 Comments | Leave a Comment..
Posted on Techdirt - 5 February 2016 @ 3:21am
Nothing does more damage more quickly to your community than deciding to place your fear of piracy over the concerns of those who've already paid for your product. DRM is rarely, if ever, the answer. And yet, it remains an inexplicably popular "solution."
Daz 3D, which produces 3D art software as well as assets for use with third party software, has decided to do something about its perceived piracy problem. Last November, it had this to say:
[W]e feel the best way to fight piracy is make the convenience of doing something legally more so than the inconvenience of pirating. That is why we made finding, downloading, installing, and loading content in Studio as streamlined and easy as possible while making getting a pirate-able copy of the original product harder.
The solution to the problem, according to Daz 3D, was to have the software "phone home" at least once to obtain a key for content/software files, which would only arrive in encrypted form. Supposedly, this would be limited to once per computer but the new, encrypted files would pose problems for existing users.
Those on older versions of Daz's software would be unable to access any new content. Transferring old data could also result in problems -- something Daz acknowledged in a later post, noting that existing scripts and tools might not work with the new encrypted content.
At the time of the announcement, no plan was in place to provide offline users with authentication keys, nor would it be possible to purchase new content without running through Daz's "Connect" service, which not only authenticates users but "assembles" newly purchased content for use with Daz software.
Daz did the right thing and put its proposals up for discussion. This generated dozens of pages of comments, many of which were from users opposed to the addition of DRM. Some were concerned about the Daz Connect DRM breaking content they'd already paid for. Others simply didn't like being treated like pirates when they'd actually paid for software and add-ons.
Daz's representatives were active in forum discussions and very straightforward about their reasons for looking into instituting DRM. The company hopes a few extra installation hoops and another layer of authentication will deter casual pirates, leaving only the diehard crackers interested in "capturing" a niche "market."
The willingness to listen and participate in the discussion separates Daz from many other companies who've added DRM to their products. Unfortunately, it appears the discussion had little effect on Daz's final decision. The post may be titled "You've been heard," but its content indicates the listening was little more than a formality. Daz will be moving ahead with its original plan, despite customers making it clear they'd rather have a product that doesn't introduce compatibility problems. Nor do they want to be limited to a single distribution system. And they're less than thrilled about the "phone home" requirement.
The new post, delivered four months after the original announcement, changes nothing about the DRM structure. While it does add some fail-safe measures (like third-party escrow that will prevent users from being locked out of their purchases if Daz goes out of business), the end result is still the same. DRM is coming to Daz and there's nothing users can do about it.
Currently Daz Connect gives customers the ability to install (among other things) encrypted content. Daz Connect also lets customers retrieve a Key to decrypt their content. Customers have raised the concerns of:
What if Daz is not available to provide the keys anymore, chooses not to, or starts charging an additional fee to get a key for previously purchased content?
Solution: We have developed and fully tested a utility which will decrypt, and save in non-encrypted formats, Daz products on a customer’s computer. We are also working out details with a software escrow company who will provide this utility to the public free of charge in the event that Daz is no longer in a business position to, or is unwilling to continue offering this as a free service. This will also be added to the Daz EULA to ensure customers of our commitment to enable them to always be able to use content that they have purchased a license for.
Obviously this does not address other issues such as scripts and tools that work on un-encrypted content. But those are solved in other ways. We are working (and will continue to work) with developers who have this need, in order to show them how to do it with encrypted content.
Apparently, "hearing" actually means ignoring concerns people expressed, including portability from older versions of Daz's software. And, as is nearly always the case when DRM to added to a previously DRM-free product, the company is presenting it as a win for paying customers
Is the encryption associated with Daz Connect essentially Digital Rights Management (DRM)?
We strive to add great benefits to being connected while limiting the impact to the user experience. Although we have included file encryption to protect our artist community, the primary target is to provide a better experience for our users. Daz Connect delivers and updates products more efficiently but relies on the fact that files are in a location and format that is maintained by the application. In this sense, Daz Connect provides some measure of digital rights management.
So, Daz is thinking of its customers while simultaneously willing to ignore those customers to institute something it thinks will decrease piracy. While I can appreciate the fact Daz wants to protect its bottom line, it needs to be aware that instituting these new restrictions will result in actual
lost sales -- something that may ultimately prove more harmful than the theoretical lost sales
Daz attributes to piracy.
Posted on Techdirt - 4 February 2016 @ 11:41am
The Russian block party continues. The government agency in charge of censoring the internet is still working its way backwards, hoping to erase the collective memories of the web… or at least, keep Russian citizens from seeing certain bits of the archived past.
Last summer, Russia blocked the Internet Archive's "Wayback Machine," an extremely useful tool that allows users to see historical snapshots of websites. The government may only have intended to block a single page, but because the Internet Archive utilizes HTTPS, the only practical way for ISPs to block the targeted page was to block the entire site at the domain level.
The same thing is now happening to archive.is, another useful tool that allows users to archive pages they feel might be altered or disappear altogether at some point in the future. (via Google Translate and an anonymous TD reader)
Roskomnadzor introduced archive.is service to Internet resources registry, prohibited by the law of the Russian Federation.
On the site supervisory authority pointed out that archive.is entered in the register by order of the Federal Service for Drug Control 28 January 2016.
Service continues to work as usual, but for many Russian customers of providers it is no longer available.
The problem here is the Russian government's take on the War on Drugs. Because it's illegal to discuss drug use/abuse/sales, Roskomnadzor has disappeared another archive that might
contain copies of pages it's blocked in the past. That the service would be of use to Russian citizens for non-drug related purposes appears to be of no concern to the Russian government.
And again, it's the use of HTTPS that's resulted in the entire site being blocked. Targeted pages can't be targeted if the connection is encrypted. So, down goes the entire site and, of course, no one in the web censorship body seems to be bothered by the collateral damage.
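The mechanics are easy to sketch. A plain-HTTP request crosses the wire in cleartext, path and all, so a censor's middlebox can match the specific page; under HTTPS, everything after the handshake is encrypted, and the only identifier an on-path observer routinely sees is the hostname (via DNS or the TLS Server Name Indication field). A minimal illustration (the hostname and path are hypothetical, not the actual blocked URLs):

```python
# Sketch: why HTTPS forces censors to block at the domain level.
# A plain-HTTP request is visible in full, so a filter can match
# the specific path being fetched:
plain_http_request = (
    b"GET /banned-article HTTP/1.1\r\n"
    b"Host: archive.example\r\n\r\n"
)
path_visible = b"/banned-article" in plain_http_request   # True

# Under HTTPS the request itself is encrypted; an on-path observer
# routinely sees only the hostname (DNS lookup or TLS SNI), never
# the path:
https_visible_metadata = {"sni_hostname": "archive.example"}
path_visible_https = "banned-article" in str(https_visible_metadata)  # False

# So the only enforceable rule is domain-granular -- the whole site
# goes down with the one offending page:
blocklist = {"archive.example"}
blocked = https_visible_metadata["sni_hostname"] in blocklist  # True
```

This is exactly the collateral damage described above: the censor cannot see which page is being requested, so it blocks everything at the domain.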
Posted on Techdirt - 4 February 2016 @ 9:25am
A Techdirt reader has sent us a copy of former DHS head/current University of California President Janet Napolitano's official response to the outcry over the secret surveillance of UC staffers -- surveillance she personally approved.
Napolitano's letter to UC-Berkeley employees immediately ties the secretive surveillance implementation to the UCLA Medical Center cyberattack, just in case anyone (and it's a lot of anyones) feels the effort was unwarranted.
A group of faculty members at the Berkeley campus has articulated concerns regarding some of the security measures we adopted in the wake of the UCLA cyberattack last year. The concerns focus on two primary issues: whether systemwide cyber threat detection is necessary and whether it complies with the University’s Electronic Communications Policy (ECP); and why University administrators failed to publicly share information about our response to the cyberattack.
If your privacy is being compromised, the real villains here are the people behind the cyberattack. As for the secrecy surrounding it, Napolitano seems to indicate she'd like
to discuss it, but immediately abandons that line of inquiry to blame disgruntled staffers and the media for misrepresenting her snooping initiative.
The Berkeley faculty members have shared their concerns with colleagues at other campuses and with various media outlets. Unfortunately, many have been left with the impression that a secret initiative to snoop on faculty activities is underway. Nothing could be further from the truth.
I attach a letter from Executive Vice President and Chief Operating Officer Nava explaining the rationale for these security measures.
Great, except that Nava's letter arrived five months
after the program was implemented and two months after a university official said the program would be shut down -- a statement which itself preceded (by a month) the news that the program had actually been allowed to continue uninterrupted.
Napolitano claims there was no secrecy.
As you know, leadership at all levels, including The Regents, Academic Senate leadership, and campus leadership, has been kept apprised of these matters, including through the establishment and convening of the Cyber Risk Governance Committee (CRGC). The CRGC, comprises each campus’s Cyber Risk Responsible Executive (CRE), as well as a representative of the University’s faculty Senate, the General Counsel, and other individuals from this office with responsibility for systemwide cybersecurity initiatives.
Yes, look at all the people who were informed! And were apparently informed they could not pass this information on to anyone else!
From our earlier post on the subject -- directly from some of those on Napolitano's "approved" list.
UCOP would like these facts to remain secret. However, the tenured faculty on the JCCIT are in agreement that continued silence on our part would make us complicit in what we view as a serious violation of shared governance and a serious threat to the academic freedoms that the Berkeley campus has long cherished.
For many months UCOP required that our IT staff keep these facts secret from faculty and others on the Berkeley campus.
This assertion directly contradicts Napolitano's depiction of the events.
I have from the beginning directed my staff to make every effort to actively engage with all stakeholders and to minimize to the extent possible the amount of information that is not shared widely.
This seems highly unlikely, considering no one began publicly talking about this secret surveillance until just recently. If the information had been widely disseminated (as Napolitano claims she directed), the backlash would have begun months ago.
And, of course, Napolitano is all about that privacy.
Personal privacy and academic freedom are paramount in everything we do. But we cannot make good on our commitment to protect individual privacy without ensuring a sound cybersecurity infrastructure. While we have absolutely no interest in the content of any individual’s emails or browsing history, we must accept that active network monitoring is a critical element of a sound cybersecurity infrastructure and the interconnectedness of the University and all of its locations requires that such monitoring be coordinated centrally.
School officials -- at least those allowed to see email content/web browsing history -- may claim they have "no interest" in seeing it, but that doesn't change the fact that any of them can
access it without fear of repercussion. Not only that, but a third party has access to this same data -- a third party Napolitano won't identify.
She closes her official "this is all fully justified because cyber" letter with the same assertion
so many officials make when secret goings-on are dragged out into the sunlight: "I've always wanted to have this discussion I'm now being forced to have!"
I invite further robust discussion and debate on this topic at upcoming meetings of the CRGC and COC.
That's just disingenuous. Don't extend an invitation to a conversation you can no longer avoid.
As the TD reader who sent this over explains, they're not exactly thrilled the former DHS head is using a privacy breach to further undermine UC staffers' privacy.
This sort of thing, by the way, is exactly the reason that everyone had the "say what?" reaction when Napolitano was appointed. This is why people were concerned.
P.S. I'm one of the people whose information was compromised in the UCLA Med Center hack, and don't appreciate their screw-up then being used as an excuse to screw us over now.
Posted on Techdirt - 3 February 2016 @ 11:36am
Former DHS boss Janet Napolitano -- who once stated she "doesn't use email" (for many reasons, but mainly to dodge accountability) -- is now showing her underlings at the University of California why they, too, might not want to "use email": someone might be reading them over their shoulders.
UC professor Christopher Newfield has the inside details of the recently-exposed monitoring system secretly deployed by the University of California (and approved by school president Napolitano) to keep tabs on the communications, web surfing and file routing of its employees. The SF Chronicle has an article on the secretly-installed spyware behind its paysieve [try this link], but Newfield has the internal communications.
The installation of the third-party monitoring software was so secretive that even the university's campus information technology committee was forbidden from discussing it with other staff. The committee has now decided to go public.
UCOP would like these facts to remain secret. However, the tenured faculty on the JCCIT are in agreement that continued silence on our part would make us complicit in what we view as a serious violation of shared governance and a serious threat to the academic freedoms that the Berkeley campus has long cherished.
Some salient facts:
- The UCOP had this hardware installed last summer.
- They did so over the objections of our campus IT and security experts.
- For many months UCOP required that our IT staff keep these facts secret from faculty and others on the Berkeley campus.
- The intrusive hardware is not under the control of local IT staff--it sends data on network activity to UCOP and to the vendor. Of what these data consists we do not know.
- The intrusive device is capable of capturing and analyzing all network traffic to and from the Berkeley campus, and has enough local storage to save over 30 days of *all* this data ("full packet capture"). This can be presumed to include your email, all the websites you visit, all the data you receive from off campus or data you send off campus.
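To get a sense of the scale a "full packet capture" appliance implies, a rough back-of-envelope calculation helps; the traffic figure below is an illustrative assumption, not a number from the committee's letter:

```python
# Back-of-envelope: storage needed for 30 days of full packet capture.
# ASSUMPTION: an average of 10 Gbit/s of campus traffic (illustrative
# only -- actual Berkeley traffic levels are not stated in the letter).
avg_bits_per_second = 10e9
seconds_in_30_days = 30 * 24 * 3600            # 2,592,000 seconds
total_bytes = (avg_bits_per_second / 8) * seconds_in_30_days
petabytes = total_bytes / 1e15                 # ~3.2 PB of raw traffic
```

Even at a fraction of that assumed rate, a device holding a month of *all* traffic is storing on the order of petabytes of staff and faculty communications.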
The official excuse for the installation of intrusive spyware is "advanced persistent threats" possibly related to a cyberattack on the UCLA Medical Center last summer. How monitoring staff emails plays into the thwarting of "threats" hasn't been explained. Now that the secret's out, the university is claiming it's all good because policies prevent the university from using any intercepted information/communications for "nonsecurity purposes."
The university may have a policy forbidding this activity, but that's not really the same thing as guaranteeing abuse of this surveillance will never happen. Its belated not-an-apology offers no contrition for keeping this a secret from a majority of its staff. And the statement does not name the third party in charge of the collection and monitoring.
While it certainly isn't unusual for employers to monitor employees' use of company computers and devices, it's normally clearly stated in policy manuals, rather than installed surreptitiously and cloaked in deep secrecy.
As Newfield points out, no one was apprised of the monitoring until after it was underway. Some heard a few weeks after the monitoring was put in place (August of last year) when the university updated its security policies following the medical center breach. Many more heard nothing until the first week of December. Following the wider exposure, staffers were assured by the school's vice president that the monitoring would cease and the software would be removed.
The VP said one thing and the school did another.
On Jan. 12, 2016, The Berkeley Joint Committee on Campus Information Technology (JCCIT) met with Larry Conrad and others. The committee was informed that contrary to the Dec. 21, 2015 statements, UCOP had decided to continue the outside monitoring and not disclose any aspects of it to students or faculty.
At this point, the decision was made to go public. A letter was drafted and sent to school administration. It was also sent to the New York Times. This prompted the generation of bullshit from the Executive VP's office.
On Jan. 19, 2016, UCOP Exec. VP and COO Rachael Nava sent a letter to those who signed the Jan. 15, 2016 letter. The original version was marked "CONFIDENTIAL: DO NOT DISTRIBUTE" and invoked "Attorney-Client privilege". After several recipients responded to her via email questioning who is the client and why her letter must be kept secret, a revised version of the letter was sent the next day removing that language, stating: "All: Please accept my apologies with regard to the confusion on the attorney client privilege language on the letter. It was a clerical error and was not intentional. Please find a revised version of the letter with the language removed."
The full letter contains some truly incredible statements.
With respect to privacy, the letter and structure of the University’s Electronic Communications Policy (ECP) reflect the principle that privacy perishes in the absence of security. While the ECP establishes an expectation of privacy in an individual’s electronic communications transmitted using University systems, it tempers this expectation with the recognition that privacy requires a reasonable level of security to protect sensitive data from unauthorized access.
Privacy does not "perish" in the absence of security. This conflation of the two is ridiculous. If a malicious party accesses private communications, that's a security issue. If an employer
accesses these communications, that's a privacy issue. Claiming to value privacy while secretly installing monitoring software (and then lying about removing said software) only serves to show the university cares for neither. By adding a third party to the monitoring process, the university has diminished the privacy protections of its staff and
added an attack vector for "advanced persistent threats." It has effectively harmed both privacy and security and, yet, still hopes to claim it was necessary to sacrifice one for the other.
The other statement, tucked away as a footnote, absurdly and obnoxiously claims the real
threat to privacy isn't the school, but people making public records requests.
Public Records Act requesters may seek far more intrusive access to the content of faculty or staff records than what the ECP permits for network security monitoring. The limits on the University’s own access to electronic communications under the ECP do not apply to Public Records Act requests.
Meanwhile, the school's tech committee has pointed out its IT staff is more than capable of handling the privacy and security of the network and, quite obviously, would show more respect for their colleagues' privacy while handling both ends of the privacy/security equation.
It's perfectly acceptable for entities to monitor employees' use of communications equipment. But you can't do it this way. You can't install the software secretly, swear certain employees to secrecy, not tell anyone else until the secret is out in the open, promise to roll it back and then secretly decide to do the opposite, etc. And when challenged, you can't play fast and loose with "security" and "privacy" as if they were both the same word spelled two different ways.
[Update: a TD reader has given us a copy of Janet Napolitano's response to the outcry over the school's secret surveillance efforts. A new post on that letter is on the way. If you'd like a head start, it's embedded below.]
Posted on Techdirt - 3 February 2016 @ 8:29am
Three of the Four Horsemen of the Internet Apocalypse (*Revenge Porn not included) are being targeted by Utah legislator David Lifferth with a package of amendments to the state's cybercrime statutes.
Utah Representative David E. Lifferth (R) has filed House Bill 225 which modifies the existing criminal code to include cyber crimes such as doxing, swatting and DoS (denial of service) attacks. According to the amendments, these crimes can now range anywhere from misdemeanors to second-degree felonies.
As is often the case when (relatively) new unpleasantness is greeted with new legislation, the initial move is an awkward attempt to bend the transgressions around existing laws, or vice versa. Lifferth's bill is no exception. As GamePolitics points out, only one of the new crimes is specifically referred to by its given name: DoS attacks. The other two can only be inferred from the wording, which is unfortunately broad.
[making] a false report to an emergency response service, including a law enforcement dispatcher or a 911 emergency response service, or intentionally aids, abets, or causes a third party to make the false report, and the false report describes an ongoing emergency situation that as reported is causing or poses an imminent threat of causing serious bodily injury, serious physical injury, or death; and states that the emergency situation is occurring at a specified location.
It's the stab at doxing that fares the worst. In its present form, the wording would implicate a great deal of protected speech. This is the wording Lifferth would like to add to the "Electronic communication harassment" section.
electronically publishes, posts, or otherwise makes available personal identifying information in a public online site or forum.
Considering it's tied to "intent to annoy, alarm, intimidate, offend, abuse, threaten, harass, frighten, or disrupt the electronic communications of another," the amended statute could be read as making the publication of personal information by news outlets a criminal activity -- if the person whose information is exposed feels "offended" or "annoyed." Having your criminal activities detailed alongside personally identifiable information would certainly fall under these definitions, which could lead to the censorship (self- or otherwise) of police blotter postings, mugshot publication or identifying parties engaged in civil or criminal court proceedings.
It would also make "outing" an anonymous commenter/forum member/etc. a criminal act, even if the amount of information exposed never reaches the level of what one would commonly consider to be "doxing." Would simply exposing the name behind the avatar be enough to trigger possible criminal charges?
While it's inevitable that lawmakers will have to tangle with these issues eventually, it's disheartening to see initial efforts being routinely delivered in terrible -- and usually unconstitutional
-- shape. We expect our legislators to be better than this. After all, it's their job to craft laws and to do so with some semblance of skill and common sense. If nothing else, we expect them to learn something from previous failures to pass bad laws, whether theirs or someone else's.
Posted on Techdirt - 3 February 2016 @ 3:20am
Score one for the American public. A federal judge has reached the same conclusion many FOIA requesters have: the FBI simply doesn't play well with public records laws.
The FBI unlawfully and systematically obscured and refused to answer legitimate requests for information about how well it was complying with the Freedom of Information Act (Foia), a Washington, DC court found last week.
US district judge Randolph D Moss ruled in favor of MIT PhD student Ryan Shapiro, finding that the government was flouting Foia, a law intended to guarantee the public access to government records unless they fall into a protected category. Moss found that the FBI’s present policy is “fundamentally at odds with the statute”.
The 63-page opinion
dives deep into the FOIA exemption weeds. Moss does grant the FBI a few of its motions for summary judgment, but on the whole, he finds the FBI's responses (or lack thereof) to several disputed FOIA requests to be unjustified.
The documents sought by Shapiro and his co-plaintiffs (Jeffrey Stein, Truthout, National Security Counselors) deal with the FBI's FOIA response procedures. These include "search slips," which detail the FBI's efforts to locate requested documents, case evaluations (which can give FOIA requesters some insight on the application of exemptions and search efforts made by individual
staffers) and other processing notes. The FBI refused to part with any
of these background documents if they pertained to other
denied FOIA requests.
The FBI argued that most of what it withheld fell under "law enforcement techniques and procedures," which it feels are categorically excluded from disclosure, thanks to FOIA exemption 7(e). Of course, it all depends on which court it's making this assertion in, as the clause pertaining to this exemption is punctuated badly:
would disclose techniques and procedures for law enforcement investigations or prosecutions, or would disclose guidelines for law enforcement investigations or prosecutions if such disclosure could reasonably be expected to risk circumvention of the law
In some districts, the courts have interpreted the wording to mean these records are
exempt. In other districts, the courts have read the FOIA exemption clause as meaning these documents are only exempt if
the FBI can offer evidence that releasing them might compromise national security or ongoing investigations.
Judge Moss' opinion agrees with the first interpretation. In doing so, he meets the FBI halfway, which is far further than the FBI has been willing to meet the suing FOIA requesters. Even with the additional slack, the FBI still isn't living up to FOIA standards.
Moss agreed that even if individual documents were protected by that Foia exemption, the entire categories of document the FBI withholds were emphatically not. “[The FBI] concedes that the vast majority of [the records in question] are not protected at all,” he wrote. “It is only arguing that by withholding all search slips, even those not protected by Foia, it can amass a haystack in which to hide the search slips that are protected [emphasis his].”
“[T]he FBI’s exercise of its statutory authority to exclude documents from Foia’s reach is not the kind of ‘technique’ or ‘procedure’” to which the necessary exemption refers, wrote Moss.
Moss is not the first DC District judge to order the FBI
to explain its overuse of FOIA exemptions. In another FOIA lawsuit filed by Ryan Shapiro, Judge Rosemary Collyer found the FBI's lack of responsiveness and explanations to be problematic.
"(Shapiro) argues that FBI has not established that it actually conducted an investigation into criminal acts, specified the particular individual or incident that was the object of its investigation, adequately described the documents it is withholding under Exemption 7, or sufficiently connected the withheld documents to a specific statute that permits FBI to collect information and investigate crimes.
Mr. Shapiro further alleges that FBI has failed to state a rational basis for its investigation or connection to the withheld documents, which he describes as overly-generalized and not particular. On the latter point, the Court agrees."
I'm sure the FBI will challenge Judge Moss' order. It has no interest in providing additional documents to Ryan Shapiro as it's convinced the prolific FOIA filer will "trick" it into revealing stuff it doesn't want to with multiple, overlapping FOIA requests. The FBI's "mosaic theory" is being tested in court. With the claims it's made here, it clearly wants the court to reinterpret the letter of the law in its favor -- something that would move the agency even further away from the spirit of the law, which is exactly where it wants to be.
Posted on Techdirt - 2 February 2016 @ 9:33am
The misuse of DMCA notices to remove unwanted information from the web has been well-documented here. The "right to be forgotten" has sort of codified this behavior, but only applies to citizens of certain countries.
James Kutsukos would like something removed -- a search warrant application hosted by the ACLU, which details a US Postal Service investigation which culminated in his being convicted for marijuana distribution. It's easy to see why Kutsukos would want this removed.
It's far less simple to divine why
the ACLU should feel compelled to remove it.
Kutsukos has his reasons:
Re: This needs to be taken off ASAP NOW THAT THE NSA LOST THEIR CASE
Explanation of complaint
this must be removed now. firstname.lastname@example.org
The NSA hasn't "lost" any "cases," so far. I assume the "lost case" Kutsukos is referring to is Judge Leon's determination
that the Section 215 bulk collection was unconstitutional (back in December of 2013). This would predate the April 1, 2014 timestamp on the takedown notice (which, for some reason, appears to have been received by the ACLU one year before
Kutsukos sent it).
If so, then the decision had not been overturned
by the Appeals Court yet, so it was technically still in the loss column. Even so, there's nothing about this that involves the NSA. The investigation was initiated by the US Postal Service and later involved the FBI.
The evidence obtained by the postal inspector consisted of text messages sent using Google Voice, which is not one of the providers implicated in the NSA's bulk collection efforts. (At least as far as we know... The phone metadata program [which also sweeps up other "business records"] targets telcos, not Google. Google's data is likely gathered under a different authority, using a separate NSA collection program.)
So, it looks like either a misreading of Judge Leon's decision or -- as we've seen in other cases
-- a sad attempt to intimidate a takedown recipient by throwing around government agency acronyms.
Either way, the document remains intact on the ACLU's servers and in Google's search results for Kutsukos, which lead off with a link to the affidavit.
And, because his woeful takedown attempt has been archived for posterity, Kutsukos is once again linked to a document he'd rather bury.
Posted on Techdirt - 1 February 2016 @ 8:36am
Apparently, the only way to stop terrorists from hating us for our freedom is to strip away those offensive freedoms.
Erik Barnett, the DHS's attache to the European Union, pitched some freedom-stripping ideas to a presumably more receptive audience via an article for a French policy magazine. Leveraging both the recent Paris attacks and the omnipresent law enforcement excuse for any bad idea -- child porn -- Barnett suggested victory in the War on Terror can be achieved by stripping internet users of their anonymity. You know, all of them, not just the terrorists.
After a short anecdote about a successful child porn prosecution in Europe, Barnett gets straight to the point. Here's Kieren McCarthy of The Register.
Before we have an opportunity to celebrate, however, Barnett jumps straight to terrorism. "How much of the potential jihadists' data should intelligence agencies or law enforcement be able to examine to protect citizenry from terrorist attack?", he poses. The answer, of course, is everything.
Then the pitch: "As the use of technology by human beings grows and we look at ethical and philosophical questions surrounding ownership of data and privacy interests, we must start to ask how much of the user's data is fair game for law enforcement to protect children from sexual abuse?"
In short, if you value internet-related freedoms, you're basically supporting terrorism and child porn. No person -- especially no legislator -- would want to be seen as valuing personal freedoms over the good of the nation's infrastructure/children. And, because terrible ideas must be buttressed by terrible analogies, Barnett theorizes that the internet is basically a car.
"When a person drives a car on a highway, he or she agrees to display a license plate. The license plate's identifiers are ignored most of the time by law enforcement [unless] the car is involved in a legal infraction or otherwise becomes a matter of public interest. Similarly, should not every individual be required to display a 'license plate' on the digital super-highway?"
To invoke the Fourth Amendment for a moment: a lowered expectation of privacy is in play when operating a vehicle
on public roads. However, the Fourth Amendment affords a great deal of privacy
to the interior of people's homes. Because the government (in most cases) does not provide internet access, it has no basis to demand ongoing access to citizens' internet activities. It may
acquire this information (along with subscriber info) using search warrants and subpoenas during the course of investigations, but it cannot demand (or at least shouldn't) -- for national security reasons or otherwise -- that every internet user be immediately identifiable.
Discussions of requiring a license
for internet usage
have been raised previously
but rarely go anywhere. To do so is to start heading down
the path to totalitarianism. Unfortunately, being in a constant state of war against an ambiguous foe often results in legislators and government officials declaring their interest in seeing this path not only surveyed, but the first layer of asphalt applied.
Barnett is one of this number, and he wants a strawman to serve as construction foreman.
"Social media is used to generate support for terrorist groups ... How appropriate is the law enforcement engagement of the social media companies to reveal digital fingerprints of these extremist groups? Who determines the level of 'extremism' of a group? Few would disagree that law enforcement and intelligence services should have the ability..."
Plenty of people would disagree, starting with many citizens and running all the way up to their service providers. On top of that, the nation's courts would find the institution of a law that strips the anonymity
of internet users to be unconstitutional
, so that's another hurdle Barnett and like-minded officials would not be able to clear, no matter their stated justification.
Posted on Techdirt - 1 February 2016 @ 6:34am
The way things are going, pretty soon FBI Director James Comey is going to be out there alone, flipping off light switches and blowing out candles, all the while cursing the going darkness.
A new report by Harvard's Berkman Center for Internet and Society debunks law enforcement's fearful statements about encroaching darkness. (h/t New York Times) As the report points out, there may be some pockets that are darker than others, but the forward march of technology means other areas are brighter than they've ever been. In particular, the growing Internet of Things is pretty much just the Internet of Confidential Informants.
Three trends in particular facilitate government access. First, many companies’ business models rely on access to user data. Second, products are increasingly being offered as services, and architectures have become more centralized through cloud computing and data centers. A service, which entails an ongoing relationship between vendor and user, lends itself much more to monitoring and control than a product, where a technology is purchased once and then used without further vendor interaction. Finally, the Internet of Things promises a new frontier for networking objects, machines, and environments in ways that we are just beginning to understand. When, say, a television has a microphone and a network connection, and is reprogrammable by its vendor, it could be used to listen in to one side of a telephone conversation taking place in its room – no matter how encrypted the telephone service itself might be. These forces are on a trajectory towards a future with more opportunities for surveillance.
On top of the additional opportunities for surveillance, there's encryption itself. Encryption -- supposedly the best friend of Public Enemy #1, whoever that happens to be -- is far from the insurmountable obstacle Comey and others have presented it as. While some companies are offering encryption by default and others are specializing in secure communications apps and tools, this is still mostly in service to niche markets.
[C]ompanies typically wish to have unencumbered access to user data – with privacy assured through either restricting dissemination of identifiable customer information outside the boundaries of the company (and of governments, should they lawfully request the data). Implementing end-to-end encryption by default for all, or even most, user data streams would conflict with the advertising model and presumably curtail revenues.
Even Apple and Google -- the two companies that added encryption-by-default to their devices -- aren't interested in encrypting everything:
Google offers a number of features in its web-based services that require access to plaintext data, including full text search of documents and files stored in the cloud. In order for such features to work, Google must have access to the plaintext. While Apple says that it encrypts communications end-to-end in some apps it develops, the encryption does not extend to all of its services. This includes, in particular, the iCloud backup service, which conveniently enables users to recover their data from Apple servers. iCloud is enabled by default on Apple devices. Although Apple does encrypt iCloud backups, it holds the keys so that users who have lost everything are not left without recourse. So while the data may be protected from outside attackers, it is still capable of being decrypted by Apple.
In short, far more surveillance doors have been opened in the past decade than have been closed. As the authors point out, smart devices
and online services have implemented voice commands, giving them the capability to record conversations far more private than those that might take place over other encrypted channels. As a case in point, the report notes the FBI exploited in-car microphones more than a decade ago, using a luxury auto "concierge" service to eavesdrop on conversations between organized crime members.
They also point out that encryption isn't always surveillance-proof. NSA officials have encouraged the use of encryption -- not just because it protects ordinary citizens from attacks, but also because the agency can crack some of it and can grab tons of metadata no matter what form of encryption is being used. Not only that, but officials have admitted that the use of encryption "lights up" potential surveillance targets, making the agency's haystack trawling much more efficient.
Comey is the odd man out here, abandoned by the NSA, the administration and, with few exceptions, other law enforcement agencies. The solution isn't bans or backdoors. The solution is the exploitation of every new attack vector willingly created by social media apps, smart devices and the general interconnectedness of the world wide web. If he persists in this fashion, it won't be too long before he's considered no more credible than the ranting doomsayers who prowl city streets and subway platforms.
And let's not forget law enforcement agencies solved crimes and captured criminals for over two hundred years in this country -- and never found the lack of access to smartphone contents to be a hindrance.
Posted on Techdirt - 29 January 2016 @ 12:46pm
EPIC is reporting that the DOJ has finally caved and is handing over a document EPIC requested last fall. The document EPIC sought was the "Umbrella Agreement" between the US and Europe on the handling of each entity's citizens' data.
On September 8, 2015, European and US officials announced that they have concluded an agreement, the so-called Umbrella Agreement, which is a framework for transatlantic data transfer between the US and the EU. The proposed goal of the Agreement is to provide data protection safeguards for personal information transferred between the EU and the US. Despite the announcements, neither US officials nor their European counterparts made the text of the Agreement public.
Two days after this announcement, EPIC filed expedited FOIA requests on both sides of the pond for the text of this agreement, arguing (logically) that the people this would affect had a right to know what their governments were agreeing to. EPIC specifically had concerns
that the US would offer less protection to foreign citizens' data than to its own citizens, given that it has historically refused to extend these niceties to those residing elsewhere on the planet.
The DOJ has provided EPIC with a copy of the agreement
. In doing so, it hopes to bring to an end EPIC's FOIA lawsuit
against the agency. But the DOJ notes in the letter attached to the agreement that it's only doing so in the most begrudging fashion. If only its partners on the other side of the Atlantic hadn't blinked first…
After carefully reviewing the record responsive to your request, I have determined that, as a matter of discretion, this document may be released in full. While this record is likely subject to Exemption 5, which concerns certain inter- and intra-agency communications protected by the deliberative process privilege, given the fact that the European Commission has provided you with a copy of the record and is making the file publicly available on its website, I have determined to release the record as a matter of discretion.
That's the "most transparent administration
" at work. The European Parliament released the agreement on September 14, 2015 -- six days after the announcement. The DOJ, on the other hand, held out for nearly six months and is only releasing it because it's already in the public domain. And it's arguing that it should still be exempt as a "deliberative document" -- using the government's most-abused
FOIA exemption -- even when another, larger government agency has determined the document deserves no such protection.
Posted on Techdirt - 28 January 2016 @ 11:23pm
So much for those "inalienable rights." The Sixth Amendment -- among other things -- guarantees representation for criminal defendants. This guarantee has been declared null and void in two states: Utah and Pennsylvania.
The problem isn't that these states aren't willing to comply with both the Sixth Amendment and the Supreme Court's Gideon v. Wainwright decision. It's just that they're not going to spend any of their money doing it. In these states, funding for indigent defense is left up to local governments, with no additional support coming from the state level.
This causes problems for smaller locales, which often don't have the revenue to fully fund the legal defenders the accused are (supposedly) entitled to. But it's not just a matter of funding. It's also a matter of priorities. The state of Utah is currently being sued because of its unwillingness to ensure public defenders are properly funded. There's money available, but lawmakers have shown an unwavering willingness to fund only half of the criminal justice equation.
Utah is one of only two states nationwide that provide no state funding for indigent defense. It ranks 48th in the nation in per capita funding of indigent defense, according to the complaint.
Nor has the state set standards for contracted indigent defenders, or ensured that counties provide "constitutionally adequate" legal representation, the men say. Utah counties design and administer their own indigent defense programs.
Washington County uses fixed-price contracts to pay local attorneys for indigent defense, and budgeted $760,688 for indigent defense in this year. The county budgeted $2.8 million for prosecution this year, and the state has budgeted $18.6 million for criminal prosecution, and not a dime for defense.
The lawsuit points out that the lack of funding has hampered both plaintiffs' cases. For one of the two defendants bringing the suit, the lack of funding resulted in his public defender's contract not being renewed, basically leaving him without capable representation.
At the time this lawsuit was filed, Plaintiffs were being represented by public defenders. During that representation, Washington County did not renew the public defender contract for Mr. Paulus’ public defender which makes it impossible for him to continue to his currently scheduled trial date.
Mr. Paulus faces 25 years to life in the Utah State Prison if he is convicted of his crimes. Mr. Paulus had a previous public defender, Ed Flint, who had obtained a private investigator interview a number of witnesses, but Mr. Flint had not retained any expert witnesses because of the contract issues with the Defendants as it relates to the public defender system in Washington County. As of the date of the filing of this action, Aric Cramer, who had two contracts and subcontracted with Ed Flint, did not have these two contacts renewed. Mr. Cramer would also subcontract with Ariel Taylor. On information and belief, one of those contacts are still unfilled.
On information and belief, attorney Ariel Taylor has been awarded one of those two vacant contacts. However, Mr. Taylor has no knowledge or involvement in the Mr. Paulus’ case prior to the non-renewal of Mr. Flint and Mr. Cramer’s contracts.
And it's not just that defendants are in danger of losing lawyers familiar with their cases if contracts aren't renewed. Years of underfunding and neglect by local governance has led to an ad hoc public defense network which does little to ensure defendants receive competent assistance.
Defendants exercise no supervision over the county indigent defense programs. They have also failed to establish, require, or enforce any practice standards or guidelines for the provision of noncapital indigent defense to ensure that defendants receive constitutionally adequate representation.
National standards pertaining to the administration and provision of indigent defense programs have been in existence for decades. State and local entities across the country have adopted many of these practice standards. Washington County has refused to do so.
The lawsuit is aiming for class status, which would draw in many other criminal defendants -- either imprisoned or still awaiting trial.
The numbers cited in the suit aren't anomalous. The complaint that defenders' offices are underfunded
can be heard all over the nation
. It's just that two states have further tipped the scales in favor of prosecutors by passing all costs on to local governments. And when there's a limited amount of money to spend, it plays better with voters to hand it to the law enforcement side, rather than a system that helps "guilty" people "escape" punishment. Yes, I'm aware our justice system is predicated on the presumption of innocence, but that's the ideal, not the prevailing perception.
A system that is routinely a travesty at best
is a complete debacle in Utah, quite possibly to the point of being unconstitutional. But that's the way the accused are treated. The system prefers plea bargains to trials and convictions to exonerations by a large margin -- something that can be immediately confirmed by taking a glance at government balance sheets.
At best, this case will force the state to start funding indigent defense. But much more needs to be done before the system can be considered equitable.
Posted on Techdirt - 28 January 2016 @ 11:39am
What happens when you lower the barriers to entry? More participants join the market. It works everywhere, even when the market is "law enforcement" and the "customers" are everyone else.
Vigilant Solutions, one of the country’s largest brokers of vehicle surveillance technology, is offering a hell of a deal to law enforcement agencies in Texas: a whole suite of automated license plate reader (ALPR) equipment and access to the company’s massive databases and analytical tools—and it won’t cost the agency a dime.
Vigilant is leveraging H.B. 121, a new Texas law passed in 2015 that allows officers to install credit and debit card readers in their patrol vehicles to take payment on the spot for unpaid court fines, also known as capias warrants. When the law passed, Texas legislators argued that not only would it help local government with their budgets, it would also benefit the public and police.
Well, we can see how this will benefit law enforcement and others on the government food chain, but it's unclear how this will benefit the public. The bill's sponsor said the law would "relieve the burden" of having their vehicles impounded or being jailed for unpaid fines. But beyond those vague perks, the benefits seem to flow mostly in one direction.
The EFF quotes legal blogger Scott Henson of Grits for Breakfast
, who speculated the combination of license plate readers and credit card readers would push cops towards chasing down unpaid fines rather than enforcing traffic laws or performing more routine patrol duties. If so -- and it appears to be the case -- this is exactly
the outcome Vigilant was expecting. It didn't hand out its tech for free. There may be no price tag on the plate readers
at the point of purchase, but that's only because Vigilant has points on the back end.
The “warrant redemption” program works like this. The agency gets no-cost license plate readers as well as free access to LEARN-NVLS, the ALPR data system Vigilant says contains more than 2.8-billion plate scans and is growing by more than 70 million scans a month. This also includes a wide variety of analytical and predictive software tools.
The government agency in turn gives Vigilant access to information about all its outstanding court fees, which the company then turns into a hot list to feed into the free ALPR systems. As police cars patrol the city, they ping on license plates associated with the fees. The officer then pulls the driver over and offers them a devil’s bargain: go to jail, or pay the original fine with an extra 25% processing fee tacked on, all of which goes to Vigilant.
To make this relationship even more explicit, officers who issue tickets to parked vehicles rather than drivers leave a note instructing them to visit Vigilant's
website to pay the fine. On top of the 25% fee, Vigilant also gets to collect massive amounts of sweet, sweet driver data
, which it can then sell to other law enforcement agencies (database access licenses) and private firms (insurance companies, repo men, etc.). And, if the locals seem understaffed, Vigilant is more than happy to pick up the slack.
In early December 2015, Vigilant issued a press release bragging that Guadalupe County had used the systems to collect on more than 4,500 warrants between April and December 2015. In January 2016, the City of Kyle signed an identical deal with Vigilant. Soon after, Guadalupe County upgraded the contract to allow Vigilant to dispatch its own contractors to collect on capias warrants.
As the EFF points out, this freemium service benefits Vigilant and law enforcement, but does very little for the general public… including protecting them from Vigilant's inability to perform its job competently:
During the second week of December, as part of its Warrant Redemption Program, Vigilant Solutions sent several warrant notices – on behalf of our law enforcement partners – in error to citizens across the state of Texas. A technical error caused us to send warrant notices to the wrong recipients.
These types of mistakes are not acceptable and we deeply apologize to those who received the warrant correspondence in error and to our law enforcement customers.
Apologies are nice, if of limited utility, but…
[T]he company has not disclosed the extent of the error, how many people were affected, how much money was collected that shouldn’t have been, and what it’s doing to inform and make it up to the people affected.
As has been discussed here before, turning law enforcement agencies into revenue-focused entities is a bad idea. Case in point: asset forfeiture
. Further case in point: speed trap towns
. Improper incentives lead to improper behavior. Agencies may like the idea of a "free" license plate reader, but the price still has to be paid by someone
-- and that "someone" is going to be the general public.
As priorities shift towards ensuring ongoing use of the "free" ALPRs, other criminal activity is likely to receive less law enforcement attention. Unpaid fines and fees are in law enforcement's wheelhouse, but should never become its raison d'être. Once they do, the whole community suffers. Anything that could be implemented to lower crime rates would also serve to lower revenue, making it far less likely to be implemented. Fewer infractions mean fewer opportunities to collect court fees. And while the legislators pushing the new law Vigilant is leveraging talked a good game about sending fewer people to overcrowded jails, the governments overseeing these agencies still have budgets to meet and law enforcement to lean on to ensure this happens. Actually achieving the bill's stated aims would mean a steady reduction in court fees, which would lead to the loss of "free" plate readers. And no one wants that, at least not on the government side of things.
Posted on Techdirt - 28 January 2016 @ 9:29am
In an article that's actually a bit (but just a bit) more thoughtful than the headline applied to it ("How Corporations Profit From Black Teens' Viral Content"), Fader writer Doreen St. Felix tackles the cultural appropriation of creative works. Sort of.
While the article does quote from a 2008 essay about the historical cultural appropriation of black artists' works by record labels, etc., the article does not point out any specific appropriation occurring here -- at least not in terms of the two creators St. Felix has chosen to write about. And it has nothing to say about how these corporations are "profiting" from this supposed appropriation.
One of those is Kayla Newman, whose offhand comment in a Vine video birthed a new slang term.
In the video everyone knows, uploaded on June 21st, 2014, Kayla admires her precisely arched eyebrows: “We in this bitch. Finna get crunk. Eyebrows on fleek. Da fuq.”
Newman's Vine video has generated 36 million loops, moving "on fleek" from her lips to the Urban Dictionary and beyond. Some of those stops have been corporate: St. Felix points to IHOP's Twitter account's use of the phrase in an attempt to "feign cultural relevance."
That appears to be the extent of the "appropriation." No one's monetizing the phrase, nor have these corporate entities done damage to anyone but themselves
by deploying it. Newman hasn't seen any money from creating the slang term, although it's not for a lack of trying.
“I gave the world a word,” Newman said. “I can’t explain the feeling. At the moment I haven’t gotten any endorsements or received any payment. I feel that I should be compensated. But I also feel that good things happen to those who wait.”
The other artist quoted in the Fader article is a dancer named Denzel Meechie. Meechie performs improvised dance routines to various songs. This has earned him millions of views on YouTube. It has also seen his original account shut down by the rightsholders of the songs he dances to.
In mid-September, YouTube shut down Meechie’s channel, which had accrued hundreds of thousands of subscribers. “I had too many copyright strikes,” he said, referring to his use of songs without explicit legal permission from labels.
Ironically enough, some artists whose labels have issued takedowns have actively sought out Meechie because of his viral cachet.
According to Meechie, labels contact YouTube and demand his videos be taken down, often without the knowledge of their own artists, some of whom pay him directly to help boost their buzz. “And it’s crazy, you know, because the artists ask me to put the videos up.”
That's what happens when you turn over copyright enforcement to algorithms.
Once again, we're not seeing much evidence of corporate cultural appropriation of black artists' creations. But the essay St. Felix quotes from does
have something relevant to say about the intellectual property power structure.
“Black artists had no input in [copyright law], and examination reveals that it is in some respects incompatible with Black cultural production in music,” writes Greene, arguing that multiple copyright standards were specifically structured to preclude black blues artists, especially women, from claiming ownership.
It's not just black artists, though. It's all
artists. Intellectual property laws have been refined by corporations and their lobbyists to provide the most protection for those with the means to benefit from extended copyright terms and ridiculously generous readings of trademark claims: corporations.
But K.J. Greene's next assertion (from her cultural appropriation essay) goes right off the rails.
“The idea/expression dichotomy of copyright law prohibits copyright protection for raw ideas,” Greene wrote. “I contend that this standard provided less protection to innovative black composers, whose work was imitated so wildly it became ‘the idea.’”
Opening up the law to include protection of ideas won't stop the IHOPs of the world from borrowing slang from a teen on Vine. It will, however, be exploited thoroughly by the same labels and studios that exploited black artists in the past. It won't level the playing field. And beyond all of that, it's just a stupid thing to say.
As is some of this, when St. Felix tries to tie this all together.
In some sense, the roaring debates over white appropriation of black slang, music, and dance have worked as an avatar for circumstance of the independent black creator in the digital age. But the analog is insufficient. Intellectual property and viral content should be interrogated from a legal standpoint[...]. The copyright statute under which Meechie’s YouTube account got flagged and then taken down should be re-examined, as should the legal gray areas that leave individual creators like Newman in the cold.
We can agree that the killing of Meechie's original YouTube account should be examined more closely, but the fault lies with labels that have opted for efficiency over accuracy -- the deployment of bots that only recognize certain arrangements of ones and zeroes, eliminating any of the nuance or context that make fair use a viable defense.
As for Newman, it's entirely unclear how much income St. Felix -- or even Newman herself -- expects a viral video loop that birthed a slang term to generate. Users can't monetize Vine loops, at least not directly. Pursuing someone for copyright infringement (if they used the Vine loop in a YouTube compilation video, for example) would be of limited usefulness.
If the concern is limited to the worldwide "unauthorized" use of "on fleek," the route for monetization runs through the trademark office. Unlike copyright, trademark doesn't apply automatically. It must be applied for, accepted and -- most importantly -- put to use. These steps aren't cheap.
But why should anyone expect this contribution to the English language to generate income? It's two words from a seven-second video, and the only way it would conceivably be protected would be as a trademarked phrase, which would only prevent others from using it under specific
circumstances for specific
goods/services. It will not return "ownership" to Newman. Nor will it rebalance the IP playing field. There's arguably nothing protectable here, no matter how Newman, St. Felix or essayist K.J. Greene feel about it.
Yes, corporations are opportunists who will often use current slang to coat their advertising with "How do you do, fellow kids
?" vibes. But they're not co-opting cultures. They're just acting the way we expect corporations to act: make various stabs at youthful relevance with (usually) awful results. St. Felix's article does a great job tracing the origin of the phrase, but never comes close to making a point about cultural appropriation or tying this supposed act to corporate profits.
The point that does come through is that something
is wrong with IP laws, but the fixes suggested here would only make things worse. And even embracing the ridiculous concept of extending IP protections to unformed ideas still wouldn't
turn two words into money. Being outraged that corporations frequently behave in a manner that only furthers their own
interests is a nonstarter. Stretching the shameless repurposing of slang by corporate Twitter accounts and the merciless actions of infringement bots to be indicative of a new era of exploitation of black artists is reading far too much into the predictable actions of both corporations and the bots that work for them.
Posted on Techdirt - 28 January 2016 @ 6:26am
Another whistleblower is facing charges brought by this administration -- one that has prosecuted more whistleblowers than all other administrations combined. Thomas Tamm, a DOJ lawyer during the Bush era, exposed the NSA's super-secret domestic surveillance program, whose authorization ran directly from the Attorney General to the Chief Judge of the FISA Court.
His whistleblowing led to a Pulitzer for the New York Times. The information Tamm gave to NYT reporters detailed something referred to only as "the program." The two-person approval process eliminated much of the paper trail and allowed the NSA to perform warrantless domestic surveillance. Colleagues of Tamm's at the DOJ's Office of Intelligence Programs and Review even told Tamm this was "probably illegal."
As Cyrus Farivar points out, Tamm has spent several years being investigated, but, so far, nothing has stuck. In 2007, his house was raided by 18 FBI agents who seized every electronic device they could find in Tamm's house and pressured him to plead guilty to espionage charges. Two years later, Tamm received the "Ridenhour Truth-Telling" prize. Two years after that, the government dropped the espionage charges.
But the government isn't done with Tamm. In what it likely views as a wrist slap, it's bringing ethics charges against Tamm for bypassing the "proper channels" to expose government wrongdoing. Basically, it's a bar complaint -- the government's last-ditch attempt to make Tamm pay for making it look bad.
Respondent became aware that there were some surveillance applications that were given special treatment. The applications could be signed only by the Attorney General and were made only to the chief judge of the Foreign Intelligence Surveillance Court. The existence of these applications and this process was secret.
Respondent learned that these applications involved special intelligence obtained from something referred to as "the program." When he inquired about "the program" of other members of the Office of Intelligence Policy and Review, he was told by his colleagues that it was probably illegal.
Even though Respondent believed that an agency of the Department of Justice was involved in illegal conduct, he did not refer the matter to higher authority within the Department.
For more than a decade, the government has gone after Tamm. All it's left with is this: a threadbare claim that Tamm's decision to bring this information to the press was a breach of trust, and that his "client" (the DOJ) wasn't treated in accordance with its "Rules of Professional Conduct." Still, it's more than the government actually needed
to do. It could have used the opportunity to shut down a program that was considered "probably illegal" by other DOJ lawyers. Instead, it shot the only messenger it could: the person exposing the wrongdoing.
Posted on Techdirt - 27 January 2016 @ 11:23pm
The NYPD is once again in the middle of a transparency/accountability controversy. The law enforcement agency has achieved the dubious distinction of being more difficult to obtain public records from than federal three-letter agencies like the CIA and NSA. The latest news does nothing to improve its reputation.
Some of this is due to its in-house classification system, which allows it to arbitrarily declare potentially-responsive documents "secret" -- something it does quite often with no apparent oversight. Some of it is due to the department's general antagonism towards transparency and openness, which keeps documents not marked secret out of the public's hands just because. Its steadfast belief that the only entity truly entitled to information is the NYPD has seen this attitude carried over to discovery requests in civil lawsuits and criminal cases, much to the general disgruntlement of presiding judges.
With the NYPD's court-ordered body camera program going into effect, the recorded footage is the latest target of FOIL (Freedom of Information Law) requests. TV station NY1 asked for a "sampling" of body-worn camera footage from five weeks of recording. In return, the NYPD has given it nothing but delays… and a high-dollar estimate.
When the NYPD first rolled out its body camera pilot program, the idea was increased transparency and accountability. But last spring when NY1 requested five weeks worth of footage under the state’s Freedom of Information Law, known as FOIL, the NYPD said it would cost NY1 $36,000 so that an officer could first review and edit the video, to address privacy and other concerns.
After a couple rounds of appeals, the TV station has taken the next step. It sued
the NYPD, citing a number of FOIL violations.
The NYPD denied NY1's request for unedited footage without specifying what material it plans to redact, how much material will be excluded from disclosure, or how the redaction will be performed. Instead, Respondents suggested that they may provide NY1 with edited footage, but only on the condition that NY1 remit $36,000.00, the alleged cost to the NYPD of performing its unidentified redactions.
FOIL does not permit public records to be withheld absent a full explanation of the materials that are exempt from disclosure. FOIL also does not permit agencies to levy any charge for review and redaction of records (let alone a $36,000.00 charge). As a result, the response to NY1's request violates FOIL.
Indeed, the response to NY1's request for footage runs counter to both the public policy of openness underlying FOIL, as well as the purported transparency supposedly fostered by the BWC program itself.
Redacting footage isn't necessarily inexpensive, but the NYPD has provided no justification for the $36,000 fee. The FOIL request doesn't ask for anything more than a "sampling" of the recorded footage. The NYPD responses don't specify whether the agency considers this to be every minute of footage recorded during those time periods, or something considerably more limited.
It is true that the footage will have to be redacted, at least in part. But without further information, the "reasonableness" of the NYPD's fee demand can't be assessed. This FOIL paywall runs contrary to the law's purpose, as well as the presumption of disclosure stressed in comments made by NYC Mayor Bill de Blasio, who lauded the new body-worn camera program as a step forward in transparency and accountability. If the footage remains solely in the possession of the NYPD, there will be no additional transparency or accountability.
On the other hand, NYPD Commissioner Bill Bratton seems to feel the state's public record law only applies to other government agencies. The NYPD currently ranks at the bottom of the list
for city agency FOIL responsiveness. That seems unlikely to change if this is how the department responds to requests for footage.
"We have never released 911 calls, and video recorded by these officers, I think, would be under the same protection of not being released, even to FOIL requests," said Police Commissioner William Bratton.
Unfortunately, this response from the NYPD -- despite effectively pricing NY1 out of the market for these public records -- directly contradicts the commissioner's beliefs. Obviously, the NYPD FOIL team feels these documents are
responsive to public records requests. However, it's more than willing to do whatever it takes to ensure this responsiveness remains in the realm of the theoretical.