Capitalist Lion Tamer’s Techdirt Profile


About Capitalist Lion Tamer (Techdirt Insider)


Posted on Free Speech - 18 December 2018 @ 10:42am

Politician Who Tried To Hijack Critic's Blog Via Trademark Applications Agrees To Never Pull This Bullshit Again

from the shove-the-shame-you-don't-have-into-a-ziploc-bag-and-GTFO-of-office-thx dept

In one of the more blatant attempts at censorship we've witnessed, a Minnesota politician tried to trademark the name of a politically-focused blog that often criticized her. Tax board member Carol Becker tried to take the name "Wedge LIVE!" away from its owner, John Edwards, who had been using the name for years to cover local politics. Becker first claimed she thought of the name herself, which she thought would be perfect for her yet-unrealized podcast covering… local politics.

After receiving a bit of heat from Tony Webster, John Edwards, and Edwards' supporters, Becker finally admitted she was attempting to take the name away from her critic, who had built his unregistered brand over the past several years. After more backlash, she decided to withdraw her trademark applications, but warned she would try again in six months if Edwards didn't register the marks first.

Four months later, it appears Edwards has prevailed. His post at Wedge LIVE! notes he has dropped his lawsuit against Becker seeking an injunction blocking her from filing for Wedge Live-related trademarks. Becker has agreed to drop her censorial pursuit of the name "Wedge LIVE," bringing an end to this ridiculous and particularly inept attempt to silence a critic.

The legal effort to defend Wedge LIVE from Carol Becker has ended in victory. In a settlement reached late Monday, and fully executed yesterday, Becker has acknowledged my ownership of the name “Wedge LIVE.” Additionally, Becker has agreed that she will “never assert any claim to these marks in the future.”

Perhaps this debacle will lead Becker to exit the public sector. Becker attempted to use the federal government's IP protections to undermine a critic -- one she also baselessly accused of being funded by "dark money" and called a tax fraud. She also denied being aware of Wedge LIVE!'s existence when first confronted by journalists, only belatedly admitting she knew exactly what she was doing when she filed the disingenuous trademark applications. She's proven she can't really be trusted to handle even the small part of government she's staked out.


Posted on Techdirt - 18 December 2018 @ 3:22am

FBI Swept Up Info About Aaron Swartz While Pursuing An Al-Qaeda Investigation

from the stocking-the-back-room-for-a-rainy-day-of-searches dept

The FBI has the power to collect massive amounts of data and communications during its investigations. This power periodically ingests NATSEC steroids, pumping the FBI's data stores full of stuff not relevant to the NSA's work, but possibly relevant to the FBI's crime-fighting duties.

You would think the FBI would toss anything not relevant to an investigation. Just in terms of storage and haystack-sorting, it would only make sense to discard data/communications not needed for ongoing investigations. But you'd be wrong. The FBI holds on to everything it gets because you never know: the irrelevancies you hoovered up yesterday might be useful today.

That's pretty much what happened to Aaron Swartz, according to documents published by Dell Cameron of Gizmodo.

Nearly two years before the U.S. government’s first known inquiry into the activities of Reddit co-founder and famed digital activist Aaron Swartz, the FBI swept up his email data in a counterterrorism investigation that also ensnared students at an American university, according to a once-secret document first published by Gizmodo.

The email data belonging to Swartz, who was likely not the target of the counterterrorism investigation, was cataloged by the FBI and accessed more than a year later as it weighed potential charges against him for something wholly unrelated.

The data collected -- most likely obtained with a National Security Letter (NSL) -- came from the University of Pittsburgh. It was part of a data haul associated with the FBI's terrorism investigation. Swartz was never the target of this Al-Qaeda investigation, but the information obtained remained in the FBI's data stores even though the agency had no reason to hold onto non-hit data.

When the FBI did start looking into Swartz's activities, it found his email address in the stored data it had obtained from the university in 2007. This apparently happened in 2008, around the time the FBI was trying to determine whether Swartz had violated any laws by freeing millions of court documents from PACER.

The FBI targeted a foreign terrorist entity but ended up with an untold amount of email metadata originating from US persons' accounts. The only reason we're seeing any evidence of the FBI's backdoor domestic searches is Aaron Swartz's inadvertent involvement. If it hadn't been for a high-profile prosecution and Swartz's tragic suicide, the FBI's investigative activities surrounding Swartz would likely have drawn minimal public interest. But Swartz's prosecution and death have put more eyes on the case, including those of transparency group Property of the People, which obtained this document through a FOIA lawsuit against the agency.

This document is more evidence the FBI abuses its investigative privileges. The agency engages in foreign-facing terrorism investigations, hoovering up as much "relevant" data as it can. Rather than discard everything not related to the investigation, the FBI stores it indefinitely. When the FBI opens a domestic investigation, it can give itself a head start by digging through its data stores for info it has "inadvertently" gathered on American citizens. The information the FBI already has on hand -- info it really can't justify keeping -- helps build cases against Americans while depriving them of their right to challenge the evidence against them. Americans don't know about this evidence because it's laundered by NSLs, warrants, and whatever else the FBI needs to deploy to duplicate the results of data store searches the agency has already performed.


Posted on Techdirt - 17 December 2018 @ 1:30pm

The Intelligence Community's Official Whistleblower Channel Is Going To Start Hunting Down Leakers

from the prepare-for-the-worst-of-both-worlds dept

The Inspector General for the Intelligence Community is finally implementing long-resisted whistleblower-related reforms. The IG has previously buried reports indicating whistleblowers were being greeted with retaliation for going through the proper channels. Despite this, government officials continue to claim the only whistleblowers they'll recognize are those who use the internal options -- options that allow the government to control the narrative and, in many cases, do as little as possible to address complaints.

The Inspector General's office is one of the official channels. After turmoil that consumed most of last year -- including the ouster of Dan Meyer, the head of the IC's whistleblower outreach program -- a new Inspector General is in place. Michael Atkinson promised to get the IC IG's house in order after news surfaced of its burial of a damning whistleblower retaliation report earlier this year, but so far it's unclear what improvements have been made.

What does appear to be in place is the IG office's participation in the Forever War on Whistleblowers. National security reporter Jenna McLaughlin noticed this disturbing development in the IG office's latest semiannual report [PDF]:

Beginning in June 2018, the Investigations Division began to take steps to permit the ICIG to fulfill its responsibilities under Intelligence Community Directive 701, Unauthorized Disclosures of Classified National Security Information (ICD 701). In December 2017, the DNI revised ICD 701 to improve the IC’s efforts to detect, deter, report, and investigate unauthorized disclosures. The revised ICD imposed new responsibilities upon the ICIG to report and investigate unauthorized disclosures.

The office will somehow protect whistleblowers while hunting down those who operate outside official channels.

Under ICD 701, the ICIG will:

Review unauthorized disclosure cases where the FBI decides not to investigate or the FBI investigates but the Department of Justice declines prosecution, in coordination with the other Office(s) of Inspectors General involved, to determine whether an Inspector General administrative investigation is warranted.

Now, even if the FBI and DOJ decide a disclosure case isn't worth pursuing, the IG will open its own investigation and, apparently, see if it needs to talk the DOJ into taking another look at it. The IG is limited to administrative investigations, but there's no reason the DOJ can't turn it into a criminal investigation after receiving more info from the Intelligence Community Inspector General.

Oddly enough, this is followed directly in the report by the ICIG's announcement of a "Center for Protected Disclosures." This is the IG's belated compliance with whistleblower protections enacted during the Obama administration. Coming along too late to do people like Ed Snowden any good, the new hotline connects whistleblowers to the IG's office, hopefully in a way that keeps their complaints confidential and shields them from retaliation.

Hopefully, this new Center works better than the IG's old whistleblower business model, which saw all but one case resolved in favor of the government and the single outlier allowed to drag on for more than 700 days without resolution. But the IG's plan to get into the leak-hunting business tempers this mild good news, suggesting it may utilize its resources to hunt down those who bypass an office some IC employees justifiably believe won't protect them.


Posted on Techdirt - 17 December 2018 @ 6:27am

Report: CBP's Border Device Search Program Is An Undersupervised Catastrophe

from the scattershot-security dept

The CBP is searching more devices than ever and ramping up an "extreme vetting" program that includes biometric scans, demands for social media account passwords, and more intrusive searches across the board. As the number of device searches continues to increase, the agency's technical chops and internal oversight aren't keeping pace.

That's according to a recently released Inspector General's report [PDF], which finds little to like about the CBP's search processes and policies, other than that they occasionally manage to catch criminals attempting to enter the US. The CBP's Office of Field Operations is supposed to be taking charge of device searches, ensuring they're done effectively and intelligently. So far, it appears the OFO has taken a hands-off approach to management, resulting in bad practices and worse security.

[B]ecause of inadequate supervision to ensure OFO officers properly documented searches, OFO cannot maintain accurate quantitative data or identify and address performance problems related to these searches. In addition, OFO officers did not consistently disconnect electronic devices, specifically cell phones, from networks before searching them because headquarters provided inconsistent guidance to the ports of entry on disabling data connections on electronic devices. OFO also did not adequately manage technology to effectively support search operations and ensure the security of data.

Here's the kicker: the OFO is so laid back it still hasn't begun to address a problem raised by the Inspector General more than a decade ago.

Finally, OFO has not yet developed performance measures to evaluate the effectiveness of a pilot program, begun in 2007, to conduct advanced searches, including copying electronic data from searched devices to law enforcement databases.

Considering the pace of technology development, the OFO has managed to put the CBP more than a decade behind. Playing catch up now will probably bring them to five years behind schedule sometime within the next couple of years and ahead of the office's baseline expectations sometime around never.

These device searches can be intrusive. In some cases, devices are held for months as the agency performs forensic searches and analyzes the data. These intrusions need to be justified, but the IG found CBP officers can hardly be bothered to do the paperwork.

We reviewed 194 EMRs [Electronic Media Reports] and identified 130 (67 percent) that featured one or more problems, which totaled 147 overall.

The DHS's own search policies say device searches will be limited to data at rest, unless a deeper search can be justified. The OIG says none of the 154 EMRs compiled before the DHS reiterated this rule in April 2017 contained any evidence data connections were disabled before searches were performed.

This lack of care undercuts one of the arguments the DOJ offered when fighting against a warrant requirement for phone searches: that criminals could destroy evidence on a seized device using remotely-triggered software. The CBP either doesn't think this is a possibility or it sincerely doesn't care if it's jeopardizing its own searches. Either way, it does nothing to give the government's overdramatic assertions any more credibility.

The list of bad news goes on and on. The CBP failed to renew licenses for forensic software, resulting in the inability to perform advanced searches for a period of months. It also ignored retention policies, allowing data copied from people's devices to sit around on external storage devices indefinitely. As the OIG points out, this isn't just a policy violation. It's also a security issue. Agents could peruse communications and data they have no business looking at, and the theft of a storage device could result in unauthorized disclosures of travelers' data.

If there's a silver lining, it's that the CBP concurs with the IG's determination that it sucks. There's been no pushback from the agency -- only vows to make the needed improvements. But that's tempered by the fact the CBP still hasn't begun to address issues raised by the OIG in 2007. By the time it gets around to these recommendations, the agency will likely be even further behind the technological curve, raising the chance of criminals and terrorists escaping detection and increasing the risk that travelers' data will be abused by the CBP or, worse, by some rando who happens to walk off with an unguarded USB stick.


Posted on Techdirt - 14 December 2018 @ 7:39pm

Kansas Supreme Court Says Cops Can Search A House Without A Warrant As Long As They Claim They Smelled Marijuana

from the I-love-the-smell-of-exigency-in-the-morning dept

The Kansas Supreme Court has just given cops a pass to treat residents' homes like cars on public roads. Being in a car greatly diminishes your Fourth Amendment protections and many a warrantless search has been salvaged by an officer (or a dog) testifying they "smelled marijuana" before tearing the car apart.

Unlike a car on a public road, a person's home has traditionally been given the utmost in Fourth Amendment protections. The bar to search a home is higher than the bar to search a vehicle. Cops aren't supposed to be walking up to windows to peek inside. Nor are they supposed to hang out by the door, hoping to catch a whiff of something illegal.

But that's exactly what they'll be able to do now. If they can find a reason to approach someone's home, all they need to do is declare they smelled marijuana to get past the front door without a warrant. This completely subjective form of "evidence" can be used as probable cause to effect a warrantless search.

The stupefying opinion [PDF] opens with an equally-stupefying bit of exposition:

While on routine surveillance at a local convenience store, Lawrence Police Officer Kimberly Nicholson checked a vehicle's license plate. That records check indicated the car had been stopped several weeks earlier with Irone Revely driving. It was noted there was an active arrest warrant for Revely's brother, Chayln Revely. Nicholson confirmed Irone was the driver, and she believed the passenger matched Chayln's description.

Nicholson followed the vehicle, looking for a traffic violation that would permit a vehicle stop and might allow the officer to confirm the passenger's identity. No violation occurred, so Nicholson followed the vehicle to an apartment complex. The passenger got out and ran into an apartment. Irone trailed behind. Nicholson approached and asked Irone if the person who ran into the apartment was his brother. Irone did not answer and continued walking toward the apartment with Nicholson following.

I'm still trying to wrap my mind around the phrase "routine surveillance at a local convenience store" that's just casually dropped into the opening of the opinion as if that collection of words made any sort of sense. Is this how we're spending our law enforcement dollars? Hanging out by local businesses and running plates? It seems, at best, incredibly inefficient.

That being said, the 7-11 stakeout (or whatever) led Officer Nicholson to the door of Lawrence Hubbard's apartment. That's when the law enforcement magic happened:

Nicholson later testified she was about 2 feet from the front door when Hubbard exited. She further testified she "smelled a strong odor of raw marijuana emanating from the apartment." The officer questioned Irone and Hubbard about the smell. Hubbard denied smelling anything and said his lawyer told him humans cannot detect a marijuana odor.

(That last sentence is equally stupefying. Marijuana does have an odor. That being said, that odor is not always present when an officer claims it is. See also: every search predicated on the smell of marijuana that fails to turn up any marijuana.)

More officers had arrived by that time and decided they might need a warrant. The officers told everyone present to leave until the apartment could be searched. Three officers, including Nicholson, performed a "security sweep" to make sure everyone had left. During this sweep, officers saw drug paraphernalia, a gun, and a locked safe. The warrant arrived and the safe was pried open, resulting in the discovery of 25 grams of marijuana.

Now, let's look at Officer Nicholson's claim:

Nicholson said she "smelled a strong odor of raw marijuana emanating from the apartment."

Here's what was found:

[O]fficers pried open the safe and found 25.07 grams of raw marijuana inside a Tupperware container…

So, from two feet outside the doorway, Officer Nicholson smelled raw marijuana located in a Tupperware container inside a locked safe in a back bedroom closet. That's the story she stuck with, which seems facially unbelievable given the facts of the case.

Whatever, says the Kansas Supreme Court. Officer Nicholson was declared credible, given her past nasal expertise. The same with the other officer, who also smelled raw marijuana through the Matryoshka-esque layers shielding the contraband from random apartment visitors.

Among its factual findings, the court concluded: (1) Nicholson had "detected the smell of raw marijuana 200 to 500 times and burnt marijuana 100 to 300 times" in her law enforcement training and professional experience; (2) when Hubbard came out of his apartment, closing the door behind him, both Nicholson and Ivener could smell what they identified as the odor of raw marijuana coming from the apartment; (3) Ivener testified the smell was "potent" and "overwhelming…"

LOL at "overwhelming." One burnt cig and 25 grams in a locked safe inside a sealed Tupperware container. Officer Ivener is more bloodhound than human and is obviously credible as fuck. This third attempt to suppress the evidence fails because the state Supreme Court says assertions that cannot be proven are all that's needed to establish probable cause and dodge the warrant requirement. If an officer claims to smell marijuana, the exigent circumstances exception to the warrant requirement kicks in. After all, preventing someone from flushing weed down the toilet is more important than ensuring the rights of the policed.

[W]e agree with the panel that the probable cause plus exigent circumstances exception permitted the warrantless sweep. Therefore, to the extent the paraphernalia evidence and the search warrant were fruits of a warrantless search, the sweep was not illegal and the challenged evidence is not subject to exclusion.

The sound you hear accompanying this sentence is the Constitution being run through the shredder like Banksy artwork:

We hold that the totality of the circumstances surrounding a law enforcement officer's detection of the smell of raw marijuana emanating from a residence can supply probable cause to believe the residence contains contraband or evidence of a crime.

There's all officers need to obtain a warrant. And since you can't have anyone destroying the evidence you claim you smell while you're waiting for a warrant, you get a free warrantless peek.

The panel focused on the second, fourth, and fifth Dugan factors. Under the second, the court highlighted Ivener's testimony that he did not know how many people had been in the apartment originally and whether they all left, so the officers could not know whether everyone was out. This weighs in the State's favor. Under the fourth factor, the panel noted there was evidence the occupants were aware of the officers' presence, so this also weighs in the State's favor because it demonstrates anyone staying behind would be alerted to the likelihood of an impending search.

The dissent says the lower court did not do enough to vet the officers' claims about their ability to identify the odor of raw marijuana a few dozen feet away from where it resided inside a sealed container inside a locked safe. It points out that if officers want to be considered experts on the odor of marijuana, they should be treated as expert witnesses when testifying. Instead, the lower court accepted their claims of expertise (the hundreds of past marijuana odor sniffs) but then decided they should only be held to the same standard as a lay person giving non-expert testimony. From the dissent:

The officers in this case were not testifying as mere lay persons. On the contrary, they specifically stated that the origin of their ability to smell and identify the source of their olfactory perception as raw marijuana stemmed from their brief exposure to the identified odor during their study at one or more police academies, followed by their experience with numerous cases in which they had successfully detected the substance. This uncontroverted dependency between the officers' training and experience on the one hand and the opinions they expressed on the other hand qualified their testimony about detecting the strong, potent, or overwhelming odor of raw marijuana as expert opinion testimony.


As urged by the defense, the science, if any, behind the officers' apparently sincere belief in their professed ability to detect an odor of raw marijuana should have been subjected to vetting under the rule of Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993), which is now codified in subsection (b). The officers' expert opinion testimony should have been admitted on the critical issue of the existence of probable cause at the time of the sweep of the apartment only if "(1) [t]he testimony [was] based on sufficient facts or data; (2) the testimony [was] the product of reliable principles and methods; and (3) the witness[es] ha[d] reliably applied the principles and methods to the facts of the case." K.S.A. 2017 Supp. 60-456(b). The district judge erred by failing to exercise her gatekeeping function under subsection (b).

This would have given the defendant a chance to raise a Daubert challenge during trial, which could have resulted in the lower court finding in his favor on the unconstitutional search argument. Rather than officers simply saying "Oh, I've smelled weed a lot and also this time," they'd actually have to provide some evidence of their claims. Is there anything "scientifically valid" about claiming to have experienced the "overwhelming" odor of raw marijuana safely ensconced in a goddamn safe? Probably not. But we'll never know because Kansas courts won't apply that standard. And the state's courts will never have to apply the standard because the top court has stated it's now OK for cops to rescue a warrantless search simply by saying they smelled something illegal.


Posted on Techdirt - 14 December 2018 @ 12:01pm

Inspector General: FBI Lost Six Months Of Important Text Messages Because Its Retention System Sucks

from the all-the-smart-people-at-the-agency-etc dept

It's great to know the FBI wants encryption broken so it can forensically molest any devices in its possession to find the mother lode of inculpatory evidence these devices always contain. ("Always," you ask? The FBI irritatedly taps the word "always" repeatedly in response.)

The reason this is such good news is that the FBI can't even manage to reliably extract content from phones it issues to agents and other personnel. If you can't expertly handle data migration/storage from phones in your control at all times, how badly are you going to bungle forensic evidence extraction at scale if the government ever green lights encryption backdoors?

The DOJ Inspector General has just released a report [PDF] detailing its investigation of missing text messages sent by two agents at the center of a Congressional hearing about supposed biased behavior during the FBI investigation of Hillary Clinton and Mueller's investigation of Donald Trump. Agents Peter Strzok and Lisa Page exchanged text messages expressing their dislike of Trump and made some comments suggesting they would do something to harm his presidential chances. Critics believed this showed these agents -- if not the agency itself -- were guided by political bias when investigating Trump's ties with Russia.

Maybe there was more to this than there first appeared to be. Thousands of text messages from the agents' devices went missing -- a gap that stretched from December 2016 to May 2017. The Inspector General's office used forensic tools to recover roughly 19,000 text messages from the two phones. The culprit appears to be standard operating procedure rather than a deliberate attempt to destroy evidence.

Strzok and Page had each returned their DOJ-issued iPhones six months earlier when their assignments to the SCO (Special Counsel's Office) had ended. The OIG was told that the DOJ issued iPhone previously assigned to Strzok had been re-issued to another FBI agent… CYBER obtained a forensic extraction of the iPhone previously assigned to Strzok; however, this iPhone had been reset to factory settings and was reconfigured for the new user...

The same thing happened to Page's phone. It was reset in July 2017 by personnel at the DOJ's Justice Management Division. It hadn't been issued to another agent, but it had been restored in preparation for reassignment.

Resetting phones just makes sense. Nothing about the FBI's handling of records it's supposed to be retaining does. Text messages are official communications. They're subject to public records requests, and they're often responsive to subpoenas in criminal cases. Wiping a phone without ensuring existing communications have been backed up is monumentally stupid and possibly illegal.

To the agency's credit, it does try to retain these communications before resetting issued devices. The problem is its tool works poorly. As does its management:

FBI Assistant General Counsel [redacted for some fucking reason] informed OIG that there does not appear to be a directive for preservation of texts by ESOC [Enterprise Security Operations Center], but that ESOC retains text messages as a matter of practice.

Define "retain" and "matter of practice" in the context of a six-month gap of non-retention of Strzok/Page text messages. I guess it's the thought that counts?

[E]SOC could not provide a specific explanation for the failure in the FBI's text message collection relating to Strzok's and Page's S5 phones…

ESOC did offer up a set of possible explanations for the failure, none of which are reassuring. First, it could have been a bug reported by the vendor in 2016 but not fixed until March 2017. The application itself could have been misconfigured. The application may not have been compatible with device software updates.

Efforts were made to mitigate the issue. But those failed as well. The FBI phased out Samsung S5s and replaced them with S7s. Nothing changed but the phone model.

[A]ccording to FBI's Information and Technology Branch, as of November 15, 2018, the data collection tool utilized by FBI was still not reliably collecting text messages from approximately 10 percent of FBI issued mobile devices…

That the OIG was able to recover thousands of messages from forensic extraction and scouring the FBI's enterprise database isn't really good news. It's unlikely the FBI will make the same effort when hit with discovery demands and it already won't thoroughly search databases it has full access to when responding to FOIA requests. So, records are going to go missing and it won't be until the OIG steps in that any effort will be made to find the missing records, much less take a good look at the broken processes that caused them to go missing in the first place.


Posted on Free Speech - 14 December 2018 @ 10:42am

Ex-Sheriff Joe Arpaio Claims Three Publications Did $300 Million In Damage To His Pristine Reputation

from the exponential-thinking dept

Former sheriff (and ongoing blight on the state of Arizona) Joe Arpaio has decided to sue a handful of news agencies for defamation. The slightly-overwrought press release from Freedom Watch (and founder Larry Klayman) alleges defamation per se on the part of CNN, the Huffington Post, and Rolling Stone, and claims these three publications caused $300.5 million in damage to Arpaio's otherwise impeccable reputation.

Here's Freedom Watch's zesty summation of the lawsuit:

"It's time that someone stood up to the Left's 'Fake News' media, which is bent on destroying anyone who is a supporter of the president and in particular Sheriff Arpaio. My client will not be bullied by the likes of Jeff Zucker, Chris Cuomo, the Huffington Post, and Rolling Stone, as he alone has the courage to stand up for not just himself, the President of the United States but also all fair-minded and ethical Americans."

Ok, then. If you think the lawsuit itself is a much more buttoned-down affair, then you haven't read a Larry Klayman complaint before. It starts with the usual stuff establishing standing before getting down to the focus of the complaint. The alleged defamation committed by all three defendants is referring to Joe Arpaio as a "convicted felon" when his only conviction was for a misdemeanor. Rolling Stone issued a correction but the other two defendants haven't corrected their original misstatements. Hence the lawsuit -- Arpaio and Klayman's public attempt to stick it to the "Left Fake News media."

Here's why Arpaio feels he's owed $300 million for a couple of uncorrected misstatements. Running for an open US Senate seat must pay really well.

Plaintiff Arpaio’s chances and prospects of election to the U.S. Senate in 2020 have been severely harmed by the publication of false and fraudulent facts in the Defamatory Article. This also harms Plaintiff financially, as his chances of obtaining funding from the Republican establishment and donors for the 2020 election have been damaged by the publication of false and fraudulent representations in the Defamatory Article.

Given the pardon issued to him by the Republican president currently in office, it seems unlikely his reputation suffered any damage from these incorrect statements. If anything, it only further damaged the reputation of these publications, at least in the eyes of Arpaio supporters (which presumably includes a sizeable percentage of Republican voters).

Arpaio managed to survive hundreds of self-inflicted reputational wounds during his years as sheriff, so it's a bit of a stretch to claim three "fake news" sources have done anything more than further cement his reputation as a martyr to the cause.

Arpaio also claims this has damaged his reputation within the law enforcement community. Again, it seems unlikely to have moved the needle there either. Law enforcement agencies tend to view the press with the same suspicion Arpaio does and probably agree the ex-sheriff was persecuted rather than prosecuted.

Nevertheless, there's potential money to be made. And Klayman, representing Arpaio, isn't above using a federal lawsuit as a soapbox. At times, the complaint [PDF] more resembles a transcript of a YouTube monologue than a statement of facts and allegations.

Defendants are aware of these prospective business relationships and thus, given their malice and leftist enmity of Arpaio sought to destroy them with the publication of the subject Defamatory Publications.

Defendants published the Defamatory Publications to influence the RNC, the RNCC and affiliated political action committee and persons, and other donors, to withhold funding for Plaintiff Arpaio’s 2020 political campaign by smearing and destroying his reputation and standing in his law enforcement, government and political community.

Plaintiff Arpaio has been harmed as to his reputation as “America’s Toughest Sheriff” and financially by the publication of the Defamatory Article.

[insert fire emoji]

While it's true the publications got the facts wrong, Joe Arpaio is an extremely public person. This raises the bar he must meet to succeed in this lawsuit. While the publications may have been careless in incorrectly noting the level of the offense Arpaio was convicted for, that's not nearly enough to secure a favorable ruling.

The difference between convicted felon and convicted misdemeanant probably doesn't mean much when placed in the totality of Arpaio's recent history. Arpaio was convicted of contempt and spent part of the last decade being investigated by the DOJ. Add this to his long history of civil liberties violations and refusal to adhere to court orders, and the difference between a felony conviction and a misdemeanor is a rounding error.

Arpaio's reputation has been leaking hit points for a long time, but it has never affected his popularity with his presumed voter base. The rest of America may hate "America's Toughest Sheriff," but his supporters can't get enough of him. Three mistakes by three publications are unlikely to have caused $300-worth of damage to the ex-sheriff's Senatorial chances, much less $300 million. Some people are just defamation-proof and it's a good bet Joseph Arpaio is one of them.

Arpaio's welcome to waste the court's time and his own money claiming the "fake news" media dinged his rust bucket of a reputation, but he's not going to be happy when the court apprises him of the above facts. The problem is these three publications will have to spend some money of their own defending against a seriously weak lawsuit. With the DC circuit having decided it doesn't need to apply the District's own anti-SLAPP law to federal cases, it's likely the defendants will be stuck with covering their own costs, even if they prevail. On top of that they'll have to deal with an opposing counsel with a penchant for pissing off judges and treating the courtroom like a heated Periscope broadcast. It's a waste of everyone's time and money but Klayman's. I'm pretty sure he didn't take this on contingency.


Posted on Free Speech - 13 December 2018 @ 10:44am

Arkansas Politician Introduces Bill To Make It Illegal For Social Media Companies To Block Content He Likes

from the bonus-round-of-bad-ideas-immediately-follows dept

Arkansas state rep Johnny Rye is in galaxy-brain mode. He's introduced a bill that aims to stop "censorship" by social media platforms by allowing the government to compel speech. I'm sure the irony is lost on Rye. But it's probably not the only thing sailing over Rep. Rye's head. (h/t Sarah McLaughlin)

What Rye is trying to stop is social media companies moderating their own platforms. He appears to feel conservatives are being "censored" by Facebook, Twitter, etc. and thinks rolling over the First Amendment and Section 230 immunity is going to cure this perceived ill.

Holy hell, the bill [PDF] is a mess. I'm going to have to quote from it at length because it's the only way any discussion of it can achieve semi-coherence. Here's the gist of it, from David Ramsey of the Arkansas Times:

The bill would allow plaintiffs to seek damages of a minimum of $75,000 "per purposeful deletion or censoring of the social media website user's speech" plus actual damages and punitive damages if aggravating factors are present. Only social media companies with at least 75 million subscribers would be subject to Rye's bill.

Slightly more specifically, the "Stop Social Media Censorship Act" says this:

The owner or operator of a social media website who resides in this state is subject to a private right of action by a social media website user if the social media website purposely:

(i) Deletes or censors a social media website user's religious or political speech; or

(ii) Uses an algorithm to suppress religious or political speech.

How does Rep. Rye get around the fact that private companies can moderate content on their platforms however they'd like without it being "censorship?" Easy. He just unilaterally declares Facebook, et al to be "public utilities." Problem solved.

A social media website is considered a public utility under this section.

Pretty cool. I didn't know writing worked that way. Let me see if I've got the hang of this…

Rep. Rye is considered a nuisance and threat to public safety under this section.

Now I just need to send the cops around to restore public safety by taking Rep. Rye out of the rotation.

The good news is social media companies can limit the monetary damages by restoring/uncensoring posts a user complains about. (Presumably using an in-court complaint form, rather than the site's online forms.) There's your compelled speech, which is just another misshapen cherry on top of a shit sundae.

Here's Rye's tiny concession to the First Amendment, which isn't really a concession, nor even compliant with the First Amendment. This must be Rye's idea of "narrow crafting."

A social media website is immune from liability under this section if it deletes or censors a social media website user's speech or uses an algorithm to disfavor or censure speech that calls for immediate acts of violence, is obscene, or pornographic in nature.

Rye is generously allowing platforms to engage in the sort of moderation they already engage in. They're free to moderate certain kinds of speech, just not the kind of speech Rye likes. And if users aren't willing to sue over "censorship" themselves, the state is empowered to draw inferences on their behalf.

The Attorney General may bring a civil cause of action under this section on behalf of social media website users who reside in this state whose religious speech or political speech has been censored by a social media website.

If you're wondering why Rep. Rye has crafted this monument to his own stupidity, David Ramsey has your answer:

Rye's bill comes in the same week that Sen. Jason Rapert vociferously complained about being temporarily barred from sending tweets by Twitter. A tweet that Rapert sent out regarding Muslims was found by the company to violate its "hateful conduct policy." The company imposed a timeout that lasted at least 12 hours, according to a printout of Twitter's communication that Rapert held up to the camera in a Facebook Live post. The offending tweet has apparently been removed.

Here are a couple other things Rye is pitching this legislative session:

Make it a felony to relocate, alter, remove, rename, rededicate or otherwise disturb historical monuments on public property without the permission of the Arkansas History Commission.

Create a special license plate for members of the Arkansas Masonic Lodge of Free and Accepted Masons.

So, a "no tearing down Confederate war hero statues" bill, and a special license plate for himself. From Rye's bio:

He is active in the Lions Club and Masonic Lodge.

This wave of proposed legislation follows last year's failed attempt to repeal the state's recognition of same-sex marriages.

And he's looking for even more internet regulation, this time under the guise of fighting sex trafficking. This bill [PDF] would ban anyone from selling any devices that access the internet without pre-installed "blocking software." This is at least as batshit as his social media censorship proposal.

A distributor shall not in this state manufacture, sell, offer for sale, lease, or distribute a product that makes content accessible on the internet unless the product:

(A) Contains active and properly operating blocking software that renders obscene material inaccessible;

(B) Prohibits access to content that is prohibited under this chapter;

(C) Prohibits access to revenge pornography;

(D) Prohibits access to a website that facilitates prostitution; and

(E) Prohibits access to a website that facilitates human trafficking.

The list of "prohibited content" includes revenge porn, "specified anatomical areas," and obscene material. The reseller or manufacturer violating this law is subject to a $500 fine… wait for it:

...for each prohibited image, video or audio depiction, or website found to be accessible at the time of the offense.

On top of adding new software to their devices, resellers and manufacturers will also foot the bill for a 24/7 complaint hotline to report overblocking/underblocking.

The good news (I guess) is that Arkansans still have the option to see turgid penises and whatnot. All they have to do is pay $20 and state, in writing, that they're above the age of 18 and definitely want to see as many "specified anatomical areas" as possible. Proof of age must also be submitted. The bill does not specify whether this will restore access to revenge porn or trafficked humans, but one would assume it's an all-inclusive fee.

Sex trafficking will somehow be prevented by the state AG dumping collected fines into a strongbox marked "for the children," because nothing's too on the nose for Johnny Rye:

Fines levied by a court under subdivision (a)(2)(A) of this section shall be deposited into the Safe Harbor Fund for Sexually Exploited Children.

Whew. What a time to be alive. And in Arkansas. And knowing you still have two more years before you can unceremoniously return Johnny Rye to the private sector he so very badly wants to harm.


Posted on Techdirt - 13 December 2018 @ 3:27am

Federal Court Says Massachusetts' Wiretap Law Can't Be Used To Arrest People For Recording Public Officials

from the because-duh-this-was-decided-seven-years-ago dept

Seven years ago, the First Circuit Court of Appeals released its Glik decision. This decision found that recording public officials was protected by the First Amendment, overriding Massachusetts state law. The state wiretap law says recordings must have the consent of everyone captured on the recording. The Appeals Court said recording police officers while they performed their duties in public was clearly covered by the First Amendment. The opinion also dealt with some ancillary Fourth Amendment issues, but seemingly made it clear these recordings were protected activity.

The law remained on the books unaltered. Thanks to legislative inaction, the law is still capable of being abused. Since the Appeals Court didn't declare the law unconstitutional, or even this application of it, it has taken another federal court decision nearly a decade later to straighten this out. (h/t Courthouse News Service)

The ruling [PDF] deals with two First Amendment cases. One deals with activists recording cops. The other deals with another set of activists -- James O'Keefe's Project Veritas -- and its secret recording of Democratic politicians. The specifics might be a bit different, but the outcome is the same: recording public officials is protected by the First Amendment. The state law is unconstitutional.

Consistent with the language of Glik, the Court holds that Section 99 may not constitutionally prohibit the secret audio recording of government officials, including law enforcement officials, performing their duties in public spaces, subject to reasonable time, manner, and place restrictions.

That just reiterates Glik's findings. The Massachusetts federal court goes further, though:

The Court declares Section 99 unconstitutional insofar as it prohibits audio recording of government officials, including law enforcement officers, performing their duties in public spaces, subject to reasonable time, place, and manner restrictions. The Court will issue a corresponding injunction against the defendants in these actions.

The court also points out the state government's response to the Glik ruling was wrong. The ruling did not limit itself to "openly" recording public officials. It said the First Amendment protected the recording of public officials performing public duties whether or not government officials knew they were being recorded.

In October 2011, the bulletin was accompanied by a memo from the Commissioner citing the Glik decision. The memo instructs officers that “public and open recording of police officers by a civilian is not a violation” of Section 99. The cover memo for the May 2015 recirculation “remind[s] all officers that civilians have a First Amendment right to publicly and openly record officers while in the course of their duties.”


But Glik did not clearly restrict itself to open recording. Rather, it held that the First Amendment provides a “right to film government officials or matters of public interest in public space.”

The court says siding with the government's interpretation would just result in more bogus arrests under the state's wiretap law.

But the training materials go beyond telling officers when it is impermissible to arrest; taking a narrow construction of Glik, they also communicate that it is permissible to arrest for secretly audiorecording the police under all circumstances. In other words, it gives the green light to arrests that, as the Court holds below, are barred by Glik.

This ruling should put an end to that. You'd think the last ruling would have done the job, but despite the Appeals Court never ruling that secret recordings of public officials were illegal, the state decided to interpret the decision this way, leading directly to the lawsuits requiring the record to be set straight one more time, seven years down the road.


Posted on Techdirt - 12 December 2018 @ 3:36pm

School Boots Professor Off Campus After He Exposes Its Complicity In Predatory Publishing Schemes

from the bitten-hand-bites-back dept

Predatory publishing -- the pay-for-play practice that allows anyone to have their research published as soon as the check clears -- may end up costing a professor his job. Derek Pyne, associate professor of economics at British Columbia's Thompson Rivers University, has managed to turn his own campus against him simply for telling the uncomfortable truth.

His 2017 paper, The Rewards of Predatory Publication at a Small Business School, exposed the ugly side effects of the constant pressure on researchers and academics to be published. "Publish or perish," the saying goes. And if you can't get published by someone who thinks your research is worth publishing, get published by someone who thinks everyone with enough cash on hand deserves to be published.

What Pyne found was schools rewarding publication, whether or not the publication was bought and paid for.

It finds that the majority of faculty with research responsibilities at a small Canadian business school have publications in predatory journals. In terms of financial compensation, these publications produce greater rewards than many non-predatory journal publications. Publications in predatory journals are also positively correlated with receiving internal research awards.

Some of those who were reaping the rewards of being published by taking advantage of pay-for-play publications were Pyne's associates at Thompson Rivers University. They didn't appreciate being the data set Pyne used in his research paper. This backlash has led to Pyne being ousted from the campus of the school that employs him. (via Reason)

As a result of that 2017 paper and the media attention that followed, Pyne says, he’s been effectively banned from campus since May. He may visit only for a short list of reasons, such as health care. Teaching is out and so, too, is the library. It’s unclear when, or if, Pyne will be allowed to resume his normal duties.

This isn't the only thing Pyne has done to piss off his colleagues. He's also engaged in a number of heated arguments with faculty about the quality of the school's grad programs and brought his numerous complaints to the press. Administrators claimed coworkers were afraid of him and demanded he undergo a psychological evaluation. His keys were taken and he was banned from campus. Pyne cleared the psych eval -- one that found (understandably) Pyne felt persecuted by his employer. He's now back on the payroll, but has been told to "cease communicating inappropriate, defamatory and insubordinate statements" about the school.

Fortunately, Pyne has a few allies. Retraction Watch -- an essential site with zero sympathy for predatory publications -- is now involved in Pyne's fight against the university.

Ivan Oransky, Distinguished Writer in Residence at New York University's Arthur Carter Journalism Institute and co-founder of Retraction Watch, has followed Pyne’s case for over a year. He said recently that he was “puzzled” about “what's actually going on. It's not very helpful when a university takes action like this but doesn't say why.”

That's why Retraction Watch has argued for the release of university investigations, he said, citing an article on why Cornell University hasn’t released its findings in the Brian Wansink research misconduct case, among other similar incidents elsewhere.

He also has some free speech warriors of the Canadian variety helping him out.

Canada's Society for Academic Freedom and Scholarship has appealed to Thompson Rivers on Pyne's behalf. The Canadian Association of University Teachers, similar to American Association of University Professors, is also looking into the case.

Thompson Rivers has refused to participate in that investigation so far, David Robinson, CAUT’s executive director, said recently.

“This is a very peculiar case,” Robinson said. “But certainly criticizing colleagues’ research or his administration is intramural speech protected by academic freedom. These are matters of educational quality. He may be correct, or he may not be correct. But he certainly has a right to express his views on educational quality.”

Entities that can't handle criticism love shooting the messenger -- especially when that messenger is pointing out the university's willingness to reward quantity over quality. Whatever reputational damage the school and its pay-for-play professors are suffering isn't the result of defamation or inappropriate statements from Pyne. It's a direct result of their actions and the incentives the university employs. The university says it will reward educators who publish. And those educators are hastily shoving receipts from sketchy publications into their pockets as they make cases for merit raises. The university could have responded by altering its incentive programs, and those stung by Pyne's research could have acknowledged their gaming of the system. Instead, they're doing this, which is unfortunate, but also just as unfortunately, unsurprising.


Posted on Techdirt - 12 December 2018 @ 10:45am

Malware Purveyors Targeting Pirate Sites With Bogus DMCA Takedown Notices

from the tough-to-tell-who's-wearing-the-white-hats-atm dept

DMCA takedown abuse is nothing new. But it normally involves bogus takedown requests claiming copyright violations. TorrentFreak has uncovered a new form of abuse that involves the DMCA, but unlike normal copyright claims, doesn't allow the target to contest the claims.

One of the most recent scams we’ve seen targets various popular game piracy sites.

The notices in question are seemingly sent by prominent names in the gaming industry, such as Steam and Ubisoft. However, the sudden flurry of takedown requests appears to be initiated by scammers instead.

These scammers appear to be going after competitors. The entities behind this wave of bogus takedown notices are gaming Google's search engine via DMCA notices. Much like shady characters trying to vanish unflattering news and blog posts from Google's search results, these shady characters are trying to move their malicious sites higher in the rankings by targeting similar sites offering a similar selection of cracked software.

But rather than go with a straight copyright claim which could be contested and result in a reinstatement, the scammers are using another part of the DMCA -- one that provides no adversarial process.

[T]he notices are not regular DMCA takedowns. Instead, they are notifications that the URLs circumvent technological protection measures such as DRM, which is separately covered in the DMCA.

“Google has been notified that the following URLs distribute copyright circumvention devices in violation of 17 U.S.C. § 1201,” Google informed the site owner.

“Please find attached the notice we received. There is no formal counter notification process available under US law for circumvention, so we have not reinstated these URLs. If you dispute that you are distributing circumvention devices, please reply with a further explanation.”

That's the way the law works. Takedown notices claiming DRM circumvention (most pirated software involves some sort of circumvention) cannot be contested. Google is allowing replies in these cases, but what it's doing isn't mandated by law. Google, however, is obliged to comply with requests unless it feels the complaint isn't legitimate. How strongly it feels sometimes depends on the manpower available... or the attention the issue is receiving elsewhere on the web.

The notices collected by TorrentFreak hardly seem legit, even with only a cursory review. They're littered with typos and make unrealistic/absurd claims, like supposedly filing on behalf of Steam even though Steam doesn't actually own or produce the game titles listed in the takedown notice.

As TorrentFreak notes, thousands of URLs have already been taken down, pushing malware-loaded sites higher in search listings. Internet users seeking free games now may find they've picked up bitcoin-mining hitchhikers after visiting these scammers' sites.

The good news is Google is paying more attention to these takedown requests and has reinstated some URLs targeted by these malware purveyors. But the fact that this sort of search engine gaming is still effective is further proof the DMCA enables abuse by treating the accuser as inherently credible while limiting the options of those falsely accused.


Posted on Techdirt - 12 December 2018 @ 3:23am

UK Spies Say They're Dropping Bulk Data Collection For Bulk Equipment Interference

from the I-mean,-they'll-still-use-both... dept

UK spies are changing their minds. Rapidly. Sure, bulk data collection is cool. But you know what's really cool? Mass interference with electronic devices.

At the time the Investigatory Powers Bill was passing through Parliament – it was signed into law in 2016 – EI [Electronic Interference] hadn't been used, but it was already seen as an alternative to bulk interception.

However, it was expected to be authorised through targeted or targeted thematic warrants; as then-independent reviewer of terrorism David Anderson wrote at the time, "bulk EI is likely to be only sparingly used".


During the passage of the Investigatory Powers legislation, he said, the government anticipated bulk EI warrants would be "the exception", and "be limited to overseas 'discovery' based EI operations".

But with encryption increasingly commonplace, the spies want the exception to edge towards becoming the rule.

"Used sparingly" is now "used by default." Why? The good old baddie, encryption. A letter [PDF] written by security minister Ben Wallace says encryption is making bulk data collections less useful.

Following a review of current operational and technical realities, GCHQ have revisited the previous position and determined that it will be necessary to conduct a higher proportion of ongoing overseas focused operational activity using the bulk EI regime than was originally envisaged.

The lawfulness depends on the "double lock" process. The government alone can't give GCHQ permission to engage in bulk EI. There's a judge involved now, making this more of a warrant process than a subpoena process, to make a somewhat clumsy analogy. According to this report, bulk EI is still waiting in the wings. If true, it's a good thing because the double-lock process didn't actually go into effect until the end of November.

What bulk EI is remains somewhat of a mystery. But some of what's described in a 2016 report [PDF] containing several hypotheticals sounds like a lot of large-scale intrusion, ranging from Stingray-esque device location to tactics that have been left up to the imagination thus far.

This sounds a bit like the FBI's child porn hunting Network Investigative Technique: serving up malware to collect information on devices and their users.

Intelligence from sources including bulk interception identified a location in Syria used by extremists. However the widespread use of anonymisation and encryption prevented GCHQ from identifying specific individuals and their communications through bulk interception. GCHQ then used EI under an ISA authorisation (under the Bill this would be done using a targeted thematic EI warrant) to identify the users of devices in this location.

This may be a theoretical Stingray deployment:

A group of terrorists are at a training camp in a remote location overseas. The security and intelligence agencies have successfully deployed targeted EI against the devices the group are using and know that they are planning an attack on Western tourists in a major town in the same country, but not when the attack is planned for. One day, all of the existing devices suddenly stop being used. This is probably an indication that the group has acquired new devices and gone to the town to prepare for the attack. It is not known what devices the terrorists are now using. The security and intelligence agencies would use bulk EI techniques to acquire data from devices located in the town in order to try to identify the new devices that are being used by the group.

Whatever bulk electronic interference ends up being when it's actually deployed, GCHQ is sure of one thing: the less it knows about its targets, the more justified it is using it in bulk.

As the cell members can only be identified following considerable target discovery effort, a bulk EI warrant is suitable.

Whatever civil liberties concerns this program raises will probably be dismissed quickly. GCHQ's hypotheticals involve terrorism suspects overseas and child porn site operators -- the least sympathetic targets available. Foreigners are fair game for bulk anything and no one wants to side with child exploiters, even if they technically share the same civil liberties/rights.

The exception is the rule. This is how it works for those who promise the most worrying aspects of surveillance programs will be saved for the edge cases. Sooner or later, the edge cases are just cases, and no one is interested in walking anything back.


Posted on Techdirt - 11 December 2018 @ 12:04pm

When Not Hiding Cameras In Traffic Barrels And Streetlights, The DEA Is Shoving Them Into... Vacuums?

from the DEA-surveillance-sucks dept

If it exists, the DEA probably wants to stash a camera in it.

A Denair, California-based company called the Special Services Group, LLC won a $42,595 DEA purchase order at the end of November for a “custom Shop Vac concealment with Canon M50B.” Canon describes the M50B as a “high-sensitivity…PTZ [Pan-Tilt-Zoom] network camera” that “captures video with remarkable color and clarity, even in very low-light environments.” The M50B retails for about $3,400; the acquisition is being funded by the DEA’s Office of Investigative Technology and is presumably intended to assist agents in a specific operation, rather than for wider, passive monitoring.

This almost sounds like an ultra-low tech version of the NSA's hardware interdiction program. The NSA intercepts computer equipment to install hardware/software backdoors. The DEA's vacuum camera possibly could be stashed in a Shop Vac en route to a targeted person/business. Either that or a DEA agent/informant is going to pretend to be a janitor and wheel around a loaded Shop Vac to capture footage.

It's weird but it's pretty much in line with the DEA's procurement history. A report from Quartz last month showed the DEA was buying cameras concealed in streetlights, traffic barrels, and speed-display road signs. The last one on the list doesn't house ordinary cameras, but rather automated license plate readers.

Are there Constitutional concerns? Sure. They're pretty minimal in areas where any activity could be observed by a member of the public. But they're not nonexistent. And much of this surveillance activity occurs with the silent blessing of the city governments that own the repurposed streetlights. The government has occasionally pushed for upgraded streetlight systems, with the main "improvement" being the addition of surveillance devices.

Chad Marlow, a senior advocacy and policy counsel for the American Civil Liberties Union, says efforts to put cameras in street lights have been proposed before by local law enforcement, typically as part of a “smart” LED street light system.

“It basically has the ability to turn every streetlight into a surveillance device, which is very Orwellian to say the least,” Marlow told Quartz. “In most jurisdictions, the local police or department of public works are authorized to make these decisions unilaterally and in secret. There’s no public debate or oversight.”

The Shop Vac+camera is more problematic. Vacuums are typically used in areas not readily visible to the public. This narc vac deployment hopefully comes with a warrant attached. Someone consenting to having an area vacuumed isn't the same as consenting to a search. This device can do both at the same time, which would appear to be a Fourth Amendment issue if there's no accompanying paperwork.

Of course, it could be argued allowing someone like a DEA agent/informant into a private area is tacit consent to be searched. After all, anything seen by the camera would be seen by its operator. Anything illegal observed by this third party could be reported to law enforcement. Utilizing a camera as another set of eyes doesn't undercut this Fourth Amendment end-around. (If it's a DEA informant deploying the vacu-cam, the government can't claim it was a private search, so there's that...) The best solution is don't do illegal stuff where it can be observed by anyone -- or anything -- you don't know inside and out.

I wouldn't hold my breath waiting for this tactic to be discussed in court. There's nothing particularly secretive about the tech angle, especially when there are publicly-available acquisition documents directly referencing both the means and the method. But I'm sure the DEA will still argue discussing a camera in a Shop Vac in open court will jeopardize future/ongoing investigations. However this procurement pans out, it's probably safe to say more than a few pieces of cleaning equipment underwent exploratory disassembly following the publication of the DEA's acquisition documents.


Posted on Techdirt - 11 December 2018 @ 10:45am

Unsolicited Dick Pics Prompt Stupid, Unworkable Legislative Response From New York Lawmakers

from the I'm-sure-the-smart-guys-at-the-NYPD-can-figure-it-out dept

Any sufficiently advanced technology is indistinguishable from magic... or, in this case, from a trenchcoat-wearing lurker. Apple's AirDrop feature, which allows anyone to share files with anyone else nearby using it, has become the new way to send unsolicited dick pics.

Granted, there's a bit of a perfect-storm aspect that sets it apart from the ChatRoulettes of the world. Recipients must have AirDrop set to accept files from "Everyone" (rather than just people on their Contacts list) and be within Bluetooth range of the amateur photographer.
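The two preconditions can be sketched as a toy predicate. This is purely illustrative: the function, fields, and 10-meter range figure are assumptions, not Apple's API or documented AirDrop behavior.

```python
# Illustrative only: a toy model of the two conditions that make an
# AirDrop recipient reachable by strangers. Names and the range figure
# are hypothetical, not drawn from any Apple API.
from dataclasses import dataclass

BLUETOOTH_RANGE_METERS = 10  # rough short-range Bluetooth assumption

@dataclass
class Device:
    receiving: str     # "off", "contacts_only", or "everyone"
    distance_m: float  # distance from the would-be sender

def exposed_to_strangers(device: Device) -> bool:
    """Both conditions must hold: discoverable by everyone AND in range."""
    return (device.receiving == "everyone"
            and device.distance_m <= BLUETOOTH_RANGE_METERS)

print(exposed_to_strangers(Device("everyone", 3.0)))       # True
print(exposed_to_strangers(Device("contacts_only", 3.0)))  # False
print(exposed_to_strangers(Device("everyone", 50.0)))      # False
```

Flipping either condition off, as the article notes, ends the exposure entirely.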

Of course, since it can conceivably happen to someone, it has happened to someone. And the New York Post was there to report on the easily-avoidable menace.

Britta Carlson, 28, was riding the uptown 6 train to a concert on July 27 when a mysterious message popped up on her smartphone.

“iPhone 1 would like to share a note with you,” read the note sent at 6:51 p.m. She hit “Accept” and was horrified by what she saw. “It was just a huge close-up picture of a disgusting penis,” said Carlson, of Bushwick, Brooklyn. The message was titled “Straw” and was sent by an anonymous stranger.

“It really felt like someone had actually just flashed me.”

Well, it's a reasonable digital facsimile. The feeling is not misplaced. The response from legislators -- who feel compelled to do something when people who have removed every barrier against digital flashing are digitally flashed -- is, however, more than a little misplaced.

Let's not blame the victim. Setting AirDrop to accept messages from "Everyone" is risky, but that doesn't justify the distribution of up-pants photography no one has asked for. But let's not jump into the legislation-mobile just because the New York Post found two New Yorkers in a city of 8 million who ended up with unwanted junk in their AirDrop trunks.

The New York Times found a couple more women who endured the same experience while riding public transportation, using their stories to introduce its reporting on a bill put forward by New York City legislators.

The two women were victims of what has become known as cyber flashing, a growing trend of technology-enabled sexual harassment. It has become so common that two lawmakers introduced a City Council bill on Wednesday to explicitly make it a crime, punishable by a $1,000 fine or up to a year in jail.

It's an anti-penis pic bill targeting AirDrop that's about as dumb as that string of words sounds. Here's its champion, taking a tough anti-dick stance:

“In the old days, you had to have a long trench coat and good running shoes,” said Councilman Joseph Borelli, a Republican from Staten Island who is co-sponsoring the anti-flashing bill. “Technology has made it significantly easier to be a creep.”

Both of Borelli's statements are substantially true, but it's still unclear how the law will work or if it even can be made workable in practical terms. The bill would add "unsolicited intimate images" to the state's existing harassment law, but nothing has been said by supporters of the bill that indicates they've really thought this through.

Sarah Edwards of mac4n6 has thought this through. While a certain amount of data is logged by Apple when AirDrop is used, it's unlikely the digital detritus left behind on the victim's phone will be of much use to law enforcement. Since the perp isn't going to turn himself in, cops are left with only the complainant's phone to work with. And Edwards says there's not enough there to work with.

The lack of attribution artifacts at this time (additional research pending) is going to make it very difficult to attribute AirDrop misuse. At best, if the cops are provided each device, they can pair the connections up – however this will require access to file system dumping services like Cellebrite CAS, GrayKey from GrayShift or performing a jailbreak to get the most accurate analysis. If the devices are named appropriately (ie: If Jen Mack’s iPhone was actually named ‘Jen Mack’s iPhone’) this may make an analysis easy, however I can see an instance where this can be abused to imitate someone else.
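Edwards' point about device names being both the easiest and the weakest attribution artifact can be sketched as a toy matching exercise. The log format and fields below are invented for illustration; real AirDrop artifacts differ and, per Edwards, often lack sender attribution entirely.

```python
# Toy illustration of why device-name-based attribution is weak:
# two different devices can advertise the identical human-readable name.
victim_log = [{"sender_name": "Jen Mack's iPhone", "sender_id": "aa:11"}]

# Candidate devices recovered later (fields invented for illustration)
candidates = [
    {"name": "Jen Mack's iPhone", "hw_id": "aa:11"},  # the actual device
    {"name": "Jen Mack's iPhone", "hw_id": "bb:22"},  # an imposter
]

matches = [c for c in candidates
           if c["name"] == victim_log[0]["sender_name"]]

# The name alone matches BOTH devices. Tying the transfer to hardware
# requires a file system dump of each candidate device, which is exactly
# the Cellebrite/GrayKey-level access Edwards describes.
print(len(matches))  # 2
```

Which is to say: the artifact most likely to survive on the victim's phone is also the artifact easiest to spoof.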

First, they're going to need the perp's phone. (And if they already have that, they likely can find the pic that was sent.) But how likely is that scenario? AirDrop pervs aren't going to be turning themselves in and/or agreeing to forensic phone searches. Are cops going to get a warrant to handle a misdemeanor harassment charge involving a picture sent into the ether to be swept up by passing AirDrop users? It's not like they're dealing with targeted harassment which might make it easier to identify the person behind the lewd photo.

Is any judge going to OK a full-scale search for a photo used in a drive-by digital flashing? Sure, some might, but it seems unlikely law enforcement is going to put its best minds and expensive hacking tools to work to gather data that probably won't even help them track down the sender.

And where do you go from there? You have a device ID you can tie to a person, but you still have to find that person to make the charge stick. Will the NYPD be wardriving with Stingrays to ring up petty harassment charges?

There are so many aspects of this that make zero sense. But what do you expect from reactionary lawmaking triggered by a New York Post article? The bill is so useless even one of its sponsors admits it's little more than anti-dick pic PR:

Donovan J. Richards, a Democratic councilman from Queens and a co-sponsor of the bill, said the legislation was intended as a bipartisan effort to raise awareness — and to remove the sense of impunity that may embolden those sending the images.

So, to sum up, some people found out what happens when you turn your device into a public mailbox and lawmakers managed to turn some bad experiences into bad legislation. Of course, the blame ultimately lies with the jerkoffs who can't keep their appendages to themselves. If people weren't idiots, we wouldn't need to discuss idiotic legislation quite so often. Idiots need to keep their private parts private and idiots with lawmaking power need to actually think about what they want done before they start doing something.


Posted on Techdirt - 11 December 2018 @ 3:23am

Microsoft Posts List Of Facial Recognition Tech Guidelines It Thinks The Government Should Make Mandatory

from the good-rules,-cloudy-motive dept

Earlier this year, Microsoft faced backlash for appearing to be working with ICE to provide it with facial recognition technology. A January blog post from its Azure Government wing stated it had acquired certification to set up and manage ICE cloud services. The key bit was this paragraph, which definitely made it seem Microsoft was joining ICE in the facial recognition business.

This ATO [Authority to Operate] is a critical next step in enabling ICE to deliver such services as cloud-based identity and access, serving both employees and citizens from applications hosted in the cloud. This can help employees make more informed decisions faster, with Azure Government enabling them to process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.

Roughly five months later, this blog post was discovered, leading to Microsoft receiving a large dose of social media shaming. A number of its own employees signed a letter opposing any involvement at all with ICE. A July blog post from the president of Microsoft addressed the fallout from the company's partnership with ICE. It clarified that Microsoft was not actually providing facial recognition tech to the agency and laid out a number of ground rules the company felt would best serve everyone going forward.

This starting point has now morphed into a full-fledged rule set Microsoft will apparently be applying to itself. Microsoft's Brad Smith again addresses the positives and negatives of facial recognition tech, especially when it's deployed by government agencies. The blog post is a call for government regulation, not just of tech companies offering this technology, but for some internal regulation of agencies deploying this technology.

Smith's post is long, thoughtful, and detailed. I encourage you to read it for yourself. But most of it falls under these headings -- all issues Microsoft believes should be addressed via federal legislation.

First, especially in its current state of development, certain uses of facial recognition technology increase the risk of decisions and, more generally, outcomes that are biased and, in some cases, in violation of laws prohibiting discrimination.

Second, the widespread use of this technology can lead to new intrusions into people’s privacy.

And third, the use of facial recognition technology by a government for mass surveillance can encroach on democratic freedoms.

The three points affect everyone involved: the government, facial recognition tech developers, and private sector end users. It asks the government to police itself, as well as any vendors it deals with. It's a big ask, especially since the government has historically shown minimal restraint when exploiting new surveillance technology. It often falls on the nation's courts to regulate the government's tech use, rather than the government being proactively cautious when rolling out new tools and toys.

But it also demands a lot from the private sector and suggests those who can't follow these rules Microsoft has laid out shouldn't be allowed to offer their services to the government. Here's what Smith proposes as a baseline for the tech side:

Legislation should require tech companies that offer facial recognition services to provide documentation that explains the capabilities and limitations of the technology in terms that customers and consumers can understand.

New laws should also require that providers of commercial facial recognition services enable third parties engaged in independent testing to conduct and publish reasonable tests of their facial recognition services for accuracy and unfair bias.


While human beings of course are not immune to errors or biases, we believe that in certain high-stakes scenarios, it’s critical for qualified people to review facial recognition results and make key decisions rather than simply turn them over to computers. New legislation should therefore require that entities that deploy facial recognition undertake meaningful human review of facial recognition results prior to making final decisions for what the law deems to be “consequential use cases” that affect consumers. [...]

Finally, it’s important for the entities that deploy facial recognition services to recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers. This provides additional reason to ensure that humans undertake meaningful review, given their ongoing and ultimate accountability under the law for decisions that are based on the use of facial recognition.

This is the burden on the tech side. What the government needs to do is just not use it for mass surveillance or the continuous surveillance of certain people. Microsoft suggests warrants for continuous surveillance using facial recognition tech with the expected exceptions for emergencies and public safety risks.

When combined with ubiquitous cameras and massive computing power and storage in the cloud, a government could use facial recognition technology to enable continuous surveillance of specific individuals. It could follow anyone anywhere, or for that matter, everyone everywhere. It could do this at any time or even all the time. This use of facial recognition technology could unleash mass surveillance on an unprecedented scale.


We must ensure that the year 2024 doesn’t look like a page from the novel “1984.” An indispensable democratic principle has always been the tenet that no government is above the law. Today this requires that we ensure that governmental use of facial recognition technology remain subject to the rule of law. New legislation can put us on this path.

It's all good stuff that would protect citizens and curb abusive tech deployment if implemented across the board by tech companies. But that would likely require a legislative mandate, according to Microsoft. The end result is Microsoft asking the same entity it feels may abuse the tech to lay down federal guidelines for development and deployment.

I don't have any complaints about what Microsoft's proposing. I only question why it's proposing it. When a large corporation starts asking for government regulation, it's usually because increased regulation would keep the market smaller and weed out a few potential competitors. I wouldn't say this is the only reason Microsoft is handing out a long wish list of government mandates, but there's no way this isn't a factor.

Microsoft's management likely has genuine concerns about this tech and its future uses. Somewhat coincidentally, it's also in the best position to make these arguments. Other than a supposed misunderstanding about selling facial recognition tech to ICE, the company hasn't set its reputation on fire and/or been caught handing the government loads of tools that can be repurposed for oppression.

Other players in the facial recognition market have already ceded the high ground. Amazon has been handing out tech to law enforcement agencies even as Congress members are demanding answers from the company about its facial recognition software. Google may not be pushing facial recognition tech, but with it currently engaged in building an oppressor-friendly search engine for China's government, it can't really portray itself as a champion of civil liberties. Facebook has used facial recognition tech for years, but is currently so toxic no one really wants to hear what it has to say about privacy or government surveillance. Apple may have some guidance to offer, but the DOJ likely uses Tim Cook headshots for dartboards, making it less than receptive to the company's thoughts on biometric scanning. As for the rest of the players in the field -- the multiple contractors who sell surveillance equipment to governments all over the world -- they have zero concerns about government abuse or respecting civil liberties, so Microsoft's post may as well be written in Etruscan for all they'll get out of it.

I'm in firm agreement with Brad Smith/Microsoft that facial recognition tech is a threat to privacy and civil liberties. I also believe the companies crafting/selling this tech should vet their products thoroughly and be prepared to shut them down if they can't eliminate bias or if products are being used to conduct pervasive, unjustified surveillance. I don't believe most tech companies will do this voluntarily and know for a fact the government will not actively police use of these systems. The status quo -- zero accountability from governments and government contractors -- cannot remain in place. The courts may right some wrongs eventually, but until then, suppliers of facial recognition technology are complicit in the resulting civil liberties violations.

I applaud Microsoft for calling for action. But I will hold that applause until it becomes apparent Microsoft will maintain these standards internally, with or without a legislative mandate. If other companies choose to sign on as… I don't know… ethical surveillance tech dealers, that would be great. Asking the government to regulate tech development isn't the preferred course of action, but a surveillance tech Wild West isn't an ideal outcome either. Ideally, the government would set higher standards for adoption and deployment of tech along the lines Microsoft has proposed, policing itself by vetting its vendors better. But if the federal government was truly interested in limiting its abuse of tech developments, we would have seen some evidence of it already.

These suggestions should be voluntarily adopted by other tech companies, if for no other reason than it insulates them from elimination should the government decide it's going to up its acquisition and deployment standards. Microsoft scores a PR win, if nothing else, simply by being first. I appreciate its staking out its stance on this issue, but remain cautiously pessimistic about the company's ability to live up to its own standards.


Posted on Techdirt - 10 December 2018 @ 2:09pm

Atlanta Cops Caught Deleting Body Cam Footage, Failing To Activate Recording Devices

from the all-the-accountability-a-lack-of-internal-accountability-can-provide! dept

Atlanta, Georgia, August 23, 2016:

Officials are promising more transparency on the part of law enforcement, and greater trust between cops and the community. The body cameras “will strengthen trust among our officers and the communities they serve by providing transparency to officer interactions,” said Atlanta Mayor Kasim Reed this past week in announcing a purchase.

Oh, we were all so very young then. Look at us (including me!), pointing to the increasing adoption of body cameras as the ushering in of a new era of transparency and accountability. Didn't take long for this lily to get unceremoniously de-gilded.

Cameras are great tools of accountability. They just can't be controlled and maintained by cops. Two years after promising a better police force brimming with accountable officers steadily working to rebuild relationships with the citizens they police, Atlanta residents are being informed their servants/protectors are cheats and liars.

The audit looked at a random sample of 150 videos from officers’ body cameras. In more than half the cases, officers failed to activate and deactivate their cameras at the required time, the audit said.

Officers also miscategorized 22 of the videos, including a use of force incident. Auditors said mislabeling the videos may have led to some being deleted prematurely.

And the audit said that officers failed to capture two-thirds of dispatched calls between November 2017 and May 2018.

These results shouldn't shock Atlanta residents or readers of this site. They don't even shock Atlanta Police officials. Police Chief Erika Shields says she's "not happy" with the results of the audit, but also "not surprised." She excuses her officers' actions in the worst possible way:

"I knew that what we are asking of officers is a culture shift."

It's your job to make sure the "culture" actually "shifts," Chief Shields. That it hasn't budged despite the addition of body cameras says a whole lot about the culture at the top of the PD. Whatever discipline Shields has meted out (she only says it happens, not how frequently or severely) clearly isn't enough. And the culture that remains in place in the Atlanta PD is downright nasty.

Auditors identified 64 videos “that were deleted by users who should not have had been authorized to delete videos from the system” from November 2016 to 2018.

Officer use-of-force incident videos are supposed to be handled differently. Supervisors are supposed to upload them and they to be labeled properly in case the department or the public needs to review them later.

But the audit found APD supervisors routinely didn’t understand their responsibilities. One zone supervisor told auditors he was unaware that it was his job to upload use of force videos.

Officers know the system is flawed and abuse it. Those in charge of securing recordings officers may not want retained either don't know what they're doing or are playing dumb when questioned by auditors. At the top of the miserable heap is a chief who has allowed flagrant policy violations to occur under her watch.

An official worth a damn would never express their lack of surprise at this sort of behavior from underlings. There should be shock and dismay at these results, not a shrug of "They're cops, what can you do?" emanating from the top person in Atlanta law enforcement. If that's the official reaction, the next audit will just find more of the same.


Posted on Techdirt - 10 December 2018 @ 11:59am

Federal Courts Aren't ATMs, Angry Judge Reminds Copyright Troll

from the fully-justified-verbal-abuse dept

I will never tire of judges handing down benchslaps to IP trolls. Perhaps I'll never tire of it because it just doesn't happen often enough. Or perhaps it cannot happen often enough, given the sheer amount of troll litigation judges preside over. Not every dismissed case can be given the court's full attention. But this opinion, from Judge Royce Lamberth, should certainly get Strike 3 Holdings' attention.

The brutal nine-page opinion [PDF] opens with this caustic appraisal of the porn company's business model. (h/t Eric Goldman)

Strike 3 is [...] a copyright troll. Its swarms of lawyers hound people who allegedly watch their content through Bittorrent, an online service enabling anonymous users to share videos despite their copyright protection. Since Bittorrent masks users' identities, Strike 3 can only identify an infringing Internet protocol (IP) address, using geolocation technology to trace that address to a jurisdiction. This method is famously flawed: virtual private networks and onion routing spoof IP addresses (for good and ill); routers and other devices are unsecured; malware cracks passwords and opens backdoors; multiple people (family, roommates, guests, neighbors, etc.) share the same IP address; a geolocation service might randomly assign addresses to some general location if it cannot more specifically identify another. [...]

Simply put, inferring the person who pays the cable bill illegally downloaded a specific file is even less trustworthy than inferring they watched a specific TV Show. But in many cases, the method is enough to force the Internet service provider (ISP) to unmask the IP address's subscriber. And once the ISP outs the subscriber, permitting them to be served as the defendant, any future Google search of their name will turn up associations with the websites Vixen, Blacked, Tushy, and Blacked Raw. The first two are awkward enough, but the latter two cater to even more singular tastes.
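The court's point about shared connections can be made concrete with a toy probability sketch. The numbers below are invented for illustration, not drawn from the opinion; they only show how quickly the "subscriber = infringer" inference decays before VPNs, open Wi-Fi, or geolocation error even enter the picture.

```python
# Toy model, not from the opinion: if a household connection is used by
# n people, and each is equally likely to have torrented the file, the
# naive chance the bill-payer did it is 1/n.
def subscriber_probability(household_size: int,
                           freeloaders: int = 0) -> float:
    """Chance the named subscriber is the infringer under a uniform prior."""
    users = household_size + freeloaders
    return 1 / users

print(subscriber_probability(1))                 # 1.0 (best case for Strike 3)
print(subscriber_probability(4))                 # 0.25 (family of four)
print(subscriber_probability(4, freeloaders=2))  # ~0.167 (plus Wi-Fi guests)
```

And that uniform prior is generous to the plaintiff: it assumes the infringer is on the subscriber's connection at all, which spoofed IPs and geolocation errors do not.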

The court goes on to point out it isn't copyright law that vindicates the plaintiff but rather "the law of large numbers." Strike 3 has filed 1,849 copyright infringement lawsuits across the nation (according to Judge Lamberth's count), resulting in an untold number of settlements. And that's all this really is: another attempt to force someone, anyone, to cough up some money rather than face off in court. As Judge Lamberth notes, copyright trolls "consume 58%" of the federal court system's copyright lawsuit docket. They're a burden on the courts and a burden on the public. Judge Lamberth calls Strike 3 on its copious bullshit:

These serial litigants drop cases at the first sign of resistance, preying on low-hanging fruit and staying one step ahead of any coordinated defense. They don't seem to care about whether defendant actually did the infringing, or about developing the law. If a Billy Goat Gruff moves to confront a copyright troll in court, the troll cuts and runs back under its bridge. Perhaps the trolls fear a court disrupting their rinse-wash-and-repeat approach: file a deluge of complaints; ask the court to compel disclosure of the account holders; settle as many claims as possible; abandon the rest.

Rather than be an accomplice in Strike 3's shady game, Judge Lamberth denies its request for discovery. Without discovery, Strike 3 can't identify or serve a defendant, which effectively prevents it from refiling in this court, even though the dismissal without prejudice technically leaves that door open.

No copyright troll is going to try to keep a case alive in Lamberth's court, not after this damning summation of their collective efforts:

Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM.

This will limit Strike 3's litigation in this particular jurisdiction. But that's what venue shopping is for. At least defendants have another opinion to quote when fighting back against trolls like Strike 3.


Posted on Techdirt - 10 December 2018 @ 9:39am

Australian Government Passes Law Forcing Tech Companies To Break Encryption

from the nice-one,-idiots dept

The Australian Parliament has passed a law mandating compelled access to encrypted devices and communications. The legislation was floated months ago and opened up for comment, but it appears the Australian government has ignored the numerous complaints that such a law would violate civil liberties and otherwise be an all-around bad idea. But that's OK. It's completely justified, according to the Prime Minister.

Scott Morrison, Australia’s prime minister, told local radio on Thursday that encryption laws were necessary to target Islamist terrorism, paedophile networks and organised crime. “These laws are used to catch the scum that try to bring our country down and we can’t give them a leave pass,” he said.

Sure, and if innocent people find their communications compromised by government-mandated holes, so be it. The law was rushed through Parliament in a late evening session since every moment wasted was just one more leave pass for scum. Legislators promise to review the law in 18 months to ensure it hasn't been abused or created more problems than it's solved, but let's be honest here: how often does legislation like this get clawed back after a periodic review? It's never happened in the history of the laws governing our surveillance programs, even after leaked docs exposed unconstitutional practices and widespread abuse of surveillance authorities.

Here's a short summary of the new powers the legislation hands over to law enforcement and national security agencies:

The law enables Australia’s attorney-general to order the likes of Apple, Facebook, and Whatsapp to build capability, such as software code, which enables police to access a particular device or service.

Companies may also have to provide the design specifications of their technology to police, facilitate access to a device or service, help authorities develop their own capabilities and conceal the fact that an agency has undertaken a covert operation.

This law will go into effect before the end of the year. How it will go into effect is anyone's guess. The law provides for compelled access -- including the creation of new code -- but no one seems to have any idea what this will look like in practice. The new backdoors-in-everything-but-name will be put in place by developers/manufacturers at the drop of a court order, with the onus on the smart people in the tech business to iron out all of the problems.

The law only prevents the government from demanding that "systemic weaknesses" be built into devices or programs. Everything else is left to the imagination, including the actual process of introducing code changes in multi-user platforms or targeted devices.

An actual software developer, Alfie John, has put together a splendid Twitter thread pointing out the flaws in the government's assumptions about software development. Since the compelled participants are forbidden from discussing surveillance court orders with anyone (which would include coworkers, supervisors, the general public, etc.), these requested alterations would have to be implemented in secret. The problem is that coding changes go through a number of hands before they go live. Either everyone involved would need to be sworn to secrecy (which also means being threatened with jail time) or the process falls apart. Changes ordered by a court could be rejected by those higher up on the chain. Worse, the planned encryption hole could see the compelled coder being viewed as a data thief or foreign operative or whatever.

Law enforcement is going to have to make everyone involved in the product/device complicit and covered under the same prison threat for this to work. The more people it's exposed to, the higher the chance of leakage. And if the code will break other code -- or the request simply can't be met due to any number of concerns -- the government may ask the court to hold the company and its personnel in contempt for their failure to achieve the impossible.

To make matters worse, the company targeted with a compelled access request may be monitored for leaks before and after the request is submitted, putting employees under surveillance simply because of their profession.

In some cases, the only weakness that can be introduced will be systemic, which will run contrary to the law. How will the government handle this inevitable eventuality? Will it respect the law or will it simply redefine the term to codify its unlawful actions?

Even if all of this somehow works flawlessly, users of devices and communications platforms will be put at risk. Sure, the compelled access might be targeted, but it will teach users to distrust software/firmware updates that may actually keep them safer. The government may even encourage the forging of credentials or security certificates to ensure its compelled exploits reach their targets. And just because these backdoors theoretically only allow one government agent in at a time, that doesn't mean they aren't backdoors. They may be slightly more difficult for malicious actors to exploit, but once the trust is shattered by compelled access, other attack vectors will present themselves.
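The update-trust problem above can be sketched in a few lines. This is a simplified stand-in (an HMAC in place of real code-signing, with invented keys and payloads), but the point survives the simplification: signature verification tells a user the vendor signed an update, not that the update is benign, so a compelled, vendor-signed backdoor passes every check a legitimate patch does.

```python
# Sketch with invented keys/payloads: a standard signed-update check
# cannot distinguish a legitimate update from a compelled, backdoored
# one, because both carry a valid vendor signature. HMAC stands in for
# real asymmetric code-signing here.
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # stand-in for the vendor's signing key

def sign(payload: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(payload), signature)

legit_update = b"patch security hole"
compelled_update = b"patch security hole + law-enforcement backdoor"

# The vendor is compelled to sign both; verification accepts both.
print(verify(legit_update, sign(legit_update)))          # True
print(verify(compelled_update, sign(compelled_update)))  # True
```

Once users understand this, the rational response is to distrust updates generally, which is exactly the security regression the article describes.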

It's a terrible law justified by the spoken equivalent of a bumper sticker. And it's going to end up doing serious damage -- not just in Australia, but all over the world. Bad legislation spreads like a communicable disease. If one democracy says this is acceptable, other free-world leaders will use its passage as a permission slip for encryption-targeting mandates of their own.


Posted on Techdirt - 10 December 2018 @ 3:41am

New York Police Union Says More Reporting On Stops/Frisks Will Hurt The NYPD's Effectiveness

from the 'we're-at-our-best-when-we're-not-held-accountable' dept

If anything might make police-community relations better, the Patrolmen's Benevolent Association (PBA) -- the union representing NYPD officers -- is against it. PBA President Pat Lynch has come out against body cameras, community policing, and even his own union members.

The battle over the court-ordered revamping of the NYPD's stop-and-frisk program rages on five years after Judge Scheindlin found it to be unconstitutional. So does the PBA's resistance: the union is now arguing that keeping data on stops is throwing sand in the NYPD's gears.

The Patrolmen’s Benevolent Association swiftly condemned an order issued Nov. 20 by a Federal Judge concerning stop-and-frisk data that it said would further discourage “proactive policing in New York City.”

The directive from U.S. District Judge Analisa Torres requires the NYPD, in consultation with an outside monitor, to submit for approval a plan to implement “a program for systematically receiving, assessing, and acting on information regarding adverse findings on the conduct of police officers involving illegal stops or illegal trespass enforcements.”

The NYPD has been ordered to document its stops numerous times since the 2013 decision. And it has continued to fail to do so. Officers blame a lack of instruction and/or clarity from upper management. Upper management blames multiple court orders and outside oversight for its inability to deliver clear instructions. And the PBA blames the whole mess on officers being forced to engage in Constitutional policing, which apparently is the opposite of "proactive" policing.

What the PBA is agitating for is a return to the halcyon days of stop-and-frisk, when NYPD officers performed hundreds of thousands of stops a year, a majority of them targeting the city's minorities. Constitutional policing would trim hundreds of man-hours from the production of mandated reports, but the PBA apparently wants nothing to do with keeping officers on patrol rather than tied up doing internal bookkeeping for the DA's office.

Judge Torres said she was requiring that the plan to provide extensive information on the program include “(a) declinations of prosecutions by the District Attorneys in New York City; (b) suppression decisions by courts precluding evidence as a result of unlawful stops and searches; (c) court findings of incredible testimony by police officers; (d) denials of indemnification and/or representation of police officers by the New York City Law Department; and (e) judgments and settlements against police officers in civil cases where, in the opinion of the New York City Law Department, there exists evidence of police malfeasance.”

The PBA's response? To deride the accountability mandates as "unnecessary" -- an abuse of the court's "narrow authority" that will somehow wreak havoc on the NYPD's rank-and-file. This production of information will "end proactive policing in New York City," according to PBA president Pat Lynch.

Fortunately, Pat Lynch has long been recognized as a blowhard who seldom has the full support of the officers he represents. According to this report, the PBA was "quietly critical" of the PD's stop-and-frisk program when it was being abused to its fullest extent. Now that it's being deployed in a more Constitutional fashion -- resulting in a severe decline in stops -- the PBA wants to pretend the same program it criticized as "overused" is now a critical aspect of New York law enforcement.


Posted on Techdirt - 7 December 2018 @ 1:30pm

Indiana Police Chief Promoting As Many Bad Cops As He Can To Supervisory Positions

from the Welcome-to-Zero-Accountability,-Indiana dept

Why is routine police misconduct a problem police departments can't seem to solve? It's a mystery, says Elkhart, Indiana law enforcement.

Twenty-eight of the Elkhart Police Department’s 34 supervisors, from chief down to sergeant, have disciplinary records. The reasons range from carelessness to incompetence to serious, even criminal, misconduct.

Fifteen of them have served suspensions, including [Police Chief Ed] Windbigler himself, who was once suspended for three days and ordered to pay punitive damages in a federal lawsuit alleging excessive force.

Change starts at the top... unless it's stagnation you're really looking for. Then all you have to do is put someone as questionable as the officers he oversees in charge of the whole mess.

This report -- put together by ProPublica and the South Bend Tribune -- compiles information from public records and court documents to paint a disturbing picture of the Elkhart police force. Making bad cops supervisors ensures misconduct by officers will never be fully addressed.

One promoted officer fired his weapon in three fatal shootings in the span of four years. Sergeant Dan Jones has been promoted twice, despite being found at fault in at least four accidents. He's also Parent of the Year.

Jones was once disciplined for how he picked his child up from elementary school, according to his personnel file. In his squad car, Jones entered a drive marked “wrong way,” cut into line, failed to properly secure his child and then, at a pedestrian crossing, failed to stop for a student holding up her stop sign.

Despite seven reprimands, a suspension, a demotion, and a finding of neglect of duty, Todd Thayer was promoted from corporal to assistant chief in 2016 by Chief Windbigler, shortly after Windbigler took over the top spot in the department. Thayer's suspension involved officers taking suggestive photos of a woman waiting for a ride at the police station.

Another promoted officer shot and killed an unarmed man while serving a search warrant, and tasered a student at a local high school while acting as a school resource officer. Other members of the PD's supervisory team have used data terminals to "talk about white power," repeatedly switched recording devices off, thrown away property seized from people they've stopped, slept on the job, filed incomplete paperwork, and been involved in a large number of auto accidents and on-the-job shootings.

With these promotions, Chief Windbigler has made it clear he won't hold his officers accountable for their misdeeds. He's been in office for less than two years, but he's already shown he's not willing to mete out discipline.

This month, the city said two Elkhart police officers would be charged with misdemeanor battery after the Tribune requested video that showed them repeatedly punching a handcuffed man in the face. Windbigler had previously opted to limit the two officers’ discipline to reprimands. He told the oversight board they “just went a little overboard when they took him to the ground,” while making no mention of the punches thrown.

There's another level of oversight that may rein some of the worst cops in, but Chief Windbigler is actively avoiding its scrutiny. The Public Safety Board is supposed to be the disciplinary body handling misconduct cases, but Chief Windbigler isn't giving it anything to work with. As the article notes, previous police chiefs brought 20 cases a year to the PSB. Windbigler brought zero cases to the board during his first full year as chief. Since then, he has only brought eight. For all of this accountability-dodging, his officers voted the chief "Officer of the Year," despite the fact the honor is supposed to go to actual officers, not top PD brass.

The news only gets worse for Elkhart residents, who will be paying bad cops to oversee possibly worse cops. The mayor, Tim Neese, has decided to reform the Public Safety Board. Neese, whose son is an Elkhart police officer, will be dropping his two appointees and replacing them with more cops.

He said the board would be made up of five people — and all five would be police officers, including an assistant chief, a captain and an internal affairs lieutenant.

The mayor and police chief don't appear to care how much long-term damage they're doing to community relations and the police department itself. The Elkhart PD spent much of the early 90s defending itself in a long string of civil rights lawsuits that culminated in a study commissioned by the city that showed the department had a "reputation for brutality" and almost zero internal accountability. With these recent brass installations, it's the 90s all over again.

