Capitalist Lion Tamer’s Techdirt Profile


About Capitalist Lion Tamer, Techdirt Insider

List of blogs started with enthusiasm, which now mostly lie dormant:

[reserved for future use]

[recently retired]

[various side projects]

Posted on Techdirt - 8 February 2016 @ 3:34pm

Documents Show Chicago Cops Routinely Disabling Recording Equipment

from the deliberate-operator-'error' dept

When the dashcam footage of the shooting of Laquan McDonald was finally released by the city of Chicago, it was notably missing the audio. In fact, no surviving footage of the shooting contains any audio. It's 2016 and the Chicago PD is still producing silent films.

There's a reason for this. Turns out cops aren't fans of recordings. DNAinfo Chicago requested information on the police department's camera problems after the eerily soundless shooting video was released. The documents obtained showed the PD may have plenty of cameras, but they're rarely generating complete recordings… or in some cases, any recordings at all.

On the night Laquan McDonald was shot 16 times by a Chicago Police officer, at least three dashboard video cameras in squad cars at the scene didn't work. And the ones that did capture video did not record audio.
This complete failure was no statistical quirk.
In fact, 80 percent of the Chicago Police Department's 850 dashcam video systems don't record audio due to "operator error or in some cases intentional destruction" by officers, according to a review by the Police Department.

Additionally, about 12 percent of dashcams experience "video issues" on any given day due to "equipment or operator error," police spokesman Anthony Guglielmi said.
Cameras are only a part of the accountability equation. Putting them into use is a step forward, but if there's no accountability built into the process itself, this is the result. A mechanically inoperative camera is rarely going to be considered a problem by either the cops in control of it or the management overseeing them. And if officers feel more "comfortable" with less documentation of their activities, it doesn't take much to render the cameras useless.

The documentation obtained by DNAinfo makes it clear that the missing footage and recordings are anything but accidental. The following cannot be explained away by coincidence.
Additionally, only three of 22 Chicago Police-involved shooting investigations forwarded to the Cook County State’s Attorney’s Office from the Independent Police Review Authority this year included dashcam video evidence. And none of those videos included audio recordings, state’s attorney spokeswoman Sally Daly said.
Coincidence also can't explain the "errors" that led to the dearth of Laquan McDonald shooting footage.
The dashcam in police vehicle No. 8489, shared by officers Thomas Gaffney and Joseph McElligott the night of Laquan's shooting, recorded 37 “event videos” in October 2014, and had an operational dashcam the night of the shooting. But “due to disk error” no video was recorded at the shooting scene, according to police reports.


Police vehicle No. 8756 had a working dashcam that recorded 124 “event videos” in October 2014 without a single request for maintenance that month.

But on the night of Laquan's shooting, the vehicle assigned to Arturo Bacerra and Leticia Valez reportedly had a “power issue” and the dashcam was “not engaged.”
In both cases, equipment was inspected later and found to have no mechanical problems. And yet, mysterious malfunctions somehow presented themselves during this controversial incident -- an incident in which the surviving footage contradicted officers' reports.

So, even purely as an internal investigative tool, the "recordings" are mostly useless. Officers clearly don't want their superiors to see what they've been up to, much less the general public. DNAinfo's report on the epidemic of unusable or missing recordings was, unsurprisingly, greeted by the local police union as an unwarranted attack on the reputation of Chicago's finest.
The union president called the report and CPD's statement that the department will not tolerate officers maliciously damaging equipment "just more kicks to the morale and kicks to the people that are out there working every day."

"If there are individuals that are involved in purposefully damaging equipment, they will be cited for it," he said. "But, to cite someone because of a repair tag not being the most recent request for repair, I think that’s arbitrary and I think that’s part of the problem.”
The union president points to "thousands" of repair tickets and months-long waits for service as the real problem here. But his attempt to portray this as a hardware problem doesn't hold up when actual accountability measures are put in place.
“Supt. Escalante sent a very clear message and has held people accountable. And since we took that corrective action, we have seen a more than 70-percent increase in the amount of [video] uploads at the end of each tour … and that is being audited weekly with reports sent to the superintendent.”
If this were mostly a problem of non-functioning equipment and long waits for repairs, the amount of uploaded footage should have remained nearly unchanged, rather than jumping by more than 70 percent.

And the union president's statement would be more believable if similar tampering hadn't occurred at other police departments. This indicates that covering up wrongdoing is the prevailing mindset, rather than just the actions of a few rogue officers determined to thwart accountability at every turn.

Cameras can't fix officer accountability if no one's willing to hold them accountable for missing or incomplete recordings. The problem never seems to get fixed until it's been made public. When agencies are only interested in reacting to issues rather than trying to head them off, they play right into the hands of officers who prefer to perform public duties completely unobserved.


Posted on Techdirt - 8 February 2016 @ 2:09pm

Appeals Court Tells City It Can't Use Its Terribly-Written Zoning Laws To Censor Speech

from the the-aesthetic-value-of-shutting-someone-up dept

Here's a fun free speech win from the 4th Circuit Appeals Court. Well, it's at least a fun read, especially when the judges go after the city of Norfolk's highly-questionable claim that its completely inconsistent zoning statute isn't loaded with content-based restrictions.

First, though, a bit of background. Central Radio Company's Norfolk building was on the list of properties to be demolished by the city to make way for an expansion of Old Dominion University. To protest this plan, the company hung a large sign on the side of its building stating its opposition to eminent domain abuse.

It also protested the university's planned expansion by suing it, ultimately undoing the government's plan to demolish CRC's building.

The city, tipped off by an Old Dominion employee, decided to "investigate" the company's sign and, of course, found it to be in violation of city advertising statutes.

This prompted another lawsuit from the Central Radio Company, this time seeking to have the ordinance found unconstitutional. Unfortunately, it wasn't quite so lucky this time. The district court found the statute did not infringe on the company's First Amendment rights. The Fourth Circuit Court of Appeals agreed.

CRC petitioned the Supreme Court. Its timing was fortuitous. The Supreme Court had recently handed down a decision in a similar case (Reed v. Town of Gilbert). The decision reaffirmed that government entities cannot impose content-based restrictions without narrowly crafting the limitations to "further a compelling government interest."

The US Supreme Court booted the case back to the appeals court with instructions to apply its recent Reed decision. Taking this into consideration, the Appeals Court finds in favor of Central Radio Company and isn't too impressed with Norfolk's ill-advised attempt to censor content that didn't agree with its eminent domain plans.
Based on Reed, we hold that the City’s regulation was a content-based restriction of speech. The former sign code exempted governmental or religious flags and emblems, but applied to private and secular flags and emblems. In addition, it exempted “works of art” that “in no way identif[ied] or specifically relate[d] to a product or service,” but it applied to art that referenced a product or service. On its face, the former sign code was content-based because it applied or did not apply as a result of content, that is, “the topic discussed or the idea or message expressed.”
Because of the internal inconsistencies in the statute (which has since been rewritten), the government can't claim its restrictions aren't content-based. Those assertions have been undone by the city's inability to craft a coherent policy. The law was supposedly put in place to improve the city's aesthetics and cut down on distracted driving. According to the city of Norfolk, these two things were supposedly "compelling government interests." The court disagrees, finding it to be a badly-written law with severe Constitutional issues.
With respect to the City’s stated interest in preserving aesthetic appeal, for example, the flag of a private or secular organization was “no greater an eyesore” than the flag of a government or religion, id. (quoting City of Cincinnati v. Discovery Network, Inc., 507 U.S. 410, 425 (1993)), and works of art that referenced a product or service did not necessarily detract from the City’s physical appearance any more than other works of art. Yet, the former sign code allowed the unlimited proliferation of governmental and religious flags, as well as works of art that met the City’s dubious criterion, while sharply restricting the number and size of flags and art bearing other messages.


The City also has not shown that limiting the size and number of private and secular flags, as well as works of art that referenced products or services, was necessary to eliminate threats to traffic safety. There is no evidence in the record that secular flags were any more distracting than religious ones, or that a large work of art displaying a reference to a product threatened the safety of motorists any more than any other large, exempted pieces of artwork.
A workable, Constitutional policy wasn't handed down by the city until well after its original statute proved to be a problem. Because the policy has been altered since the filing of the suit in 2012, the court finds no need to issue an injunction. Even if the city wasn't directly trying to censor critical speech (although it certainly appeared to be doing exactly that), the statute was so badly written that it couldn't help but trip over itself. Worse, it put the government in the position of deciding what was or wasn't "approved" art, and implied that art and commerce were mutually exclusive expressions.

"Nominal damages" are on the way to the Central Radio Company, which managed to not only save the building where it has spent the last half-century from destruction, but managed to get a bad law rewritten in the process.


Posted on Techdirt - 8 February 2016 @ 11:43am

Bandai-Namco Blows Money On DRM Rather Than Fixing Its Terrible PC Port Of Tales Of Symphonia

from the to-nail-down-a-$20-game-that-was-cracked-within-hours dept

When console games are ported to the PC platform, the end result is often merely adequate. Some ports are amazing because the software developer actually knows and cares about the platform their game is being ported to. Others are just quick cash-ins, relying on name recognition to bring in sales the end product hasn't earned.

Some turn out well. Some turn out badly. And some are Tales of Symphonia, a twice-ported title that originally appeared on Nintendo's Gamecube back in 2004. Tales has landed on PC with all the grace of a limbless cat with an inner ear disorder.

Here's a NeoGAF forum member's list of everything that's wrong with the port.

The game's resolution is internally locked at 720p, no matter what resolution you choose.

The different languages are broken, since they used the wrong font and some words don't even show up. And some things haven't even been translated into other languages.

The game is locked at 30fps.

It has new typos.

It still partially uses PS3 button controls.

Random crashes (including when using alt-tab to switch programs).

Only 6 save slots.

Opening the config and save menu can take 30 seconds to load.
Then there's this:
It uses a DRM called VMProtect that creates a new *.exe every time the game starts.
How cool is that. Every time the game is played, the DRM dumps another .exe on the user's hard drive. Why? Because DRM is stupid. In this case, the DRM runs the whole game in a "virtual machine with non-standard architecture." Sure, storage is cheap and no one's really in danger of filling up their drives with "fake" .exes, but is that the gold standard of DRM? One that creates its own bloatware while you play?

And why is the DRM even needed? Namco-Bandai is utilizing top-of-the-line DRM for a PC port of an eleven-year-old game that's selling for $20. Now, it has a lot of pissed off PC gamers on its hands, wondering why they were handed a fourth-rate piece of crap, rather than a port that shows the manufacturer cares for its games or its customers. A game with this many problems doesn't need DRM weighing it down (and shedding .exes every time the program is accessed).

Game modder Peter Thoman, in his review for PC Gamer, absolutely nails how effed-up Namco-Bandai's priorities are.
Namco-Bandai cannot afford even the very minimal changes required to support arbitrary resolutions or superficially QA their product, but they can afford a completely ineffective DRM system. An ineffective DRM system for a game which people, if they were so inclined, have been able to pirate freely for over a decade.

That is apparently the quality of the decision making processes within this company. Their fans—and PC gamers—deserve better.


Posted on Techdirt - 8 February 2016 @ 3:22am

UK Investigative Agencies Want To Be Able To Send Warrants To US Companies

from the lots-of-'solutions,'-all-of-them-terrible-in-different-ways dept

Because citizens are localized but their data isn't, things aren't going to get any less weird as time progresses. Or any less legally troublesome. Ellen Nakashima and Andrea Peterson of the Washington Post have seen a copy of a draft negotiating document between UK and US representatives that would allow MI5 (and presumably other agencies) to access data and communications held on US servers.

The transatlantic allies have quietly begun negotiations this month on an agreement that would enable the British government to serve wiretap orders directly on U.S. communication firms for live intercepts in criminal and national security investigations involving its own citizens. Britain would also be able to serve orders to obtain stored data, such as emails.
UK agencies would still be locked out of obtaining information or data on US persons and it would take legislation to actually make this access a reality, but it's apparently being considered, as UK officials feel this issue is standing in the way of investigations/counterterrorism efforts.

As it stands now, UK agencies must make formal diplomatic requests which rely on a Mutual Legal Assistance Treaty -- a process that can take months. That's not good enough, apparently. Everyone wants instant access, including UK agencies, and a strong streak of entitlement (the same entitlement guiding FBI director James Comey's one-sided "debate" on encryption) runs through the arguments for this expansion of the UK's legal powers.
“Why should they have to do that?” said the administration official. “Why can’t they investigate crimes in the U.K., involving U.K. nationals under their own laws, regardless of the fact that the data happens to be on a server overseas?”
Why indeed? Why comply with existing laws or territorial restrictions? After all, the FBI is working toward the same end, pushing for the right to hack servers located anywhere in the world when pursuing criminals.

Several issues need to be addressed before UK agencies can be granted permission to demand communications and data from US companies. For one thing, a warrant issued in the UK is not exactly the same thing as a warrant issued in the US. The legal standards may be similar, but they're still a long way from identical.
The negotiating text was silent on the legal standard the British government must meet to obtain a wiretap order or a search warrant for stored data. Its system does not require a judge to approve search and wiretap warrants for surveillance based on probable cause, as is done in the United States. Instead, the home secretary, who oversees police and internal affairs, approves the warrant if that cabinet member finds that it is “necessary” for national security or to prevent serious crime and that it is “proportionate” to the intrusion.
Note the "silence" on the differences between the legal standards. It appears no one involved in this discussion is interested in digging into these disparities.
A second administration official said that U.S. officials have concluded that Britain “already [has] strong substantive and procedural protections for privacy.” He added: “They may not be word for word exactly what ours are, but they are equivalent in the sense of being robust protections.”

As a result, he said, Britain’s legal standards are not at issue in the talks. “We are not weighing into legal process standards in the U.K., no more than we would want the U.K. to weigh in on what our orders look like,” he said.
That's great. Both countries won't examine each other's legal standards because they don't want to upset the reciprocity implicit in the draft agreement. The UK can ask for stuff from US companies and vice versa, with neither country playing by the other country's rules. In between all of this are the citizens of each respective country, whose data and communications might be subjected to varying legal standards -- based not on where the data is held, but on who's asking for it.

Of course, the alternatives are just as problematic. If an agreement like this fails to cohere, overseas governments will likely demand data and communications generated by their citizens be stored locally, where they would be subject only to local standards.

Then there's the question of what information these agencies already have access to, thanks to the surveillance partnership between the NSA and GCHQ. Although neither agency is supposed to be focused on domestic surveillance (although both participate in this to some extent), the NSA is allowed to "tip" domestic data to the FBI for law enforcement purposes. Presumably, GCHQ can do the same with MI5. The tipped info may not be as comprehensive as what could be obtained by approaching a provider directly, but it's certainly more than the black hole the current situation is being portrayed as. (Especially considering GCHQ already has permission to break into any computer system located anywhere in the world...)

No matter what conclusion the parties come to, legislation addressing it is likely still several months away, if it ever coheres at all. Congress -- despite its occasional lapses into terrorist-related idiocy -- is likely not interested in subjecting US companies to foreign laws, no matter the stated reason for doing so. But if it doesn't oblige the UK (and others who will jump on the all-access bandwagon), it's safe to assume the British government will move toward forcing US companies to set up local servers and segregate communications and data by country of origin.


Posted on Techdirt - 5 February 2016 @ 12:46pm

Prosecutors Argue Cell Site Location Data Is Something Every User Shares With 'The Rest Of The World'

from the no-expectation-of-privacy-in-things-we-insist-everyone-knows dept

The state of Maryland's defense of the Baltimore PD's warrantless use of Stingray devices continues, taking the form of a series of motions unofficially titled Things People Should Know About Their Cell Phones.

The last brief it filed in this criminal prosecution claimed "everyone knows" phones generate location data, therefore there's no expectation of privacy in this information. As commenters pointed out, people may know lots of stuff about records they're generating, but that doesn't mean law enforcement should have warrantless access to those records.

Everyone Knows… That my Doctors generate medical data about patients, so how about we get their medical records on public display without warrants!
With no expectation of privacy, there's no need for a warrant. And with no warrant requirement, there's no chance of having evidence tossed. That's a win Maryland needs, considering the Baltimore PD alone has deployed IMSI catchers several thousand times without obtaining warrants. Everything runs through pen register orders, which both lower the burden of proof and (in many cases) obscure the technology actually being used.

Now, it's back with its response to the defendant's motion to dismiss and it's again claiming People Know Stuff, therefore no expectation of privacy. (h/t Brad Heath)

After dismissing the defendant's arguments about police use of location tracking devices as "dystopian fantasies," the state argues it's time for the accused (not just this one, but any others facing prosecutions predicated on warrantless cell phone tracking device usage) to stop pretending they don't know how much data their phones are coughing up.
While cell phones are ubiquitous, they all come with "off" switches. If a cell phone is turned on, it is receiving signals from cell towers, and sending signals back out to cell towers. The cell site simulator used in this case took advantage of that fact in order to locate Andrews's phone. Because Andrews chose to keep his cell phone on, he was voluntarily sharing the location of his cell phone with third parties. Under the doctrine set forth by the Supreme Court in Smith, supra, he cannot claim a Fourth Amendment privacy right in this case.
The "Smith" the state refers to is 1979's Smith v. Maryland, which law enforcement loves to use in cell phone surveillance cases, because:

a) it's incredibly outdated, and
b) it provides a very broad and favorable reading of the Third Party Doctrine as it relates to phone usage.

The state says it's the defendant's own fault he was located. After all, he had a choice. And he chose badly.
Andrews complains that the police "invaded" a "constitutionally protected area," and therefore this search triggered Fourth Amendment protections under United States v. Karo, 468 U.S. 705 (1984) and Kyllo v. United States, 533 U.S. 27 (2001). But in Karo, the suspect was unaware that he had brought a police transponder into his home, and in Kyllo, the suspect was unable to prevent grow-lights (or his body) from emitting heat. Andrews, by contrast, was quite aware that he was bringing his own cell phone into the house. And he was quite capable of turning it off.
The government's argument is technically solid when read in conjunction with these precedent-setting decisions (Smith's outdated view of phones notwithstanding), but it becomes completely disingenuous when it describes the "sharing" of identifying phone data.
Just as the telephone company in Smith used transmitted phone numbers in a way quite distinct from the way in which the police used them, so, too, Andrews's cell service provider used the ID number broadcast by his cell phone in ways quite distinct from the way in which the police used it. The way in which the information was used does not alter the "expectation of privacy" in the information itself. Smith controls here. Andrews's addition of the adjective "exact" to the noun "location" does not alter that fact. The issue is not whether Andrews was aware that the police could find the location of his cell phone to within 20 yards. The issue is whether Andrews can claim an objectively reasonable expectation of privacy in information which he was voluntarily broadcasting to third parties at all times. Under Smith, the answer is no.

There is no Fourth Amendment right to evade a valid arrest warrant. Andrews was wanted on multiple counts of attempted murder. A life "on the lam" may require some inconveniences, such as not staying in one's home, and turning one's cell phone off when not in use. There is no constitutional right to avoid being arrested for one's crimes, and nothing unreasonable about the police using the same information that Andrews was sharing with the rest of the world to apprehend him.
The "rest of the world"? Really? Andrews may have been able to talk his cell phone provider into turning over a copy of all the data his phone had generated, but it's not as though the general public has access to this information, expectation of privacy or no. Just because law enforcement can access this information with warrants or (more likely) pen register orders does not make it information "shared" with "the rest of the world." It is not shared indiscriminately and it's only because cell providers are legally compelled to cooperate with law enforcement (CALEA, etc.) that cops can obtain this information with a pen register order, rather than a warrant.

And, in this case, the information was not obtained with a court order. There may be a court order on record that would give the impression the BPD would approach a telco for phone records, but the actual collection of Andrews' location info was done with a Hailstorm cell tower spoofer. The state claims the request specified the use of a cell tower spoofer but there's no indication the presiding judge had any idea how much information these devices can obtain. A pen register order refers to a targeted phone number. A cell tower simulator gathers information from everyone in the area.

This isn't just a fight over this particular prosecution. This is the state safeguarding its thousands of Stingray deployments. If it's going to be able to keep those prosecutions from falling apart -- now that the BPD's devices are an open secret -- it needs the court to agree there's no expectation of privacy in cell phone location data. And in order to do that, it apparently needs the court to believe everyone using a cell phone is sharing all sorts of information with "the rest of the world."


Posted on Techdirt - 5 February 2016 @ 11:39am

Another Cop Treats Sexting Teens Like Child Pornographers

from the teen-would-have-been-better-off-engaging-in-sexual-activity dept

More sexting stupidity, this time in Michigan.

A Three Rivers, Michigan, teenager is both the victim and perpetrator of a sex crime. He might land on the sex offender registry, and face criminal charges, all because he took an inappropriate photo—of himself.

The boy is unnamed in local news reports, which note that he is under 15 years of age. He allegedly took a nude photo of himself on a girl’s cell phone. That girl sent the picture to another girl, who sent it to another. Preliminary charges are pending for all three—the boy was charged with manufacturing child porn, and the girls with distributing it. A prosecutor is still weighing whether to pursue the charges.
Hopefully, the prosecutor will realize that pursuing the suggested charges could ruin a few teens' lives. The police detective working the case seems to want to destroy these kids' lives… for the good of other teens, or something.
Police Detective Mike Mohney said sexting is a serious crime because it leads to “bullying,” and “real severe things like people committing suicide or violent crimes against others because they're so embarrassed about it.”
As Reason's Robby Soave points out, Detective Mohney is a walking contradiction. Apparently, it's never occurred to him that bringing child porn charges against these young teens might result in bullying and suicide. Nothing makes the future look dim and hopeless like a long stint on the sex offender registry. Nothing destroys someone's reputation faster than being listed alongside criminals who manufactured actual child porn, rather than just took a photo of their own adolescent body.

For that matter, the preliminary charges make this teen's decision to photograph his own body and send it to another teen a far worse crime than if he'd simply shown up at the girl's house, stripped off his clothes and proceeded to engage in sexual activity with her.

Taking off his clothes at her house would have been nothing more than indecent exposure, a misdemeanor. More importantly, unless the person has been convicted of other sex-related crimes, there's no sex offender registration tied to the charge.

Even if he'd pursued sexual contact with the other teen, it still would have been a better outcome than being branded a child pornographer. Michigan has no "Romeo and Juliet" law, so any contact between teens -- no matter their closeness in age -- could trigger statutory rape charges. (Obviously, if the sexual activity was not consensual, this would be actual rape, but there's no reason to believe a [possibly] unsolicited naked photo rises to the level of aggravated sexual assault.)

If the activity was consensual, the worst charge would be statutory rape, which does not require sex offender registration for teens.
[P]eople who are convicted of criminal sexual conduct based on consensual sexual conduct with children over the age of 13 who are not more than four years older than their victims are not required to register.
And, if the sexual contact contained no penetration, no criminal charges would be brought at all.
[A] 17-year-old who engages in consensual petting with a 14-year-old could not be prosecuted for a crime. However, if the parties engaged in oral sex, the 17-year-old could face prosecution.
So, this so-very-concerned detective has taken a digital photo -- taken by a teen of his own body -- and turned it into something worse than actual in-person nudity and/or sexual contact. That's a pretty fucked up way to show concern for sexting teens. Treating photos taken by minors and distributed to other minors as child porn is the worst possible way to handle a situation that, in all reality, should be left to the discretion of the teens' parents.


Posted on Techdirt - 5 February 2016 @ 10:32am

Enigma Software Decides The Best Way To Deal With A Negative Review Is To Sue The Reviewer

from the ungracious,-ESPECIALLY-in-defeat dept

Nothing pushes a negative review of your product out of the public eye faster than a lawsuit, am I right? That's the line of thinking Enigma Software has chosen to entertain. It recently filed a lawsuit against BleepingComputer, alleging that its 2014 "review" (actually a forum post detailing the history of Enigma's SpyHunter as "rogue" software and the deceptive business practices the company has deployed) is defamatory.

What would seem to be a mixture of opinion and fact-based assumptions (backed by links to other sources) is portrayed by Enigma as a malicious attempt by BleepingComputer to damage its reputation so the site can push readers to affiliate partners and advertisers.

Enigma Software claims in its lawsuit that BleepingComputer posted the negative SpyHunter review because it takes part in an affiliate advertising program which grants BleepingComputer a commission for redirecting users to Malwarebytes' site. The Enigma Software Group claims, “Bleeping not only has unlawfully benefited from its smear campaign to the detriment of ESG, it has damaged the reputation of ESG by refusing to take down its false and misleading statements which have been reposted numerous times on other anti-spyware related forums and websites.”
Other computer security sites have already leapt to BleepingComputer's defense. Malwarebytes has donated $5,000 to the site's legal fees and points out that BleepingComputer is not some fly-by-night operation that solely acts as a funnel to preferred vendors.
The content is provided by the volunteer efforts of security professionals and the more than 700,000 registered users who ask and answer all questions presented on the site. To summarize, Bleeping Computer is a valuable resource in the efforts to help users live in a malware free world.
Over at CSO's Salted Hash, Steve Ragan points out the reputation Enigma claims BleepingComputer is destroying has already been severely damaged by the company's own actions over the years.
[T]he lawsuit says, "Bleeping has a direct financial interest in driving traffic and sales to Malwarebytes and driving traffic and sales away from ESG."

While that claim is true at face value, the affiliate programs used by Bleeping Computer help keep the website online and they use affiliate links for a number of vendors, not just Malwarebytes.

Also, most of the comments that are critical of Enigma Software and SpyHunter exist because the company has gained a bad reputation over the years due to spam, as well as questionable detection rates.
Ragan then runs down Enigma's history, including the high number of refunds it's had to hand out to maintain its A+ BBB rating, as well as the years it spent being blacklisted as a security risk by respected anti-virus firms.

He also notes, as BleepingComputer did in its disputed forum post, that SpyHunter has never been classified as malware or targeted for removal by competing anti-virus products, but that's apparently largely due to Enigma's past litigious efforts, rather than Enigma dropping the more questionable "features" of its product -- like automatic renewals, suspicious scan results and its "pay-to-clean" pricing. (The scan is free. The removal requires a six-month subscription, which will be automatically renewed by Enigma in perpetuity unless otherwise instructed.)

The lawsuit is already off on the wrong foot, what with it clearly being filed solely to shut down criticism. While Enigma may find New York's lack of a universal anti-SLAPP statute useful (the current version only protects speech related to the discussion of public permits, and even then, it only protects certain people [bloggers, non-traditional journalists] from SLAPP lawsuits brought by government entities), it's now facing Marc Randazza, who has taken up BleepingComputer's defense.

Adding to this is the fact that the specific statements Enigma claims are false and defamatory aren't even directly quoted from the posted review. They're rephrased to put words in the mouth of the forum moderator who posted it. This low-level deception might have made sense if Enigma hadn't included a screenshot of the post it's misquoting as an exhibit in the filing.

Here are Enigma's claims, followed by the actual wording used by BleepingComputer.
In these posts, Bleeping makes the following assertions falsely and without any reasonable basis to believe that the statements were true when made:

ES: That SpyHunter 4 or ESG engage in "deceptive advertising which violates several consumer protection laws in many states";
[The "quoted" statement does not actually appear in this post, or in any of the ones following it in the thread.]
ES: That SpyHunter 4 or ESG has a "history of employing aggressive and deceptive advertising";
BC: SpyHunter by Enigma Software Group USA, LLC is a program that was previously listed as a rogue product on the Rogue/Suspect Anti-Spyware Products List because of the company's history of employing aggressive and deceptive advertising.
[This claim is backed up by a footnote linking to an outside source that reinforces BC's claim.]
ES: That SpyHunter 4 is a "rogue product";
BC: SpyHunter by Enigma Software Group USA, LLC is a program that was previously listed as a rogue product on the Rogue/Suspect Anti-Spyware Products List…

BC: SpyHunter is not classified as malware or rogue security software and other antivirus and antimalware vendors do not target it for removal.
ES: That SpyHunter 4 or ESG have not cooperated in submitting their program for testing "most likely due to the program's ineffectiveness and high rate of false positives?";
[Again, this "quoted" phrase does not appear in the post, or in any of the moderator's posts in the same thread. The moderator notes it has not been tested by other AV firms to determine its effectiveness, but does not make any related claim about false positives or ineffectiveness. The closest thing to it is this sentence, which is clearly an opinion.]
In my opinion SpyHunter is a dubious program with a high rate of false positives.
[This is backed up by a link to supporting information from an outside source.]
ES: That SpyHunter 4 or ESG engage in deceptive pricing;
BC: While there are mixed reviews for SpyHunter, some good and some bad, my main concern is the reports by customers of deceptive pricing, continued demands for payment after requesting a refund, lack of adequate customer support, removal (uninstall) problems and various other issues with their computer as a result of using this product. For example, some users are not aware that when purchasing SpyHunter, they have agreed to a subscription service with an automatic renewal policy.
[Again, these statements are supported by links to information sources. The addition of "my main concern" clearly shows the moderator is making a statement of opinion based on available information. And the connecting phrase "reports by customers" makes it clear he's making an inference based on statements by others.]
ES: That most users of SpyHunter 4 "are not aware that when purchasing SpyHunter, they have agreed to a subscription service with an automatic renewal policy"; and
[See the above quote and note, again, that multiple links in the review direct readers to outside sites backing up this statement, like the numerous complaints about this practice found at ComplaintsBoard and the Better Business Bureau.]
ES: That SpyHunter 4 is "malware" or "rogue security software" despite not being classified as such by security vendors.
BC: SpyHunter by Enigma Software Group USA, LLC is a program that was previously listed as a rogue product…

BC: SpyHunter is not classified as malware or rogue security software and other antivirus and antimalware vendors do not target it for removal.
[These two directly contradict the assertion being made by Enigma in its lawsuit. The author of the post never states that SpyHunter is "malware" or "rogue security software."]

Enigma doesn't have much of a case. But it has just enough of one to be troublesome. It's forced others to bend to its will in the past by aggressively litigating, and it can drain BleepingComputer of time, energy and money just by forcing it to defend itself from ridiculous claims.


Posted on Techdirt - 5 February 2016 @ 6:23am

TV Station Educates Public On Dangers Of Teen Sexting By Exposing 14-Year-Old's Name... And Penis

from the an-hero dept

According to a recently-filed lawsuit, the media is apparently every bit as "helpful" as law enforcement when it comes to the responsible, logical handling of teens and sexting. Confusing "hurting" with "helping," Colorado's KOAA allegedly exposed not only the name of a teen involved in a sexting incident, but also the part that puts the "sex" in "sexting."

The station, KOAA TV, aired footage of the boy’s erect penis during a news report that was put together after his father’s girlfriend approached producers about an alleged blackmail attempt, according to a complaint filed Friday in U.S. District Court.

Producers were told on Feb. 24 by the woman that someone had tried to blackmail the teen, now 16, using sexually explicit material. That same day they arrived at the family house in Pueblo, Colorado to investigate the claims and interview the boy’s father, Elijah Holden. While on assignment, the suit alleges that the news team collected screenshots from the teen’s Facebook page, as well as images from the YouTube page where the blackmail video had been uploaded, to be used in their coverage.

The plaintiff and his father both asked that the name “be kept confidential through any report presented by Defendant KOAA,” attorney Matthew Schneider said in the filing.
Since law enforcement largely seems to feel sexting = child porn, the station should have found itself under investigation for distributing child porn. Instead, the only negative result of its allegedly terrible editorial practices so far is Holden's lawsuit.

Holden is seeking damages over the broadcast of his son's name and genitals, with the amount sought clearing the $1 million mark. In its defense, the station had this to say:
“Through a series of stories during the last several years, KOAA has informed its viewers about the dangers of sexting and cell phone security,” KOAA president and general manager Evan Pappas said in a statement to Courthouse News, where the suit was first reported on Tuesday this week. “At the specific request of the victim’s father, we ran a story two years ago about his son being blackmailed over a cellphone video.”
Well, I guess nothing better illustrates the dangers of sexting more than irresponsibly splashing a minor's name and penis all over the TV screen. Of course, considering these were tied to blackmail allegations by an adult, it would seem more -- much more -- discretion would have been in order. Instead, the TV station went the other way, displaying the name of the minor involved over a screen cap of his penis and topped it off by dragging his social circle into the mess.

The station claims the allegations are unsubstantiated, but there's really no excuse for using a minor's name -- even if the guardian gave permission to the news outlet to do so. But going past that, how does the station hope to explain its use of an explicit photo of a minor in a publicly-broadcast news report? According to the lawsuit, something that could be considered child pornography somehow made its way past internal censors and ended up on the evening news.
Defendant KOAA aired the thumbnail image of the YouTube video depicting Plaintiff's erect penis and his name as a part of the story shown on February 24th 2014.
While journalists have played an important part in exposing ridiculous prosecutions of sexting teens, there's no denying the lurid nature of the subject matter is also beneficial to the entities covering the stories. The implicit suggestion that YOUNG NAKED TEENS lie just beyond the next commercial break attracts additional viewers. This additional motivator might explain the apparent lack of discretion on the part of KOAA.

As of now, what we have is a news agency that claims it broadcasts these stories to educate the public on the dangers of sexting while apparently feeling compelled to drive that point home through its own actions.


Posted on Techdirt - 5 February 2016 @ 3:21am

Software Company Asks Users For Input On DRM; Goes Ahead And Institutes It Anyway Over Their Objections

from the by-'listening,'-we-meant-nodding-thoughtfully-while-moving-forward-with dept

Nothing does more damage more quickly to your community than deciding to place your fear of piracy over the concerns of those who've already paid for your product. DRM is rarely, if ever, the answer. And yet, it remains an inexplicably popular "solution."

Daz 3D, which produces 3D art software as well as assets for use with third-party software, has decided to do something about its perceived piracy problem. Last November, it had this to say:

[W]e feel the best way to fight piracy is make the convenience of doing something legally more so than the inconvenience of pirating. That is why we made finding, downloading, installing, and loading content in Studio as streamlined and easy as possible while making getting a pirate-able copy of the original product harder.
The solution to the problem, according to Daz 3D, was to have the software "phone home" at least once to obtain a key for content/software files, which would only arrive in encrypted form. Supposedly, this would be limited to once per computer but the new, encrypted files would pose problems for existing users.

Those on older versions of Daz's software would be unable to access any new content. Transferring old data could also result in problems -- something Daz acknowledged in a later post, noting that scripts and tools might not work with unencrypted content.

At the time of the announcement, no plan was in place to provide offline users with authentication keys, nor would it be possible to purchase new content without running through Daz's "Connect" service, which not only authenticates users but "assembles" newly purchased content for use with Daz software.
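Daz hasn't published the details of its scheme, but the flow described above is a familiar pattern: content ships encrypted, and the client phones home once per machine to fetch a decryption key. Here's a minimal sketch of that pattern, with every name hypothetical and a toy XOR keystream standing in for real encryption -- an illustration of the flow, not Daz's actual design:

```python
import hashlib
from itertools import cycle

# Stand-in "key server": in the real flow this would be a network call
# to the vendor's authentication service, tied to the user's account.
SERVER_MASTER_SECRET = b"vendor-master-secret"

def issue_content_key(machine_id: str, product_id: str) -> bytes:
    """Phone-home step: server derives a per-machine, per-product key."""
    return hashlib.sha256(
        SERVER_MASTER_SECRET + machine_id.encode() + product_id.encode()
    ).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream standing in for real encryption (NOT secure)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Client side: purchased content arrives already encrypted for this
# machine, and is only usable after the one-time key fetch.
machine_id, product_id = "machine-42", "example-asset"
key = issue_content_key(machine_id, product_id)
encrypted_asset = xor_stream(b"OBJ mesh data ...", key)
assert xor_stream(encrypted_asset, key) == b"OBJ mesh data ..."

# A different machine derives a different key and gets garbage --
# which is the whole point of the scheme, and also the source of the
# offline-user and old-version compatibility complaints.
other_key = issue_content_key("machine-99", product_id)
assert xor_stream(encrypted_asset, other_key) != b"OBJ mesh data ..."
```

Note how the scheme only works if the key service stays reachable -- which is exactly why users demanded (and Daz later promised) an escrowed decryption utility.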

Daz did the right thing and put its proposals up for discussion. This generated dozens of pages of comments, many of which were from users opposed to the addition of DRM. Some were concerned about the Daz Connect DRM breaking content they'd already paid for. Others simply didn't like being treated like pirates when they'd actually paid for software and add-ons.

Daz's representatives were active in forum discussions and very straightforward about their reasons for looking into instituting DRM. The company is hoping a few extra installation hoops and another layer of authentication will deter casual pirates, leaving only the diehard crackers interested in "capturing" a niche "market."

The willingness to listen and participate in the discussion separates Daz from many other companies who've added DRM to their products. Unfortunately, it appears the discussion had little effect on Daz's final decision. The post may be titled "You've been heard," but the content contained in it indicates the listening was little more than a formality. Daz will be moving ahead with its original plan, despite customers making it clear they'd rather have a product that doesn't introduce compatibility problems. Nor do they want to be limited to a single distribution system. And they're less than thrilled about the "phone home" requirement.

The new post, delivered four months after the original announcement, changes nothing about the DRM structure. While it does add some fail-safe measures (like third-party escrow that will prevent users from being locked out of their purchases if Daz goes out of business), the end result is still the same. DRM is coming to Daz and there's nothing users can do about it.
Currently Daz Connect gives customers the ability to install (among other things) encrypted content. Daz Connect also lets customers retrieve a Key to decrypt their content. Customers have raised the concerns of:

What if Daz is not available to provide the keys anymore, chooses not to, or starts charging an additional fee to get a key for previously purchased content?

Solution: We have developed and fully tested a utility which will decrypt, and save in non-encrypted formats, Daz products on a customer’s computer. We are also working out details with a software escrow company who will provide this utility to the public free of charge in the event that Daz is no longer in a business position to, or is unwilling to continue offering this as a free service. This will also be added to the Daz EULA to ensure customers of our commitment to enable them to always be able to use content that they have purchased a license for.
Obviously this does not address other issues such as scripts and tools that work on un-encrypted content. But those are solved in other ways. We are working (and will continue to work) with developers who have this need, in order to show them how to do it with encrypted content.
Apparently, "hearing" actually means ignoring the concerns people expressed, including portability from older versions of Daz's software. And, as is nearly always the case when DRM is added to a previously DRM-free product, the company is presenting it as a win for paying customers.
Is the encryption associated with Daz Connect essentially Digital Rights Management (DRM)?

We strive to add great benefits to being connected while limiting the impact to the user experience. Although we have included file encryption to protect our artist community, the primary target is to provide a better experience for our users. Daz Connect delivers and updates products more efficiently but relies on the fact that files are in a location and format that is maintained by the application. In this sense, Daz Connect provides some measure of digital rights management.
So, Daz is thinking of its customers while simultaneously willing to ignore those customers to institute something it thinks will decrease piracy. While I can appreciate the fact Daz wants to protect its bottom line, it needs to be aware that instituting these new restrictions will result in actual lost sales -- something that may ultimately prove more harmful than the theoretical lost sales Daz attributes to piracy.


Posted on Techdirt - 4 February 2016 @ 11:41am

Russia Blocks Another Archive Site Because It Might Contain Old Pages About Drugs

from the block-bloc dept

The Russian block party continues. The government agency in charge of censoring the internet is still working its way backwards, hoping to erase the collective memories of the web… or at least, keep Russian citizens from seeing certain bits of the archived past.

Last summer, Russia blocked the Internet Archive's "Wayback Machine," an extremely useful tool that allows users to see historical snapshots of websites. The government may only have intended to block a single page, but because the Internet Archive utilizes HTTPS, the only practical way for ISPs to block the targeted pages was to block the entire domain.

The same thing is now happening to another useful tool that allows users to archive pages they feel might be altered or disappear altogether at some point in the future. (via Google Translate and an anonymous TD reader)

Roskomnadzor has entered the service into the registry of internet resources prohibited under the law of the Russian Federation.

On its site, the supervisory authority noted that the service was added to the registry by order of the Federal Drug Control Service on January 28, 2016.

The service continues to operate as usual, but it is no longer available to the customers of many Russian providers.
The problem here is Russia's take on the War on Drugs. Because it's illegal to discuss drug use/abuse/sales, Roskomnadzor has disappeared another archive that might contain copies of pages it's blocked in the past. That the service would be of use to Russian citizens for non-drug-related purposes appears to be of no concern to the Russian government.

And again, it's the use of HTTPS that's resulted in the entire site being blocked. Individual pages can't be singled out if the connection is encrypted. So, down goes the entire site and, of course, no one in the web censorship body seems to be bothered by the collateral damage.
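The technical bind is worth spelling out: with HTTPS, everything past the hostname is encrypted, so an on-path censor can match traffic only against the domain (which still leaks via DNS lookups and the TLS SNI field), never against an individual page. A small sketch of why a page-level blocking order collapses into a domain-level block (the URLs here are hypothetical):

```python
from urllib.parse import urlsplit

def visible_to_censor(url: str) -> str:
    """Return the part of a request an on-path observer can see.

    Plain HTTP exposes host and path in the clear; HTTPS leaks only
    the hostname (via DNS and the TLS SNI field), never the path.
    """
    p = urlsplit(url)
    return p.netloc + p.path if p.scheme == "http" else p.netloc

banned = "http://archive.example/snapshots/drug-page"
innocent = "http://archive.example/snapshots/cat-photos"

# Over HTTP the two requests look different on the wire, so the
# censor could block just the banned page...
assert visible_to_censor(banned) != visible_to_censor(innocent)

# ...but over HTTPS they are indistinguishable: both reduce to the
# bare hostname, so enforcing the single-page ban means blocking
# the entire domain.
https_banned = banned.replace("http://", "https://")
https_innocent = innocent.replace("http://", "https://")
assert (visible_to_censor(https_banned)
        == visible_to_censor(https_innocent)
        == "archive.example")
```

This is the same mechanism that took down the whole Wayback Machine in Russia last summer: the blocklist names a page, but the wire only shows the censor a domain.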


Posted on Techdirt - 4 February 2016 @ 9:25am

Napolitano Says She's Always Wanted To Talk About The Secret Surveillance She Hasn't Talked About Since Last August

from the it's-all-just-a-big,-opaque,-pitch-black,-secretive-misunderstanding! dept

A Techdirt reader has sent us a copy of former DHS head/current University of California President Janet Napolitano's official response to the outcry over the secret surveillance of UC staffers -- surveillance she personally approved.

Napolitano's letter to UC-Berkeley employees immediately ties the secretive surveillance implementation to the UCLA Medical Center cyberattack, just in case anyone (and it's a lot of anyones) feels the effort was unwarranted.

A group of faculty members at the Berkeley campus has articulated concerns regarding some of the security measures we adopted in the wake of the UCLA cyberattack last year. The concerns focus on two primary issues: whether systemwide cyber threat detection is necessary and whether it complies with the University’s Electronic Communications Policy (ECP); and why University administrators failed to publicly share information about our response to the cyberattack.
If your privacy is being compromised, the real villains here are the people behind the cyberattack. As for the secrecy surrounding it, Napolitano seems to indicate she'd like to discuss it, but immediately abandons that line of inquiry to blame disgruntled staffers and the media for misrepresenting her snooping initiative.
The Berkeley faculty members have shared their concerns with colleagues at other campuses and with various media outlets. Unfortunately, many have been left with the impression that a secret initiative to snoop on faculty activities is underway. Nothing could be further from the truth.
Please explain.
I attach a letter from Executive Vice President and Chief Operating Officer Nava explaining the rationale for these security measures.
Great, except that Nava's letter arrived five months after the program was implemented and two months after a university official said the program would be shut down -- a statement which itself preceded (by a month) the news that the program has actually been allowed to continue uninterrupted.

Napolitano claims there was no secrecy.
As you know, leadership at all levels, including The Regents, Academic Senate leadership, and campus leadership, has been kept apprised of these matters, including through the establishment and convening of the Cyber Risk Governance Committee (CRGC). The CRGC, comprises each campus’s Cyber Risk Responsible Executive (CRE), as well as a representative of the University’s faculty Senate, the General Counsel, and other individuals from this office with responsibility for systemwide cybersecurity initiatives.
Yes, look at all the people who were informed! And were apparently informed they could not pass this information on to anyone else!

From our earlier post on the subject -- directly from some of those on Napolitano's "approved" list.
UCOP would like these facts to remain secret. However, the tenured faculty on the JCCIT are in agreement that continued silence on our part would make us complicit in what we view as a serious violation of shared governance and a serious threat to the academic freedoms that the Berkeley campus has long cherished.


For many months UCOP required that our IT staff keep these facts secret from faculty and others on the Berkeley campus.
This assertion directly contradicts Napolitano's depiction of the events.
I have from the beginning directed my staff to make every effort to actively engage with all stakeholders and to minimize to the extent possible the amount of information that is not shared widely.
This seems highly unlikely, considering no one began publicly talking about this secret surveillance until just recently. If the information had been widely disseminated (as Napolitano claims she directed), the backlash would have begun months ago.

And, of course, Napolitano is all about that privacy.
Personal privacy and academic freedom are paramount in everything we do. But we cannot make good on our commitment to protect individual privacy without ensuring a sound cybersecurity infrastructure. While we have absolutely no interest in the content of any individual’s emails or browsing history, we must accept that active network monitoring is a critical element of a sound cybersecurity infrastructure and the interconnectedness of the University and all of its locations requires that such monitoring be coordinated centrally.
School officials -- at least those allowed to see email content/web browsing history -- may claim they have "no interest" in seeing it, but that doesn't change the fact that any of them can access it without fear of repercussion. Not only that, but a third party has access to this same data -- a third party Napolitano won't identify.

She closes her official "this is all fully justified because cyber" letter with the same assertion so many officials make when secret goings-on are dragged out into the sunlight: "I've always wanted to have this discussion I'm now being forced to have!"
I invite further robust discussion and debate on this topic at upcoming meetings of the CRGC and COC.
That's just disingenuous. Don't extend an invitation to a conversation you can no longer avoid.

As the TD reader who sent this over explains, they're not exactly thrilled the former DHS head is using a privacy breach to further undermine UC staffers' privacy.
This sort of thing, by the way, is exactly the reason that everyone had the "say what?" reaction when Napolitano was appointed. This is why people were concerned.

P.S. I'm one of the people whose information was compromised in the UCLA Med Center hack, and don't appreciate their screw-up then being used as an excuse to screw us over now.


Posted on Techdirt - 3 February 2016 @ 11:36am

Former DHS Boss Puts University Of California Employees Under Secret Surveillance

from the you-didn't-see-anything-so-you'd-better-not-say-anything dept

Former DHS boss Janet Napolitano -- who once stated she "doesn't use email" (for many reasons, but mainly to dodge accountability) -- is now showing her underlings at the University of California why they, too, might not want to "use email": someone might be reading them over their shoulders.

UC professor Christopher Newfield has the inside details of the recently-exposed monitoring system secretly deployed by the University of California (and approved by school president Napolitano) to keep tabs on the communications, web surfing and file routing of its employees. The SF Chronicle has an article on the secretly-installed spyware behind its paysieve [try this link], but Newfield has the internal communications.

The installation of the third-party monitoring software was so secretive that even the university's campus information technology committee was forbidden from discussing it with other staff. The committee has now decided to go public.

UCOP would like these facts to remain secret. However, the tenured faculty on the JCCIT are in agreement that continued silence on our part would make us complicit in what we view as a serious violation of shared governance and a serious threat to the academic freedoms that the Berkeley campus has long cherished.

Some salient facts:

- The UCOP had this hardware installed last summer.

- They did so over the objections of our campus IT and security experts.

- For many months UCOP required that our IT staff keep these facts secret from faculty and others on the Berkeley campus.

- The intrusive hardware is not under the control of local IT staff--it sends data on network activity to UCOP and to the vendor. Of what these data consists we do not know.

- The intrusive device is capable of capturing and analyzing all network traffic to and from the Berkeley campus, and has enough local storage to save over 30 days of *all* this data ("full packet capture"). This can be presumed to include your email, all the websites you visit, all the data you receive from off campus or data you send off campus.
The official excuse for the installation of intrusive spyware is "advanced persistent threats" possibly related to a cyberattack on the UCLA Medical Center last summer. How monitoring staff emails plays into the thwarting of "threats" hasn't been explained. Now that the secret's out, the university is claiming it's all good because policies prevent the university from using any intercepted information/communications for "nonsecurity purposes."

The university may have a policy forbidding this activity, but that's not really the same thing as guaranteeing abuse of this surveillance will never happen. Its belated not-an-apology offers no contrition for keeping this a secret from a majority of its staff. And the statement does not name the third party in charge of the collection and monitoring.

While it certainly isn't unusual for employers to monitor employees' use of company computers and devices, it's normally clearly stated in policy manuals, rather than installed surreptitiously and cloaked in deep secrecy.

As Newfield points out, no one was apprised of the monitoring until after it was underway. Some heard a few weeks after the monitoring was put in place (August of last year) when the university updated its security policies following the medical center breach. Many more heard nothing until the first week of December. Following the wider exposure, staffers were assured by the school's vice president that the monitoring would cease and the software would be removed.

The VP said one thing and the school did another.
On Jan. 12, 2016, The Berkeley Joint Committee on Campus Information Technology (JCCIT) met with Larry Conrad and others. The committee was informed that contrary to the Dec. 21, 2015 statements, UCOP had decided to continue the outside monitoring and not disclose any aspects of it to students or faculty.
At this point, the decision was made to go public. A letter was drafted and sent to school administration. It was also sent to the New York Times. This prompted the generation of bullshit from the Executive VP's office.
On Jan. 19, 2016, UCOP Exec. VP and COO Rachael Nava sent a letter to those who signed the Jan. 15, 2016 letter. The original version was marked "CONFIDENTIAL: DO NOT DISTRIBUTE" and invoked "Attorney-Client privilege". After several recipients responded to her via email questioning who is the client and why her letter must be kept secret, a revised version of the letter was sent the next day removing that language, stating: "All: Please accept my apologies with regard to the confusion on the attorney client privilege language on the letter. It was a clerical error and was not intentional. Please find a revised version of the letter with the language removed."
The full letter contains some truly incredible statements.
With respect to privacy, the letter and structure of the University’s Electronic Communications Policy (ECP) reflect the principle that privacy perishes in the absence of security. While the ECP establishes an expectation of privacy in an individual’s electronic communications transmitted using University systems, it tempers this expectation with the recognition that privacy requires a reasonable level of security to protect sensitive data from unauthorized access.
Privacy does not "perish" in the absence of security. This conflation of the two is ridiculous. If a malicious party accesses private communications, that's a security issue. If an employer accesses these communications, that's a privacy issue. Claiming to value privacy while secretly installing monitoring software (and then lying about removing said software) only serves to show the university cares for neither. By adding a third party to the monitoring process, the university has diminished the privacy protections of its staff and added an attack vector for "advanced persistent threats." It has effectively harmed both privacy and security and, yet, still hopes to claim it was necessary to sacrifice one for the other.

The other statement, tucked away as a footnote, absurdly and obnoxiously claims the real threat to privacy isn't the school, but people making public records requests.
Public Records Act requesters may seek far more intrusive access to the content of faculty or staff records than what the ECP permits for network security monitoring. The limits on the University’s own access to electronic communications under the ECP do not apply to Public Records Act requests.
Meanwhile, the school's tech committee has pointed out its IT staff is more than capable of handling the privacy and security of the network and, quite obviously, would show more respect for their colleagues' privacy while handling both ends of the privacy/security equation.

It's perfectly acceptable for entities to monitor employees' use of communications equipment. But you can't do it this way. You can't install the software secretly, swear certain employees to secrecy, not tell anyone else until the secret is out in the open, promise to roll it back and then secretly decide to do the opposite, etc. And when challenged, you can't play fast and loose with "security" and "privacy" as if they were both the same word spelled two different ways.

[Update: a TD reader has given us a copy of Janet Napolitano's response to the outcry over the school's secret surveillance efforts. A new post on that letter is on the way. If you'd like a head start, it's embedded below.]

Read More | 38 Comments | Leave a Comment..

Posted on Techdirt - 3 February 2016 @ 8:29am

Utah Politician Looking To Tackle Doxing, DoS Attacks And Swatting With New Slate Of Cybercrime Amendments

from the as-usual,-bill-has-arrived-in-completely-unfinished-state dept

Three of the Four Horsemen of the Internet Apocalypse (*Revenge Porn not included) are being targeted by Utah legislator David Lifferth with a package of amendments to the state's cybercrime statutes.

Utah Representative David E. Lifferth (R) has filed House Bill 225 which modifies the existing criminal code to include cyber crimes such as doxing, swatting and DoS (denial of service) attacks. According to the amendments, these crimes can now range anywhere from misdemeanors to second-degree felonies.
As is often the case when (relatively) new unpleasantness is greeted with new legislation, the initial move is an awkward attempt to bend the transgressions around existing laws, or vice versa. Lifferth's is no exception. As GamePolitics points out, only one of the new crimes is specifically referred to by its given name: DoS attacks. The other two can only be inferred by the wording, which is unfortunately broad.

Swatting becomes:
[making] a false report to an emergency response service, including a law enforcement dispatcher or a 911 emergency response service, or intentionally aids, abets, or causes a third party to make the false report, and the false report describes an ongoing emergency situation that as reported is causing or poses an imminent threat of causing serious bodily injury, serious physical injury, or death; and states that the emergency situation is occurring at a specified location.
It's the stab at doxing that fares the worst. In its present form, the wording would implicate a great deal of protected speech. This is the wording Lifferth would like to add to the "Electronic communication harassment" section.
electronically publishes, posts, or otherwise makes available personal identifying information in a public online site or forum.
Considering it's tied to "intent to annoy, alarm, intimidate, offend, abuse, threaten, harass, frighten, or disrupt the electronic communications of another," the amended statute could be read as making the publication of personal information by news outlets a criminal activity -- if the person whose information is exposed feels "offended" or "annoyed." Having your criminal activities detailed alongside personally identifiable information would certainly fall under these definitions, which could lead to the censorship (self- or otherwise) of police blotter postings, mugshot publication or identifying parties engaged in civil or criminal court proceedings.

It would also make "outing" an anonymous commenter/forum member/etc. a criminal act, even if the amount of information exposed never reaches the level of what one would commonly consider to be "doxing." Would simply exposing the name behind the avatar be enough to trigger possible criminal charges?

While it's inevitable that lawmakers will have to tangle with these issues eventually, it's disheartening to see initial efforts being routinely delivered in terrible -- and usually unconstitutional -- shape. We expect our legislators to be better than this. After all, it's their job to craft laws and to do so with some semblance of skill and common sense. If nothing else, we expect them to learn something from previous failures to pass bad laws, whether theirs or someone else's.

17 Comments | Leave a Comment..

Posted on Techdirt - 3 February 2016 @ 3:20am

Federal Judge Says The FBI Needs To Stop Playing Keepaway With Requested FOIA Processing Documents

from the the-first-rule-of-FOIA-processing-is... dept

Score one for the American public. A federal judge has reached the same conclusion many FOIA requesters have: the FBI simply doesn't play well with public records laws.

The FBI unlawfully and systematically obscured and refused to answer legitimate requests for information about how well it was complying with the Freedom of Information Act (Foia), a Washington, DC court found last week.

US district judge Randolph D Moss ruled in favor of MIT PhD student Ryan Shapiro, finding that the government was flouting Foia, a law intended to guarantee the public access to government records unless they fall into a protected category. Moss found that the FBI’s present policy is “fundamentally at odds with the statute”.
The 63-page opinion dives deep into the FOIA exemption weeds. Moss does grant the FBI a few of its motions for summary judgment, but on the whole, he finds the FBI's responses (or lack thereof) to several disputed FOIA requests to be unjustified.

The documents sought by Shapiro and his co-plaintiffs (Jeffrey Stein, Truthout, National Security Counselors) deal with the FBI's FOIA response procedures. These include "search slips," which detail the FBI's efforts to locate requested documents, case evaluations (which can give FOIA requesters some insight on the application of exemptions and search efforts made by individual staffers) and other processing notes. The FBI refused to part with any of these background documents if they pertained to other denied FOIA requests.

The FBI argued that most of what it withheld fell under "law enforcement techniques and procedures," which it feels are categorically excluded from disclosure, thanks to FOIA exemption 7(e). Of course, it all depends on which court it's making this assertion in, as the clause pertaining to this exception is punctuated badly.
would disclose techniques and procedures for law enforcement investigations or prosecutions, or would disclose guidelines for law enforcement investigations or prosecutions if such disclosure could reasonably be expected to risk circumvention of the law
In some districts, the courts have interpreted the wording to mean these records are exempt. In other districts, the courts have read the FOIA exemption clause as meaning these documents are only exempt if the FBI can offer evidence that releasing them might compromise national security or ongoing investigations.

Judge Moss' opinion agrees with the first interpretation. In doing so, he meets the FBI halfway, which is far further than the FBI has been willing to meet the suing FOIA requesters. Even with the additional slack, the FBI still isn't living up to FOIA standards.
Moss agreed that even if individual documents were protected by that Foia exemption, the entire categories of document the FBI withholds were emphatically not. “[The FBI] concedes that the vast majority of [the records in question] are not protected at all,” he wrote. “It is only arguing that by withholding all search slips, even those not protected by Foia, it can amass a haystack in which to hide the search slips that are protected [emphasis his].”

“[T]he FBI’s exercise of its statutory authority to exclude documents from Foia’s reach is not the kind of ‘technique’ or ‘procedure’” to which the necessary exemption refers, wrote Moss.
Moss is not the first DC District judge to order the FBI to explain its overuse of FOIA exemptions. In another FOIA lawsuit filed by Ryan Shapiro, Judge Rosemary Collyer found the FBI's lack of responsiveness and explanations to be problematic.
"(Shapiro) argues that FBI has not established that it actually conducted an investigation into criminal acts, specified the particular individual or incident that was the object of its investigation, adequately described the documents it is withholding under Exemption 7, or sufficiently connected the withheld documents to a specific statute that permits FBI to collect information and investigate crimes.

Mr. Shapiro further alleges that FBI has failed to state a rational basis for its investigation or connection to the withheld documents, which he describes as overly-generalized and not particular. On the latter point, the Court agrees."
I'm sure the FBI will challenge Judge Moss' order. It has no interest in providing additional documents to Ryan Shapiro as it's convinced the prolific FOIA filer will "trick" it into revealing stuff it doesn't want to with multiple, overlapping FOIA requests. The FBI's "mosaic theory" is being tested in court. With the claims it's made here, it clearly wants the court to reinterpret the letter of the law in its favor -- something that would move the agency even further away from the spirit of the law, which is exactly where it wants to be.

Read More | 23 Comments | Leave a Comment..

Posted on Techdirt - 2 February 2016 @ 9:33am

The NSA Lost In Court, So This DMCA Notice Is Totally Valid

from the not-how-any-of-this-works dept

The misuse of DMCA notices to remove unwanted information from the web has been well-documented here. The "right to be forgotten" has sort of codified this behavior, but only applies to citizens of certain countries.

James Kutsukos would like something removed -- a search warrant application hosted by the ACLU, which details a US Postal Service investigation that culminated in his conviction for marijuana distribution. It's easy to see why Kutsukos would want this removed:

It's far less simple to divine why the ACLU should feel compelled to remove it.

Kutsukos has his reasons.
Re: This needs to be taken off ASAP NOW THAT THE NSA LOST THEIR CASE


Explanation of complaint
this must be removed now.
The NSA hasn't "lost" any "cases," so far. I assume the "lost case" Kutsukos is referring to is Judge Leon's determination that the Section 215 bulk collection was unconstitutional (back in December of 2013). This would predate the April 1, 2014 timestamp on the takedown notice (which, for some reason, appears to have been received by the ACLU one year before Kutsukos sent it).

If so, then the decision had not been overturned by the Appeals Court yet, so it was technically still in the loss column. Even so, there's nothing about this that involves the NSA. The investigation was initiated by the US Postal Service and later involved the FBI.

The evidence obtained by the postal inspector consisted of text messages sent using Google Voice, which is not one of the providers implicated in the NSA's bulk collection efforts. (At least as far as we know... The phone metadata program [which also sweeps up other "business records"] targets telcos, not Google. Google's data is likely gathered under a different authority using a separate NSA collection program.)

So, it looks like either a misreading of Judge Leon's decision or -- as we've seen in other cases -- a sad attempt to intimidate a takedown recipient by throwing around government agency acronyms.

Either way, the document remains intact on the ACLU's servers and in Google's search results for Kutsukos, which lead off with a link to the affidavit.

And, because his woeful takedown attempt has been archived for posterity, Kutsukos is once again linked to a document he'd rather bury.

Read More | 10 Comments | Leave a Comment..

Posted on Techdirt - 1 February 2016 @ 8:36am

DHS Official Thinks People Should Have To Give Up Their Anonymity To Use The Internet

from the screwing-the-nation-for-the-good-of-the-nation dept

Apparently, the only way to stop terrorists from hating us for our freedom is to strip away those offensive freedoms.

Erik Barnett, the DHS's attache to the European Union, pitched some freedom-stripping ideas to a presumably more receptive audience via an article for a French policy magazine. Leveraging both the recent Paris attacks and the omnipresent law enforcement excuse for any bad idea -- child porn -- Barnett suggested victory in the War on Terror can be achieved by stripping internet users of their anonymity. You know, all of them, not just the terrorists.

After a short anecdote about a successful child porn prosecution in Europe, Barnett gets straight to the point. Here's Kieren McCarthy of The Register.

Before we have an opportunity to celebrate, however, Barnett jumps straight to terrorism. "How much of the potential jihadists' data should intelligence agencies or law enforcement be able to examine to protect citizenry from terrorist attack?", he poses. The answer, of course, is everything.

Then the pitch: "As the use of technology by human beings grows and we look at ethical and philosophical questions surrounding ownership of data and privacy interests, we must start to ask how much of the user's data is fair game for law enforcement to protect children from sexual abuse?"
In short, if you value internet-related freedoms, you're basically supporting terrorism and child porn. No person -- especially no legislator -- would want to be seen as valuing personal freedoms over the good of the nation's infrastructure/children. And, because terrible ideas must be buttressed by terrible analogies, Barnett theorizes that the internet is basically a car.
"When a person drives a car on a highway, he or she agrees to display a license plate. The license plate's identifiers are ignored most of the time by law enforcement [unless] the car is involved in a legal infraction or otherwise becomes a matter of public interest. Similarly, should not every individual be required to display a 'license plate' on the digital super-highway?"
To use the Fourth Amendment for a moment, a lowered expectation of privacy is in play when operating a vehicle on public roads. However, the Fourth Amendment affords a great deal of privacy to the interior of people's homes. Because the government (in most cases) does not provide internet access, it has no basis to demand ongoing access to citizens' internet activities. It may acquire this information (along with subscriber info) using search warrants and subpoenas during the course of investigations, but it cannot demand (or at least shouldn't) -- for national security reasons or otherwise -- that every internet user be immediately identifiable.

Discussions of requiring a license for internet usage have been raised previously but rarely go anywhere. To do so is to start heading down the path to totalitarianism. Unfortunately, being in a constant state of war against an ambiguous foe often results in legislators and government officials declaring their interest in seeing this path not only surveyed, but the first layer of asphalt applied.

Barnett is one of this number, and he wants a strawman to serve as construction foreman.
"Social media is used to generate support for terrorist groups ... How appropriate is the law enforcement engagement of the social media companies to reveal digital fingerprints of these extremist groups? Who determines the level of 'extremism' of a group? Few would disagree that law enforcement and intelligence services should have the ability..."
Actually, lots of people would disagree, starting with many citizens and running all the way up to their service providers. On top of that, the nation's courts would find the institution of a law that strips the anonymity of internet users to be unconstitutional, so that's another hurdle Barnett and like-minded officials would not be able to clear, no matter their stated justification.

60 Comments | Leave a Comment..

Posted on Techdirt - 1 February 2016 @ 6:34am

New Report Debunks FBI's 'Going Dark' FUD

from the it-only-seems-dark-because-you've-been-staring-at-the-sun dept

The way things are going, pretty soon FBI Director James Comey is going to be out there alone, flipping off light switches and blowing out candles, all the while cursing the going darkness.

A new report by Harvard's Berkman Center for Internet and Society debunks law enforcement's fearful statements about encroaching darkness. (h/t New York Times) As the report points out, there may be some pockets that are darker than others, but the forward march of technology means other areas are brighter than they've ever been. In particular, the growing Internet of Things is pretty much just the Internet of Confidential Informants.

Three trends in particular facilitate government access. First, many companies’ business models rely on access to user data. Second, products are increasingly being offered as services, and architectures have become more centralized through cloud computing and data centers. A service, which entails an ongoing relationship between vendor and user, lends itself much more to monitoring and control than a product, where a technology is purchased once and then used without further vendor interaction. Finally, the Internet of Things promises a new frontier for networking objects, machines, and environments in ways that we are just beginning to understand. When, say, a television has a microphone and a network connection, and is reprogrammable by its vendor, it could be used to listen in to one side of a telephone conversation taking place in its room – no matter how encrypted the telephone service itself might be. These forces are on a trajectory towards a future with more opportunities for surveillance.
On top of the additional opportunities for surveillance, there's encryption itself -- the supposed best friend of Public Enemy No. 1 -- which is far from the insurmountable obstacle Comey and others have presented it as. While some companies are offering encryption by default and others are specializing in secure communications apps and tools, this is still mostly in service to niche markets.
[C]ompanies typically wish to have unencumbered access to user data – with privacy assured through either restricting dissemination of identifiable customer information outside the boundaries of the company (and of governments, should they lawfully request the data). Implementing end-to-end encryption by default for all, or even most, user data streams would conflict with the advertising model and presumably curtail revenues.
Even Apple and Google -- the two companies that added encryption-by-default to their devices -- aren't interested in encrypting everything.
Google offers a number of features in its web-based services that require access to plaintext data, including full text search of documents and files stored in the cloud. In order for such features to work, Google must have access to the plaintext. While Apple says that it encrypts communications end-to-end in some apps it develops, the encryption does not extend to all of its services. This includes, in particular, the iCloud backup service, which conveniently enables users to recover their data from Apple servers. iCloud is enabled by default on Apple devices. Although Apple does encrypt iCloud backups, it holds the keys so that users who have lost everything are not left without recourse. So while the data may be protected from outside attackers, it is still capable of being decrypted by Apple.
In short, far more surveillance doors have been opened in the past decade than have been closed. As the authors point out, smart devices and online services have implemented voice commands, giving them the capability to record conversations far more private than those that might take place over other encrypted channels. As a case in point, the report notes the FBI exploited in-car microphones more than a decade ago, using a luxury auto "concierge" service to eavesdrop on conversations between organized crime members.

They also point out that encryption isn't always surveillance-proof. NSA officials have encouraged the use of encryption -- not just because it protects ordinary citizens from attacks, but also because the agency can crack some of it and can harvest tons of metadata no matter what form of encryption is in use. Not only that, but officials have admitted that the use of encryption "lights up" potential surveillance targets, making the agency's haystack trawling much more efficient.

Comey is the odd man out here, abandoned by the NSA, the administration and, with few exceptions, other law enforcement agencies. The solution isn't bans or backdoors. The solution is the exploitation of every new attack vector willingly created by social media apps, smart devices and the general interconnectedness of the world wide web. If he persists in this fashion, it won't be too long before he's considered no more credible than the ranting doomsayers who prowl city streets and subway platforms.

And let's not forget law enforcement agencies solved crimes and captured criminals for over two hundred years in this country -- and never found the lack of access to smartphone contents to be a hindrance.

Read More | 9 Comments | Leave a Comment..

Posted on Techdirt - 29 January 2016 @ 12:46pm

DOJ Agrees To Hand Over Document To EPIC, But Only Because The Document Has Already Been Made Public

from the damn-foreigners dept

EPIC is reporting that the DOJ has finally caved and is handing over a document it requested last fall. The document EPIC sought was the "Umbrella Agreement" between the US and Europe on the handling of each other's citizens' data.

On September 8, 2015, European and US officials announced that they have concluded an agreement, the so-called Umbrella Agreement, which is a framework for transatlantic data transfer between the US and the EU. The proposed goal of the Agreement is to provide data protection safeguards for personal information transferred between the EU and the US. Despite the announcements, neither US officials nor their European counterparts made the text of the Agreement public.
Two days after this announcement, EPIC filed expedited FOIA requests on both sides of the pond for the text of this agreement, arguing (logically) that the people this would affect had a right to know what their governments were agreeing to. EPIC specifically had concerns that the US would offer less protection to foreign citizens' data than to its own citizens, given that it has historically refused to extend these niceties to those residing elsewhere on the planet.

The DOJ has provided EPIC with a copy of the agreement. In doing so, it hopes to bring to an end EPIC's FOIA lawsuit against the agency. But the DOJ notes in the letter attached to the agreement that it's only doing so in the most begrudging fashion. If only its partners on the other side of the Atlantic hadn't blinked first…
After carefully reviewing the record responsive to your request, I have determined that, as a matter of discretion, this document may be released in full. While this record is likely subject to Exemption 5, which concerns certain inter- and intra-agency communications protected by the deliberative process privilege, given the fact that the European Commission has provided you with a copy of the record and is making the file publicly available on its website, I have determined to release the record as a matter of discretion.
That's the "most transparent administration" at work. The European Commission released the agreement on September 14, 2015 -- six days after the announcement. The DOJ, on the other hand, held out for nearly six months and is only releasing it because it's already in the public domain. And it's arguing that it should still be exempt as a "deliberative document" -- using the government's most-abused FOIA exemption -- even when another government body has determined the document deserves no such protection.

Read More | 4 Comments | Leave a Comment..

Posted on Techdirt - 28 January 2016 @ 11:23pm

Criminal Defendants Sue State Of Utah For Blowing Off The Sixth Amendment

from the the-best-lawyers-$0-can-buy dept

So much for those "inalienable rights." The Sixth Amendment -- among other things -- guarantees representation for criminal defendants. This guarantee has been declared null and void in two states: Utah and Pennsylvania.

The problem isn't that these states aren't willing to comply with both the Sixth Amendment and the Supreme Court's Gideon v. Wainwright decision. It's just that they're not going to spend any of their money doing it. In these states, funding for indigent defense is left up to local governments, with no additional support coming from the state level.

This causes problems for smaller locales, which often don't have the revenue to fully fund the legal defenders the accused are (supposedly) entitled to. But it's not just a matter of funding. It's also a matter of priorities. The state of Utah is currently being sued because of its unwillingness to ensure public defenders are properly funded. There's money available, but lawmakers have shown an unending willingness to only fund half of the criminal justice equation.

Utah is one of only two states nationwide that provide no state funding for indigent defense. It ranks 48th in the nation in per capita funding of indigent defense, according to the complaint.

Nor has the state set standards for contracted indigent defenders, or ensured that counties provide "constitutionally adequate" legal representation, the men say. Utah counties design and administer their own indigent defense programs.

Washington County uses fixed-price contracts to pay local attorneys for indigent defense, and budgeted $760,688 for indigent defense in this year. The county budgeted $2.8 million for prosecution this year, and the state has budgeted $18.6 million for criminal prosecution, and not a dime for defense.
The lawsuit points out that the lack of funding has hampered both plaintiffs' cases. For one of the two defendants bringing the suit, the lack of funding resulted in his public defender's contract not being renewed, basically leaving him without capable representation.
At the time this lawsuit was filed, Plaintiffs were being represented by public defenders. During that representation, Washington County did not renew the public defender contract for Mr. Paulus’ public defender which makes it impossible for him to continue to his currently scheduled trial date.

Mr. Paulus faces 25 years to life in the Utah State Prison if he is convicted of his crimes. Mr. Paulus had a previous public defender, Ed Flint, who had obtained a private investigator interview a number of witnesses, but Mr. Flint had not retained any expert witnesses because of the contract issues with the Defendants as it relates to the public defender system in Washington County. As of the date of the filing of this action, Aric Cramer, who had two contracts and subcontracted with Ed Flint, did not have these two contacts renewed. Mr. Cramer would also subcontract with Ariel Taylor. On information and belief, one of those contacts are still unfilled.

On information and belief, attorney Ariel Taylor has been awarded one of those two vacant contacts. However, Mr. Taylor has no knowledge or involvement in the Mr. Paulus’ case prior to the non-renewal of Mr. Flint and Mr. Cramer’s contracts.
And it's not just that defendants are in danger of losing lawyers familiar with their cases when contracts aren't renewed. Years of underfunding and neglect by local governments have led to an ad hoc public defense network that does little to ensure defendants receive competent assistance.
Defendants exercise no supervision over the county indigent defense programs. They have also failed to establish, require, or enforce any practice standards or guidelines to ensure that noncapital indigent defendants receive constitutionally adequate representation.

National standards pertaining to the administration and provision of indigent defense programs have been in existence for decades. State and local entities across the country have adopted many of these practice standards. Washington County has refused to do so.
The lawsuit is aiming for class status, which would draw in many other criminal defendants -- either imprisoned or still awaiting trial.

The numbers cited in the suit aren't anomalous. The complaint that defenders' offices are underfunded can be heard all over the nation. It's just that two states have further tipped the scales in favor of prosecutors by passing all costs on to local governments. And when there's a limited amount of money to spend, it plays better with voters to hand it to the law enforcement side, rather than a system that helps "guilty" people "escape" punishment. Yes, I'm aware our justice system is predicated on the presumption of innocence, but that's the ideal, not the prevailing perception.

A system that is routinely a travesty at best is a complete debacle in Utah, quite possibly to the point of being unconstitutional. But that's the way the accused are treated. The system prefers plea bargains to trials and convictions to exonerations by a large margin -- something that can be immediately confirmed by taking a glance at government balance sheets.

At best, this case will force the state to start funding indigent defense. But much more needs to be done before the system can be considered equitable.

Read More | 48 Comments | Leave a Comment..

Posted on Techdirt - 28 January 2016 @ 11:39am

Cops Getting Free License Plate Readers In Exchange For 25% Of The 'Take' And All The Driver Data Vigilant Can Slurp

from the an-equitable-partnership dept

What happens when you lower the barriers to entry? More participants join the market. It works everywhere, even when the market is "law enforcement" and the "customers" are everyone else.

Vigilant Solutions, one of the country’s largest brokers of vehicle surveillance technology, is offering a hell of a deal to law enforcement agencies in Texas: a whole suite of automated license plate reader (ALPR) equipment and access to the company’s massive databases and analytical tools—and it won’t cost the agency a dime.


Vigilant is leveraging H.B. 121, a new Texas law passed in 2015 that allows officers to install credit and debit card readers in their patrol vehicles to take payment on the spot for unpaid court fines, also known as capias warrants. When the law passed, Texas legislators argued that not only would it help local government with their budgets, it would also benefit the public and police.
Well, we can see how this will benefit law enforcement and others on the government food chain, but it's unclear how this will benefit the public. The bill's sponsor claimed the law would "relieve the burden" on drivers of having their vehicles impounded or being jailed over unpaid fines. But beyond those vague perks, the benefits seem to flow mostly in one direction.

The EFF quotes legal blogger Scott Henson of Grits for Breakfast, who speculated the combination of license plate readers and credit card readers would push cops towards chasing down unpaid fines rather than enforcing traffic laws or performing more routine patrol duties. If so -- and it appears to be the case -- this is exactly the outcome Vigilant was expecting. It didn't hand out its tech for free. There may be no price tag on the plate readers at the point of purchase, but that's only because Vigilant has points on the back end.
The “warrant redemption” program works like this. The agency gets no-cost license plate readers as well as free access to LEARN-NVLS, the ALPR data system Vigilant says contains more than 2.8-billion plate scans and is growing by more than 70 million scans a month. This also includes a wide variety of analytical and predictive software tools.

The government agency in turn gives Vigilant access to information about all its outstanding court fees, which the company then turns into a hot list to feed into the free ALPR systems. As police cars patrol the city, they ping on license plates associated with the fees. The officer then pulls the driver over and offers them a devil’s bargain: go to jail, or pay the original fine with an extra 25% processing fee tacked on, all of which goes to Vigilant.
To make this relationship even more explicit, officers who ticket parked vehicles rather than drivers leave a note instructing the vehicle's owner to visit Vigilant's website to pay the fine. On top of the 25% fee, Vigilant also gets to collect massive amounts of sweet, sweet driver data, which it can then sell to other law enforcement agencies (database access licenses) and private firms (insurance companies, repo men, etc.). And, if the locals seem understaffed, Vigilant is more than happy to pick up the slack.
In early December 2015, Vigilant issued a press release bragging that Guadalupe County had used the systems to collect on more than 4,500 warrants between April and December 2015. In January 2016, the City of Kyle signed an identical deal with Vigilant. Soon after, Guadalupe County upgraded the contract to allow Vigilant to dispatch its own contractors to collect on capias warrants.
As the EFF points out, this freemium service benefits Vigilant and law enforcement but does very little for the general public… including protecting them from Vigilant's inability to perform its job competently.
During the second week of December, as part of its Warrant Redemption Program, Vigilant Solutions sent several warrant notices – on behalf of our law enforcement partners – in error to citizens across the state of Texas. A technical error caused us to send warrant notices to the wrong recipients.

These types of mistakes are not acceptable and we deeply apologize to those who received the warrant correspondence in error and to our law enforcement customers.
Apologies are nice, if of limited utility, but…
[T]he company has not disclosed the extent of the error, how many people were affected, how much money was collected that shouldn’t have been, and what it’s doing to inform and make it up to the people affected.
As has been discussed here before, turning law enforcement agencies into revenue-focused entities is a bad idea. Case in point: asset forfeiture. Further case in point: speed trap towns. Improper incentives lead to improper behavior. Agencies may like the idea of a "free" license plate reader, but the price still has to be paid by someone -- and that "someone" is going to be the general public.

As priorities shift toward ensuring ongoing use of the "free" ALPRs, other criminal activity is likely to receive less law enforcement attention. Unpaid fines and fees are in law enforcement's wheelhouse, but they should never become its raison d'être. Once they do, the whole community suffers. Anything that could be implemented to lower crime rates would also lower revenue, making it far less likely to be implemented: fewer infractions mean fewer opportunities to collect court fees. And while the legislators pushing the new law Vigilant is leveraging talked a good game about sending fewer people to overcrowded jails, the governments overseeing these agencies still have budgets to meet and law enforcement to lean on to make that happen. Actually achieving the bill's stated aims would mean a steady reduction in court fees, which would lead to the loss of the "free" plate readers. And no one wants that, at least not on the government side of things.
