So the charge is "disturbing the peace," a vague catch-all for any so-called crime that doesn't violate some more specific law. The rationale for this charge can't be arbitrary, though. If the same behavior is perfectly legal outside of school, then you can't define it as a crime simply because it violates school rules. Otherwise the school district is in a position of legislating, creating new crimes, and it does not have that authority. A criminal charge for violating school rules against toy guns should not stand, for precisely this reason. The other underlying behavior cited for the crime was intending to threaten a teacher. Actually threatening a teacher could be a real crime, but there was no actual threat. The boys were arrested before a threat could take place, and merely intending to threaten is not a crime. This needs to be challenged in court, and the DA needs to be taught a basic lesson about the justice system in America.
I would be very surprised if there were not already a national ALPR database. Most of the funds that police departments use to purchase these systems are grants from various agencies under DHS, and the main focus of those grants is protecting critical infrastructure. It is hard to believe that such grants came without strings attached requiring the sharing of collected information. A large number of documents requesting and issuing these grants for police departments across the country were obtained by the ACLU for its report, issued last year, and there is always some redacted information in the contracts related to the grants. A lot of information is already shared with the fusion centers operated under DHS purview. Why not take just one more step and aggregate ALPR information from the 72 fusion centers? This proposal may simply be an effort to unify and improve the database so that there are no barriers to merging information from disparate databases, and so that queries are easier to conduct.
I can tell you of an instance where AT&T was throttling my bandwidth, and the evidence is fairly convincing. This happened a few years ago at my home. I have DSL service through AT&T, but I use a different ISP, Cruzio, which has a contract with AT&T to provide the DSL infrastructure and service to each customer. My service rate is nominally 1.5 Mbps, but I generally test out at a download max of 1.3 Mbps. I live in a rural area, so, yep, I can't get faster DSL service.

One day I noticed videos were pausing unexpectedly. I ran a speed test. The max download rate was consistently 384 Kbps. That is a very suspicious number (a standard DSL provisioning tier) and suggested that my line had been capped at that rate at the central office. I called Cruzio and asked if they knew what the problem was. The service rep said that a number of Cruzio customers in my area had recently run into that exact problem, and that they would call AT&T. Less than ten minutes later my max download rate was back up to 1.3 Mbps, and the problem has not occurred again.

My theory is that AT&T had a capacity problem at that central office as folks in the area increasingly adopted DSL in place of dial-up service. AT&T decided to handle this surreptitiously by capping individual DSL rates at a fraction of what my agreement with Cruzio stated, undoubtedly in violation of Cruzio's contract with AT&T. Those who complained were uncapped, but those who didn't suffered, perhaps unknowingly, with a lower rate. This may well have been temporary until equipment upgrades at the central office increased total throughput, but it does show that the telecoms are willing to quietly shaft their customers. Mine was a general problem, not just throughput from a particular site like Netflix. In the latter case, as this article points out, it is impossible for the end user to know whether a rate problem is due to congestion or to deliberate capping.
This is actually a good idea as long as the records are anonymized well enough. Anonymization of medical data can be difficult when dealing with rare diseases or medical conditions. Let's take a look at how the care.data system handles this.
"Your date of birth, full postcode, NHS Number and gender rather than your name will be used to link your records in a secure system, managed by the HSCIC. Once this information has been linked, a new record will be created. This new record will not contain information that identifies you. The type of information shared, and how it is shared, is controlled by law and strict confidentiality rules."
See, the database will not contain information that identifies you. Never mind that date of birth, full postcode, and gender together are enough to re-identify most individuals. Problem solved.
My comment about the Google bot was rhetorical. Of course, it is not up to the web spiders, or the companies that build them, to try to figure out if the builders of the website really wanted a page to be public or not. Outside of the convention of robots.txt, if a bot can read a page then it gets read, indexed, and cached. If ANSES had done the authentication and authorization correctly they wouldn't even need to use robots.txt. My point was that it is equally absurd to penalize a person who reads and caches a webpage that has no effective protection against unauthorized persons reading it. There is a cultural assumption that pages on the Internet are for public consumption unless there is some technical method which prevents straightforward navigation and reading. This is contrary to the usual trespassing analogies where the cultural assumption is that a place is private property and you are trespassing unless you have explicit permission.
Here we have a situation where the attempted webpage protection was completely ineffective. That allowed Google, and any other bot or human, to read, index, and cache a large set of pages that were intended to be private. You can't punish someone for doing a search and then reading the resulting unprotected webpages. Laurelli is being punished because, after reading those pages, he traveled back to the home page and saw that ANSES intended them to be accessible only after logging in. This is very screwed-up justice, and I will dare to offer this trespassing analogy:
Suppose you have a park in the US which seems to be public. You walk into the park, wander around, and then leave through the main entrance. At this entrance you turn around and see a sign, in Russian, that says "no trespassing." Is the government only going to prosecute those trespassers who can read Russian?
Effectively, there was no security, but why didn't the Google bot notice the login required on the home page? A person less technically astute than Laurelli would not have known they weren't supposed to be looking at these documents. It seems the security was supposed to limit access to the URLs to those who had logged in on the home page. I am speculating that the mistake was this: while someone was logged in (using the password "Fatalitas"), the Google bot came by and indexed all the linked pages without needing a separate login. Once the pages were indexed, and also in the Google cache, any person had access to them. Laurelli has been fined only because he admitted traveling back to the home page and noticing there was an authentication step. It seems that knowledge alone is enough to warrant a penalty. This goes beyond the matter of criminal intent being a required element of a crime. What we have here is mind-boggling: a crime is only a crime if you know it is a crime.
"a cryptographic function known as a "hash" -- a transformation that converts it into a unique string of characters -- it produces an encrypted version of the sender's message, ready to be decrypted with the recipient's key."
When I read this, my skepticism hit overload. A hash is a one-way mathematical function and, by definition, cannot be decrypted with a key. I figured that maybe it was just Andy Greenberg who misunderstood the algorithm. That appears to be true, but I will cut him some slack because, to be frank, Bram Cohen's explanation on Github sucks. From what I understand of the algorithm, it is rather clever. It goes to show that sometimes smart people don't have the ability to explain well what they know.
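To see why the quoted description cannot be right, here is a minimal Python sketch (my own illustration, not anything from Cohen's code) of what a hash actually is: a keyless, one-way function with a fixed-length output. There is no key anywhere, and nothing to "decrypt."

```python
import hashlib

msg = b"attack at dawn"

# A hash is a keyless one-way function with a fixed-length output.
digest = hashlib.sha3_256(msg).hexdigest()

# Deterministic: the same input always produces the same digest...
assert hashlib.sha3_256(msg).hexdigest() == digest

# ...but there is no key, and no way to "decrypt" a digest back into
# the message; even a tiny change to the input scrambles the output.
assert hashlib.sha3_256(b"attack at dusk").hexdigest() != digest

# The digest is 32 bytes (64 hex chars) no matter how big the input is.
assert len(hashlib.sha3_256(b"x" * 10_000).hexdigest()) == 64
```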
I haven't read the code yet, only the textual notes. So, this may not be correct, but here goes. The elements are:
- Cover text, for which there exists a set of short alternate segments. For each of these segments there is a single alternate which makes as much sense as the original.
- A shared cryptographic key.
- A value, which is the message to be hidden.
- The SHA3 cryptographic hash algorithm.
- A custom stream cipher, which is a variation of AES in Output Feedback (OFB) mode. An important aspect of this stream cipher is that the "encoding" portion of the program can find a set of segment alternates that, with the chosen key, produces the desired value (the message) as the first part of the cipher's encrypted output.
The first step, for the sender, is to encrypt the cover text with the chosen alternates using the shared key, an initialization vector (what Cohen is calling the salt), and AES in OFB mode. The initialization vector is the first 4 bytes of the SHA3 hash of the chosen cover text.
There is a packing step which adds a length prefix and a checksum. The resulting data can be posted on a public website.
The message receiver will also have the shared key and can apply the custom stream cipher to reveal the message.
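To make that structure concrete, here is a toy Python sketch of an OFB-style stream cipher with the IV derived as described. This is my own illustration, not Cohen's code: SHA3-256 stands in for his custom AES variant, and the segment-alternate search is omitted entirely.

```python
import hashlib

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # OFB-style keystream: each block is a function of the key and the
    # previous block only, so both sides can generate it independently.
    # SHA3-256 stands in here for Cohen's custom AES variant.
    out = b""
    block = iv
    while len(out) < length:
        block = hashlib.sha3_256(key + block).digest()
        out += block
    return out[:length]

def stream_cipher(key: bytes, cover_text: bytes, data: bytes) -> bytes:
    # Per the notes, the IV (what Cohen calls the salt) is the first
    # 4 bytes of the SHA3 hash of the chosen cover text.
    iv = hashlib.sha3_256(cover_text).digest()[:4]
    ks = keystream(key, iv, len(data))
    # XOR with the keystream; applying the same operation twice
    # restores the input, so sender and receiver run identical code.
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret key"
cover = b"The quick brown fox jumps over the lazy dog."
message = b"meet at dawn"

hidden = stream_cipher(key, cover, message)
recovered = stream_cipher(key, cover, hidden)
assert recovered == message
```

The point of the OFB structure is that the keystream never depends on the data being hidden, which is what lets the encoder search for cover-text alternates whose ciphertext begins with the desired value.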
There are some details which I am still unclear about. I am not a cryptologist, so I cannot evaluate this scheme.
Also true in this case. The black bar just overlays text that still exists underneath; highlight, copy, and paste reveals it. Newspapers should have all their reporters take a tutorial on redaction methods.
This trademark would also apply to clothing. What upsets me is that if this trademark survives I will no longer be able to buy my girlfriend a sexy candy striper dress to satisfy my fetish for, well, sexy candy stripers. I am outraged and will submit my opposition to the USPTO today!
Apart from the whole ubiquitous-word issue, Candy Land was the first thing that came to mind upon reading this. I played it as a child, which probably dates me. Hasbro owns the rights to the game and still markets it. However, King's "candy" trademark does not apply to board games but only to games on electronic devices (e.g., computers and smartphones). The interesting thing is that Hasbro also sells Candy Land on DVD to be played on a TV. That is not interactive enough to be called a video game, but it does conflict with the following claim in King's trademark filing: "Video disks and video tapes with recorded animated cartoons". Also, I don't see how Hasbro can be prevented from marketing an actual video game based on Candy Land.
There seems to be a conflict if Hasbro decided to market "Candy Land" in just about any way on electronic media, and particularly if it wanted to market electronic equipment or clothing by slapping "Candy Land" on the device or item. If Hasbro takes notice, I don't see how this service mark can survive the 30-day opposition period. Since it is likely there will be an Adam Sandler movie based on Candy Land, I am sure that Hasbro will not ignore this.
I think not. Prokofy Neva's writing style and strategy is to wear down critics, opponents, and other trolls with a seemingly never-ending stream of verbiage that contains hints of extensive knowledge but is, overall, not cohesive and often crosses the border into incoherency. You can never win playing on that field. Instead, you should counter-troll with tactics like the one used when multiple photos of her real-life face were floated in the sky above her Second Life abode. OOTb has a couple of similar characteristics but cannot approach the epic troll capability of Prokofy Neva.
Isn't warning the populace about dangerous weather one of the main reasons for the Emergency Alert System (EAS)? As far as I know, TWC isn't part of that system except as yet another media outlet.
An aside: my six-year-old son watches cartoons on YouTube, and on two occasions I have overheard the Emergency Alert System go off, warning about potential tornadoes in the local area. Obviously, the cartoon had been recorded off a TV in the heartland of the U.S., otherwise known as Tornado Alley. So whoever posted the video didn't care that it was interrupted by a severe weather warning; they posted it to YouTube anyway. Amazing! Now my son, who will never witness a tornado unless he moves away from the area, has experienced an emergency alert for one.
To put the NSA monitoring of cell phone (and landline) traffic in the U.S. into perspective: encryption only comes into play between the mobile handset and the base transceiver station (the local cell tower). The contents of a call are not encrypted within the trunks and switching equipment of the telecoms. Since the NSA appears to have its talons, and high-capacity Narus monitoring equipment, within the telecom infrastructure, it doesn't have to bother with decrypting call contents. The only reason it would bother to monitor handset/tower communications is where it doesn't have such core access, or perhaps when it has a particular target.
The news is about why the telecoms don't fix a well-known security weakness, not that there is a weakness. Attacks against A5/1 have been publicly known since 1994, and undoubtedly the NSA and GCHQ were able to crack it from the time of its initial adoption in GSM. The following is from the Wikipedia entry for A5/1:
"According to professor Jan Arild Audestad, at the standardization process which started in 1982, A5/1 was originally proposed to have a key length of 128 bits. At that time, 128 bits was projected to be secure for at least 15 years. It is now estimated that 128 bits would in fact also still be secure as of 2014. Audestad, Peter van der Arend, and Thomas Haug says that the British insisted on weaker encryption, with Haug saying he was told by the British delegate that this was to allow the British secret service to eavesdrop more easily. The British proposed a key length of 48 bits, while the West Germans wanted stronger encryption to protect against East German spying, so the compromise became a key length of 56 bits."
Firstly, I would think a home security camera recording would not fall under this law because the recording is incidental. At least in my view it is incidental, although I can see how it could be argued otherwise. I would also presume that if someone were charged because their home security camera caught a burglar, the public outcry would be tremendous. Given that, let's get to the comparison. Take a look at South Carolina law and be impressed at how harshly burglary is punished. Since this recording law is only triggered for first- and second-degree burglary (I don't really see how this type of burglary is automatically considered a violent crime, but South Carolina does), the comparison must start there. Second-degree burglary is punishable by up to 15 years imprisonment, compared to a 5-year max for recording it. First-degree burglary, which includes night-time burglary with no priors, is punishable by an amazing minimum of 15 years to life.
I remember, while growing up during the revolutionary days of the late 60s and early 70s, that people would bomb the towers supporting long-distance power transmission lines. My idea was to shoot cables over the lines with a crossbow to short them out. Not that I ever thought about doing that seriously; I am not even sure it would work. The rather conservative dad of a high school friend of mine, a civil engineer, said that somehow allowing the pumps that move water from California's Central Valley over the Tehachapi mountains to LA to run in reverse would destroy those pumps, and the repairs would take weeks. Nowadays one may be able to do that via the Internet, but you cannot ignore physical security. Cybersecurity is very sexy these days, the media loves to focus on it, and the expert color commentators they use, who likely stand to profit, find it a great way to stoke FUD.
I suspect whoever did this substation attack had similar motivations. The group that did it had some knowledge of the systems, but not enough to show it was some kind of insider attack. Four years ago, some fiber optic cables were cut nearby in San Jose, cutting communications to parts of Silicon Valley and Santa Cruz County. That may have been an insider attack, though authorities still don't know who did it or why. And all the heavy equipment at Granite Rock's Quail Hollow sand quarry in Santa Cruz County, CA was damaged when someone put a substance into the gas tanks that was very effective at destroying the engines. That happened, I think, last spring around the time of the substation attack.
Silicon Valley area, adjacent to the City of San Jose, CA, between US 101 and a 600 MW Calpine generating plant:
- Communication vaults for two communications providers were damaged prior to the substation attack: AT&T first, then Level 3 Communications. Fiber was cut flush with the conduit entrance to the vault to make repairs more difficult. The team apparently brought ladders or ropes to access the Level 3 vault.
- Although utility communications went through those vaults, the utility has alternate paths through microwave links, so communications to the substation were not interrupted.
- 911 service was affected by the interruptions. The AT&T cut severed communications to the three closest towns; the Level 3 vault attack cut off the generating plant's communications.
- The substation had fence alarm detection, cameras on the fence line, and card-reader access through the fence. Fence alarms triggered three times from bullets hitting the fence; the attackers never entered the substation.
- More than 120 rounds of 7.62x39 rifle ammunition were fired at the autotransformers. 10 of 11 500/230 kV transformers and 3 of 4 230/115 kV transformers were damaged and taken out of service. Only energized transformers were shot.
- Shots were fired primarily low on the radiators. More than 51,000 gallons of oil spilled, and transformers tripped on high temperature or low oil as cooling was lost.
- The first alarms came in about one minute after the first shots were detected.
- This appears to have been a team of multiple people, not just one or two: spotters, shooters, a communications attack, etc.
I believe in scientific research, and I have been a subject in many research studies as part of an internship I had at NIH. I think the roadside survey study is useful so that law enforcement doesn't get to justify laws and programs based on just a hunch. In particular, we are going to be faced more and more with DUI enforcement where drug use, not alcohol, is the underlying cause. There is a push by law enforcement for zero-tolerance laws or policies for illegal drugs without understanding the real nature of any driving impairment those drugs cause.
I do not support the police being involved in the roadside survey. However, I am also trying to clear up misconceptions about the program. The police are only used to direct traffic from the road onto the survey site; they are not present on the site itself. Anyone you deal with on the survey site, including the person who directs cars to a parking spot, will not be law enforcement. At that point you should be able to see signs, and be told, that this is a voluntary survey, and you should be able to drive back onto the road without participating or even stopping.
This study is a difficult one because it requires inconveniencing people while they are driving somewhere. The statistical accuracy of the study depends on a high percentage of those asked actually participating. Again, I think it is a mistake to use the police to encourage participation. The researchers should realize that this tactic will backfire and end up being counterproductive.