An algorithm that works as described (identifying individuals at risk of committing a crime) is working with *extremely incomplete* information for its modeling.
Now, using it to predict future lawbreaking and then comparing against future data (i.e., do they actually commit violent crimes?) could, assuming the model is clean, pin down innocuous causal factors that can be fixed without human risk or cost. But the way they're using it is most certainly Doing It Wrong.
Speaking as someone who works with cognitive models of human thinking and is quite familiar with the benefits and detriments of computer-run algorithms...
Whoever is championing the code should be taken out back and given some wall-to-wall therapy before being tied to a chair at the university's "Programmer Ethics" equivalent. And probably a basic class on algorithms, heuristics and the like.
And if their university doesn't cover such material... then that explains the problems, I suppose.
Computer decision-making should only be an *adjunct* to human decision-making in non-trivial circumstances. An algorithm is only as good as the worst of the people who created it AND the people who implemented it.
And while human decision-making, with on-the-fly application and development of new heuristics as needed, can generally adapt to situations with incomplete or incorrect information at least reasonably well (all things considered), computer decision-making just sits there and craps itself.
Fairly sure such terms don't work like that - that is, a company can't change the terms on someone willy-nilly.
I'm thinking of that... was it a jewelry seller? Suing over negative reviews.
Any changes are also limited by a court's interpretation of "reasonable". For example, declaring six months after signing a two-year cellphone contract (with really nasty penalties for breaking it) that the monthly price has increased by an order of magnitude is not kosher.
Which is why contract changes generally only take effect on 'turnover' (i.e., a renewal of the old contract isn't offered, only the new one). Companies tend to put "free to change at any time" in terms and conditions, but whether that actually stands up in court if they try to enforce the 'new' terms (unless, of course, the user has been prompted to agree to the new contract and has done so) is uncertain.
It goes back to the (groundless, surprise surprise) "Broken Windows Theory": because you see petty crime occurring at increased rates when serious crime does, the theory goes, stomping hard on vandalism and the like should drop serious crime rates.
Needless to say, it's a nice hearty mixture of confusing correlation with causation and "magical thinking" (i.e., the voodoo approach), and it has generally been debunked every time there's been a serious look at it.
And yet it endures in the form of Zero Tolerance; a fact which irritates me about as much as the "War on Drugs" and "War on Terror" do.
Amusingly, Amazon knows the seller's financial contacts (so they know where payment goes)... which gives the authorities a nice trail if/when the reviewer files a complaint and they're inclined to treat this as an actionable threat. And oh man, if there's *any* appearance of maliciousness targeting the reviewer... well, we've got threats, we've got action; that's enough to start an investigation.
So the seller is kinda risking *their* life, in a manner of speaking.
A nice bit of reversed appeal to authority there to go with your appeal to ignorance?
I'm thinking they don't need one to state their opinion on the matter.
And while I don't have a degree relating to religion or philosophy, I *do* have a solid understanding of both the scientific method and proper validation, and Scientology is full of bullshit pretending to be scientific.
[A]fter spending nearly a quarter billion dollars and over 4 years on its two TECS Mod programs
... okay, pay me, three more computer scientists (specializing in data processing/searching, interfaces/HCI, and artificial intelligence/computer agents), one person to coordinate it all, and six specialists in areas related to the computer system's objectives 100k a year each... and we could almost certainly have a good master plan in that first year *easily*.
If they are strong, they are hard to remember, and if you can remember them they probably aren't strong.
Not strictly true; current thought is that a nonsense or semi-nonsense passphrase is both easy to remember and difficult to crack.
For example, "Random guises fool Johnson". Pretty easy to remember. Direct brute force would be computationally impossible (given a secure algorithm, naturally). Even if the cracker knows it's a phrase, they don't know how many words or how long they are.
Let's say they guess four words and they've got a dictionary. There are about 171k English words in current usage; say the cracker goes for the 50k most used and the passphrase sticks to that 50k. That's 50,000 to the fourth power (minus a bit if you assume no duplicates), or about 6.25 quintillion possibilities.
And even one name or non-standard word jumps the attempts needed by orders of magnitude.
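The back-of-the-envelope arithmetic above can be sketched out; note the dictionary size, word count, and the guess rate are my own assumed numbers for illustration, not real benchmarks of any actual cracking rig:

```python
# Passphrase search-space arithmetic: 4 words drawn (with repeats allowed,
# order mattering) from the 50,000 most common dictionary words.
DICT_SIZE = 50_000   # assumed dictionary the attacker targets
WORDS = 4            # assumes the attacker even knows the word count

combinations = DICT_SIZE ** WORDS
print(f"{combinations:.3e} possible {WORDS}-word phrases")

# At an assumed (and generous) 10 billion guesses per second:
GUESSES_PER_SEC = 10_000_000_000
seconds = combinations / GUESSES_PER_SEC
years = seconds / (365 * 24 * 3600)
print(f"~{years:.0f} years to exhaust the whole space")
```

That works out to roughly two decades to try every phrase even at that assumed rate, and that's with the attacker already knowing the format; add one proper name or made-up word and the dictionary attack stops applying at all.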
“We are the people creating the future – not manufacturers of computers or cables. We are the extraordinary."
Yeah, umm, Mr. Artist? Lots of luck making electronic music without cables or, you know, electronic noisemakers. And computers with specialty software to eliminate background noises from your music. And so forth.
As for "creating the future"? Yeah, I would consider (granted, I'm a computer guy, so...) the people developing augmented reality, high-precision surgical robots, video streaming services and such to be making the future. Don't get me wrong; music is nice. But it's not going to connect me with video to a friend from across the world, it's not going to fix up my internal organs and it's not going to generate new vistas to interact with.
You arrogant twat. Know what the major thing on my phone is? Photos. That I take. And you want me to have to pay *you* for those? I fucking took them, asshat.