Activists Cheer On EU's 'Right To An Explanation' For Algorithmic Decisions, But How Will It Work When There's Nothing To Explain?

from the not-so-easy dept

I saw a lot of excitement and happiness a week or so ago around some reports that the EU’s new General Data Protection Regulation (GDPR) might include a “right to an explanation” for algorithmic decisions. It’s not clear that this is absolutely true, but it’s based on a reading of the agreed-upon text of the GDPR, which is scheduled to go into effect in two years.

Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them.

Lots of people on Twitter seemed to be cheering this on. And, indeed, at first glance it sounds like a decent idea. As we’ve discussed recently, there has been a growing awareness of the power and faith placed in algorithms to make important decisions, and sometimes those algorithms are dangerously biased in ways that can have real consequences. Given that, it seems like a good idea to have a right to find out the details of why an algorithm decided the way it did.

But it could also get rather tricky and problematic. One of the realities of machine learning and artificial intelligence these days is that we no longer fully understand why algorithms decide things the way they do. While this applies to lots of different areas of AI and machine learning, you could see it in the way that AlphaGo beat Lee Sedol in Go earlier this year. It made decisions that seemed to make no sense at all, but worked out in the end. The more machine learning “learns,” the less possible it is for people to directly understand why it’s making those decisions. And while that may be scary to some, it’s also how the technology advances.

So, yes, there are lots of concerns about algorithmic decision-making, especially when it can have a huge impact on people’s lives. But a strict “right to an explanation” may actually create limits on machine learning and AI in Europe, potentially hamstringing projects by requiring them to stay within the bounds of human understanding. The full paper on this more or less admits that possibility, but suggests it’s okay in the long run, because the transparency aspect will be more important.

There is of course a tradeoff between the representational capacity of a model and its interpretability, ranging from linear models (which can only represent simple relationships but are easy to interpret) to nonparametric methods like support vector machines and Gaussian processes (which can represent a rich class of functions but are hard to interpret). Ensemble methods like random forests pose a particular challenge, as predictions result from an aggregation or averaging procedure. Neural networks, especially with the rise of deep learning, pose perhaps the biggest challenge: what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture?
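To make that tradeoff concrete, here is a minimal sketch (scikit-learn, with invented feature names and synthetic data, so purely illustrative) of the gap between a model whose decisions come with readable reasons and one whose decisions don’t:

```python
# A minimal sketch of the interpretability tradeoff described above.
# Feature names and data are invented; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # columns: income, debt, years_employed
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A linear model's coefficients read as human-sized "reasons".
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(["income", "debt", "years_employed"], linear.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # sign = direction, magnitude = strength

# A random forest fits the same task, but its "reason" for any single
# prediction is an average over hundreds of trees -- nothing to print.
forest = RandomForestClassifier(n_estimators=300).fit(X, y)
print("forest says:", forest.predict(X[:1])[0])
```

The point is not that the forest is wrong; it’s that there is no single chain of reasoning to hand to a user.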

In the end though, the authors think these challenges can be overcome.

While the GDPR presents a number of problems for current applications in machine learning they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair.

I do think greater transparency is good, but I worry about rules that might hold back useful innovations. Prescribing exactly how machine learning and AI need to work this early in the process may create problems of its own. I don’t think there are necessarily easy answers here — in fact, this is definitely a thorny problem — so it will be interesting to see how this plays out in practice once the GDPR goes into effect.

Comments

orbitalinsertion (profile) says:

The number of useful algorithms ever used in a useful and decent manner will probably be so small that it is worth explaining them as far as possible, and otherwise explaining what is inexplicable. The personalized marketing, advertising, and content sorting, along with their invasive, commoditized data gathering, can fuck right off. Of course, the push in the EU otherwise seems to be the opposite: catering to BS whining from companies about how data protection makes things soooo haaarrrrd.

No one is good with data. They don’t secure it, they don’t properly anonymize what should be anonymized, and they use it for manipulative purposes. Let’s see the amazing algorithms and non-abusive uses of data first. There are some pretty interesting things one can do already, only those things mostly aren’t in any kind of general use or made available to the public. Never mind the increasing data and algorithmic processing that comes with the rise of the hideously awful IoT, or what governments can demand or slurp off the data or processed data.

Really, if this might impede some innovation, pretty much so be it. If it’s all that good, you’ll be able to explain it satisfactorily. I’d opt for more of this, as long as we don’t have control over how our data is commoditized or over what algorithms (mysterious human thinking included) affect us. If I don’t care about an explanation, I’ll just opt in. Oh wait, that isn’t a choice we get to make either.

anon says:

It's great news

It’s great news to have a right to an explanation.

The argument that computer algorithms cannot show valid reasoning is no excuse.

If – for example – a bank refuses me a loan, they can’t hide behind ‘the computer said so’. If the bank cannot show the computer’s reasoning because it’s a mystery to them, then the bank has to employ a human who can show why I wouldn’t qualify.

That way I have a fair chance at fair treatment.

Anonymous Coward says:

Re: It's great news

What if the reasoning the computer happens to come up with is race-related, based on the computer’s history with questions involving race?

What if one of the questions the bank asks is race-related, and later on the computer determines that people of a certain race are more likely to default on their loans? What if the computer then partly holds that against people of that race who apply for a loan, because the computer is just applying its algorithms?

What if these computer algorithms end up using race to partly determine someone’s likelihood of being a suspect in a crime? Or in a specific type of crime?

Now, if a computer were making medical diagnoses, there would be no problem with it using race as a diagnostic question, to figure out whether race is statistically associated with a particular diagnosis. But if the computer were using it as a loan qualification indicator or a crime suspect indicator …

Hephaestus (profile) says:

Re: Re:

Neural nets, evolutionary algorithms, and deep learning are inherently difficult to understand. Explaining why they came to a particular decision is almost impossible. When the model is sufficiently complex, and continuously updating itself based on the current input(s), it becomes impossible to describe the rationale for a given decision after the fact, as the “state” of the machine will have changed.
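A toy illustration of that moving-state problem (a sketch only; scikit-learn’s SGDClassifier stands in for any continuously updating learner, and the data is random):

```python
# Sketch: by the time anyone asks "why?", the weights that made the
# decision no longer exist. Synthetic data; requires scikit-learn.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()
model.partial_fit(rng.normal(size=(50, 4)),
                  rng.integers(0, 2, size=50), classes=[0, 1])

x = rng.normal(size=(1, 4))
decision_then = model.predict(x)[0]  # the decision a user might contest

# The model keeps learning from new traffic after the decision...
for _ in range(100):
    model.partial_fit(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50))

# ...so any after-the-fact "explanation" describes a different model.
decision_now = model.predict(x)[0]
print("at decision time:", decision_then, "| if asked today:", decision_now)
```

None of this makes explanation worthless, but it shows why “what were you thinking at the time?” is a harder question for an online learner than it sounds.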

Basically, the EU just shot itself in the foot where AI is concerned.

https://en.wikipedia.org/wiki/Genetic_algorithm
https://en.wikipedia.org/wiki/Artificial_neural_network
https://en.wikipedia.org/wiki/Deep_learning

Anonymous Coward says:

Re: Re: Inherently simple

These kinds of systems are inherently simple to understand. The range of solutions is very small. In effect, the process is a simple iterative solution with a selective change criterion. The number of iterations may be huge (as in, more than 7), but it is still computationally quite small.

When the combinations required exceed the number of atoms in the universe by many orders of magnitude then we can consider them complex.
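For what it’s worth, the loop this commenter has in mind really is short to write down; whether the surviving solution is easy to explain is a separate question. A toy genetic algorithm (fitness function and all parameters invented):

```python
# Toy genetic algorithm: iterate, mutate, select. The loop is simple;
# the question is whether the winner's structure is simple to explain.
import random

TARGET = [1] * 20  # invented goal: evolve a string of all ones

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)  # the selective change criterion
    survivors = population[:10]
    population = survivors + [
        [b ^ (random.random() < 0.05) for b in random.choice(survivors)]
        for _ in range(20)  # offspring: copies of survivors with 5% bit flips
    ]

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best)}/{len(TARGET)}")
```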

Anonymous Coward says:

Re: Re: Neural nets, evolutionary algorithms, and deep learning

Great point.

Explaining an algorithm to your average joe, in any fashion he is likely to understand, is a rather expensive prospect.

Few people understand this stuff in any meaningful way. The ones who are getting paid to study it are pretty much all working for the dark side. Those who understand some of it and are with the rebellion are likely to have difficulty finding ways to monetize technologies that are inherently designed to preserve civil rights.

Based on recent history, there is even some question as to whether product releases of civil rights oriented technologies are likely to get their authors locked up.

IMHO micro targeting creates a predisposition for psychological feedback loops. The way these are generated is not predictable, since a person’s interest in something is easy to distinguish, but the emotion driving that interest isn’t.

For example, I find a lot of drug company advertising disturbing enough to regard it as assault. There is no question that these ads are engineered with the intent of being as disturbing as they are, which makes the assault premeditated.

But if I reference any of them online, algorithms will decide to send me MORE advertising of that nature. Which, if I was unhinged, might result in the sellers of those products coming into a greater need of them.

So this loop is:

assault -> consumer complaint -> more intense assault

This is fairly simple, but the cumulative effects of all such social media ego-bait loops cannot reasonably be predicted. They do seem heavily weighted towards maximally leveraging base behaviors, though.

Or IOW, it isn’t unreasonable to suggest that micro targeting is parasitical to civilization itself.

Anonymous Coward says:

Re: Re:

Basically, in machine learning, the “inputs” are just huge amounts of raw data, and the outputs are patterns that the learning network has found. Sometimes those patterns aren’t obvious to a human, given the data size and how it’s being analyzed. Machine learning may not always look at data in a way that makes logical sense to humans, and that’s part of the reason why it’s such a powerful tool.

In the end, the computer basically “has a hunch” that there’s a pattern because of all the data it’s looked at.

Anonymous Coward says:

Re: Re: Re:

There are many problems with this too, though.

As everyone always says: garbage in, garbage out.

In my medicine example above, you could theoretically create a medical-doctor computer that has everyone sign up for an account.

Upon signing up, you enter a bunch of data into the computer: medical records, known allergies, symptoms, race, gender, birth date, diet, and answers to a bunch of questions and follow-up questions.

Later on, the computer makes a recommendation. That recommendation could be something as simple as a diet change (maybe you have a vitamin deficiency), or it might be something like a drug.

The computer can then ask follow-up questions: How well did the recommendation work in the short run? In the long run? What were the short-term and long-term side effects?

As more and more people enter all this data, the computer can use it to make better and better recommendations based on the input data.

The problem with trying to diagnose whether or not someone is guilty based on a computer algorithm is: how do you have an honest follow-up question session? Your inputs are basically much more limited. The computer can only really determine whether someone with these characteristics, under these circumstances, is likely to be convicted if prosecuted. But are those convictions the result of the person actually being guilty, or are they the result of biases in human guilt judgement? The computer doesn’t actually know if a specific person is guilty or not. The only thing it can ever know is whether a person with certain characteristics in certain situations is likely to be convicted if tried, and that itself is subject to all kinds of human biases.

For instance, say the person being tried is of a particular race, and assume the jury is of that same race. It could be that people of one race judging the guilt of someone of their own race are more likely to give a guilty verdict than people of another race judging someone of their own race, regardless of whether the person on trial actually committed the crime. What you can’t tell the computer as a follow-up is whether or not the person was actually guilty; you can only give it the results of the trial. So the computer can only use that data to figure out the likelihood of a guilty verdict, not whether the person actually committed a crime, since the computer doesn’t actually know. Hence garbage in, garbage out: if convictions of a specific type are garbage, then the computer’s output will be garbage relative to actual guilt. It’s ultimately a human judging guilt, and if that human judgement is garbage, so will the computer’s judgement be, since the computer can only base its results and statistics on the judgements you feed it.
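This point is easy to demonstrate with synthetic data. In the sketch below (everything is invented by construction), the “convictions” are generated with a deliberate bias against one group, and a model trained on those verdicts dutifully learns the bias:

```python
# Garbage in, garbage out: train on biased "verdicts" and the model
# reproduces the bias. All data is synthetic; requires scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
evidence = rng.normal(size=n)       # the thing that should matter
group = rng.integers(0, 2, size=n)  # a protected attribute

# Ground truth ignores group; the recorded verdicts do not.
actually_guilty = evidence > 1.0
convicted = (evidence + 0.8 * group + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([evidence, group])
model = LogisticRegression().fit(X, convicted)  # trained on verdicts, not truth

print("weight on evidence:", round(model.coef_[0][0], 2))
print("weight on group:   ", round(model.coef_[0][1], 2))  # nonzero = learned bias
print("agreement with actual guilt:",
      round((model.predict(X) == actually_guilty).mean(), 2))
```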

Anonymous Coward says:

Re: Re: Re: Re:

The bank loan example above is also a good example of where computers can give more objective results, because the answer to the follow-up question of whether or not someone defaulted on a loan is very objective to a bank. The bank can enter all this data into a computer, and the computer can learn, with experience, what characteristics result in loan defaults.

With a crime, the problem is that the follow-up question of whether or not someone actually committed the crime is itself often what’s in question, and the computer can only base its results on these possibly subjective follow-up answers, which can be tainted with human bias.

That’s not to say there can’t be objectivity to it. For instance a computer can look at a set of characteristics and determine the likelihood that drugs will be found on a specific property if searched. Perhaps objective results could be entered after a search into a computer and the computer can then use that data to help judge the merits of future searches. But even then that could be subject to bias in all sorts of ways. For instance if the police are more likely to search people of a certain race due to being racist and people of that particular race are more likely to have a specific character in common then the input data the computer is receiving itself may be tainted resulting in bias racist outputs even if the input questions don’t directly address race. It’s up to the police not to discriminate by race over who qualifies to have their data entered into a computer as a possible candidate to be searched, not to ignore the computer’s recommendations based on race, not to conduct searches based on race so that the post search results the computer receives are not based on race, and to ensure that the searches the police do conduct (ie: on properties) are just as thorough regardless of the race of the person being searched. Garbage in garbage out.

Anonymous Coward says:

Re: Re: Re:2 Re:

For instance, it could be the case that people of a specific race, age, or gender are more likely to drive a specific type of car.

If a bank is more likely to give loans to people of a specific race, the computer’s input data will be limited mostly to people of that race, and its results may not do a good job of reflecting people of other races.

Anonymous Coward says:

Re: Re: Re:2 watch out for trolls

I recall a while back stumbling on a random HTTP link generator that looped back on itself, with a delayed page load for each subsequent page. The purpose was to slow down inconsiderate web crawlers and fill them up with garbage. They called it a tar baby, if I remember correctly.

Seeding links to a number of these “tar babies” into the HTTP referrer field using a plugin could befoul quite a few micro-targeting databases.

Of course, Dr. Evil would insist that modifying the referrer field in your own software was a DoS. As if broadcasting data about your communications to unrelated third parties without your consent was somehow consistent with natural law to begin with.

Anonymous Coward says:

Re: Re:

Are we really building systems where even the program itself can’t articulate its own criteria?

There are only so many inputs into a system. If nothing else, a company should be able to explain what those inputs are – what the program could possibly be considering.

Indeed we are. A classic example happened some years back with a vision system. The military wanted a system that would automatically distinguish NATO from Soviet tanks. They built a neural network and fed it photographs of different tanks until it was able to identify them correctly. Then they tried using the newly trained system “in the field” and it failed abysmally. So they went back to the data and tried to figure out what was happening. As it turned out, the problem was the photographs they had used to train the system. For NATO tanks, since they had ready access to them, the photographs were nice and clear, well focused, etc. But the Soviet tank photographs were whatever could be taken surreptitiously: fuzzy, unclear, etc. What the neural network had actually learned was “clear, focused images = NATO tanks; badly focused, fuzzy images = Soviet.”
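Whether or not the tank story is literally true, the failure mode it describes is real and easy to reproduce with synthetic data. A sketch (invented “photos” as random pixel vectors, with exposure standing in for the focus difference in the story):

```python
# Sketch of the tank failure: two classes of synthetic "photos" that
# differ only in exposure, not content. The classifier learns exposure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def photos(n, dark=False):
    base = rng.normal(size=(n, 64))      # stand-in for pixel content
    return base - 0.7 if dark else base  # surveillance shots are darker

X = np.vstack([photos(200), photos(200, dark=True)])  # "NATO" vs "Soviet"
y = np.array([0] * 200 + [1] * 200)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))

# Well-lit photos of the second class fool it completely, because the
# model learned "dark = Soviet", not anything about tanks.
well_lit_soviet = photos(200)  # same content process, normal exposure
print("well-lit class-1 photos called class 1:",
      model.predict(well_lit_soviet).mean())
```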

Richard (profile) says:

Re: Re: Re:

Exactly!

The statement “we don’t know how the system works” is true of many new AI developments when they first break through. After about a year it stops being true, but by that time the MSM have lost interest. Hence the public gets the impression that we don’t understand how AI works. However, most experts (talking in private) will admit that we DO understand how these things work – the MSM is just much more interested in you if you say that you don’t.

DB (profile) says:

Yes, we are certainly building systems where no one understands the criteria.

That’s considered one of the major advantages of Machine Learning (ML), Deep Neural Networks (DNN), etc. You don’t have to pay programmers and experts for years to develop and test a system. You just train the network, do some automatic refinement of the structure, train a bit more, and you can magically solve problems.

It does work quite well, but the essence really is that no one understands what the structure is doing.

If you know how FFTs work, think of one of the intermediate results. The FFT is a very well understood network for calculating a result, yet most people couldn’t explain what an intermediate result represents.
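For readers who do know how FFTs work, here is the kind of intermediate state that remark points at: a radix-2 FFT sketch in NumPy where every stage’s values are fully determined and checkable, yet have no tidy standalone meaning.

```python
# The intermediate stages of an 8-point radix-2 FFT: correct by
# construction, verifiable against np.fft.fft, and still hard to "explain".
import numpy as np

x = np.arange(8, dtype=complex)  # any 8-point signal will do
n = len(x)

# Bit-reversal permutation, then log2(n) butterfly stages.
data = x[[int(f"{i:03b}"[::-1], 2) for i in range(n)]]
size = 2
stage = 0
while size <= n:
    half = size // 2
    twiddle = np.exp(-2j * np.pi * np.arange(half) / size)
    for start in range(0, n, size):
        for k in range(half):
            a = data[start + k]
            b = twiddle[k] * data[start + k + half]
            data[start + k], data[start + k + half] = a + b, a - b
    stage += 1
    print(f"after stage {stage}:", np.round(data, 2))  # what does this "mean"?
    size *= 2

print("matches np.fft.fft:", np.allclose(data, np.fft.fft(x)))
```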

Richard (profile) says:

Re: Re:

It does work quite well, but the essence really is that no one understands what the structure is doing.

Not so fast…

A lot of work is being done to understand how these things work, not least because they can go suddenly, spectacularly wrong. Currently, work is being done using the mathematics that also underpins general relativity to understand the multi-dimensional spaces that underlie these systems.

Anonymous Coward says:

Algorithm vs. Data Set.

In micro-targeted content, the data sets are becoming, and will continue to become, progressively larger. It is fair to say that increasingly effective manufacturing of buyer intent (i.e. committing psychological rape of unsuspecting citizens) is as likely to be due to increases in data sample size as to advances in algorithmic analysis.

Which means that the explanation could potentially be a basis for forcing disclosure of institutional surveillance by the corporate sector.

In any case, I look forward to watching the related litigation. Who’s bringing the popcorn and the rotten tomatoes?

Anonymous Coward says:

Re: Algorithm vs. Data Set.

“In micro targeted content, the data sets will and are becoming progressively larger.”

Computers are good at working with complete information. For example, computers excel at games like chess.

They aren’t good at working with incomplete information, which is what reality is filled with. For instance, computers struggle at games like poker when facing humans. Taking the logical move every time makes you predictable, while being too random could cost you. A predictable computer that never takes risks is one whose predictability people can exploit; never taking risks is itself risky. And a computer that does take risks may end up losing (it wouldn’t be a risk otherwise).

As data sets get larger and larger, the amount of incomplete information decreases. The problems are that gathering information is slow and expensive, and that by the time you have gathered it, it may be too late to act. By then the information might be less relevant, the benefits of having it might not be as great, and a competitor that acted earlier, taking a risk by making assumptions and assuming right, might have already overtaken you. With multiple competitors making different assumptions, someone is bound to have made the right assumption. Even for those who assumed wrong, acting might have been a better choice than gathering information. Another problem is potentially not knowing the accuracy of that data.

Anonymous Coward says:

Re: Re: Algorithm vs. Data Set.

You sound like an advocate for complete surveillance of everything. Vacuum it all up just because it is there, more is better – well, maybe not. What is the purpose of all this data analysis, and to what nefarious end could it be used? That is, what could possibly go wrong? These and other questions need to be answered.

Anonymous Coward says:

Public vs. private

I don’t see any comment about who this would apply to – governments, big companies, small companies, or individuals.

If someone is denied government disability benefits, then that is one thing.

If a person refuses to grant permission for someone to copy their photograph, then that is something else. Why do they grant permission to some and not to others?

For example:
IN THE UNITED STATES DISTRICT COURT FOR THE EASTERN DISTRICT OF LOUISIANA Lisa Romain, Stacey Gibson, Joanika Davis, Schevelli Robertson, Jericho Macklin, Dameion Williams, Brian Trinchard, on Behalf of Themselves and All Others Similarly Situated,
Plaintiffs,
v.
SUZY SONNIER, in her official capacity as Secretary of Louisiana Department of Children and Family Services,
Defendant.


Defendant’s threatened terminations of SNAP results from the DCFS’s pattern and practices…

plaintiffs… challenge the defendant’s policies and practices of terminating individuals…

questions of law and fact… whether defendant’s policies and practices…

Defendant’s practices… were deficient because they
failed to include in practice a fair system…

Richard (profile) says:

alphago

AlphaGo beat Lee Sedol in Go earlier this year. It made decisions that seemed to make no sense at all, but worked out in the end.

Actually, it didn’t. If anything, it was Lee Sedol who played like that – if you look at (9 dan) Michael Redmond’s analysis of the games, you will see that AlphaGo in fact made quite reasonable moves.

To be fair, the earlier program “Mogo” that first beat a pro (with a big handicap) some years ago did play strange moves, but things have moved on since then.

Anonymous Coward says:

Who would need to understand these decisions? An expert programmer? A mathematician? Or every person out there? If people like my parents need to understand the decisions a computer makes, there will never be a legal system again.
It is hard enough to explain to some people why the “magic” computer suddenly won’t print, or why restarting is an essential step in IT problem-solving, because they don’t need or want to know that there are 100 different services working together under the surface.
I wonder how it will go when possibly thousands of criteria have to be explained so everyone can understand.

Anonymous Coward says:

Re: Re:

or why restarting is an essential step in IT problem solving

The only reason restarting the computer is an essential part of problem solving is that it is the easiest way to cut the Gordian knot. We don’t want to, don’t have the resources to, or are too lazy to actually investigate why; we just want the problem gone (until next time).

Mark Allen (user link) says:

Right To An Explanation is Reasonable and Possible

As an inventor of Progress Corticon, the leading rules engine for automated decision processing, I find the right to an explanation not only rational but entirely feasible. Best-of-breed rules engines support this today by providing audit trails that fully explain the results of automated decision processing.

As the author explains, it is possible that parts of complex decisioning logic could be represented as algorithms that are not easy to understand. That said, for regulated decisions, such algorithmic logic must be constrained by clearly understandable business rules derived from policy or legislation.

An example is credit determination. Legislation requires non-discrimination based upon race, religion and ethnicity. Within the constraints of this legislation, an automated decision service may also apply predictive algorithms that determine propensity to default. All of this could be explained quite clearly in an audit trail of the decision result. Again, this is not only a rational request, but one that can be supported today by best-of-breed technology.
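A sketch of what such an audit trail can look like (a toy rules engine, not Corticon’s actual API; the rule names and thresholds are invented):

```python
# Toy decision service with an audit trail, in the spirit of the rules
# engines described above. Rules/thresholds are invented, not Corticon's.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                       # the human-readable "reason"
    applies: Callable[[dict], bool]
    deny: bool                      # True if firing this rule denies credit

RULES = [
    Rule("application complete", lambda a: True, deny=False),
    Rule("debt-to-income ratio above 0.45",
         lambda a: a["debt"] / a["income"] > 0.45, deny=True),
    Rule("predicted default risk above 0.20",
         lambda a: a["risk_score"] > 0.20, deny=True),
]

def decide(applicant: dict):
    trail = []                      # every rule that fired, in order
    for rule in RULES:
        if rule.applies(applicant):
            trail.append(rule.name)
            if rule.deny:
                return "denied", trail
    return "approved", trail

decision, audit = decide({"income": 40000, "debt": 22000, "risk_score": 0.31})
print(decision.upper(), "because:", "; ".join(audit))
# -> DENIED because: application complete; debt-to-income ratio above 0.45
```

The predictive risk score itself can still come from an opaque model; the audit trail records which understandable rule it tripped.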
