Who Am AI to Judge?

Jan 23, 2019

You have been convicted of a crime.

Would you rather be sentenced by a judge or a computer?


It’s not a far-fetched question.

In the United States, judges weigh various factors laid out in sentencing guidelines, including the guilty person’s conduct, criminal history, and the severity of the offense. Today, computers can do the same job – but would that lead to better results, or just create more problems? Read on to learn more about the challenges citizens will face as machines increasingly make important decisions about their lives.

The Dangers of A.I.

Justice is supposed to be impartial, but sentencing is a judgment call, and numerous studies have shown that sentences handed down to African-American and Hispanic offenders in the criminal justice system tend to be harsher on average than those handed to white offenders. Some courts already use algorithms to guide judges on what punishments to impose. It’s plausible that judges could one day be replaced completely – by computers that use artificial intelligence to determine what length of sentence fits a particular crime.

Removing human emotions and biases should eliminate the problems of discrimination that afflict sentencing and criminal justice, right? Not exactly. It’s possible that the computer – even an artificially intelligent one – will just end up magnifying precisely the biases it’s meant to do away with.

Traditional algorithms take data and put it through a series of pre-programmed steps to come up with an answer. Artificially intelligent machines comb through vast amounts of data to find answers to problems that would otherwise require human intuition to solve.

But if the data feeding an algorithm or an AI program is skewed, whether because of bad information or historical bias in the criminal justice system itself, the results will probably be tainted too.
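To make that concrete, here is a toy Python sketch – not a real sentencing tool, and everything in it (the rule, the neighborhoods, the “historical” records) is invented for illustration. The first function follows fixed, pre-programmed steps; the second simply repeats whatever patterns, including the skew, it finds in the records it was given.

```python
# A toy illustration, not a real sentencing tool. Everything here is invented.

# Traditional algorithm: explicit, pre-programmed steps written by a person.
def guideline_score(prior_convictions, offense_severity):
    """Follow a fixed, human-written rule to produce a score."""
    return 2 * prior_convictions + 3 * offense_severity

# Made-up "historical" records: (prior_convictions, offense_severity,
# neighborhood, was_rearrested). In these invented records, neighborhood 1
# is over-policed, so rearrests show up there far more often.
history = [
    (0, 1, 0, 0), (1, 2, 0, 0), (2, 1, 0, 0), (3, 3, 0, 1),
    (0, 1, 1, 1), (1, 2, 1, 1), (2, 1, 1, 1), (3, 3, 1, 1),
]

# Data-driven approach: "learn" a risk estimate from whatever the records say.
def learned_risk(neighborhood):
    """Predict risk purely from the patterns found in the historical data."""
    outcomes = [rearrested for (_, _, n, rearrested) in history if n == neighborhood]
    return sum(outcomes) / len(outcomes)

print(guideline_score(1, 2))  # 8 -- same rule, same inputs, same answer
print(learned_risk(0))        # 0.25 -- looks "low risk"
print(learned_risk(1))        # 1.0  -- the skew in the records becomes the prediction
```

The arithmetic isn’t the point; the dependence is. A rule-based score is only as good as the rule someone wrote, while a learned score is only as good as the records it was trained on.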

There is already a disturbing precedent: in 2016, investigative reporters at ProPublica found that a pilot computer program used in courtrooms to determine how likely defendants were to offend again was “twice as likely to falsely flag black defendants as future criminals.”

These challenges of AI and algorithmic bias in the justice system illustrate a much broader problem. Rapid advances in artificial intelligence and big data mean computer programs may soon be able to detect diseases, make financial decisions, or drive a car better than human beings can. People will live longer, healthier, and more productive lives, but they’ll also be relying on computers to make important – or even life-and-death – decisions that other human beings make today.

Here are some of the vexing questions that governments, companies, and citizens will have to answer as AI plays an increasingly important role in our lives.

Who is responsible for ensuring that the decisions made by AI are ethical and just?

An elderly man steps off the curb, right into the path of an AI-powered driverless car. The AI must choose: hit the man or swerve dangerously into oncoming traffic. Who is legally liable for the decision that the AI makes? Here’s a less dramatic example: if an AI program denies your mortgage application, who can you appeal to? Accountability is critical if these technologies are going to win public trust. One option is for governments to regulate, as some already are. Europe’s tough data protection laws grant people the right to know why computers have made certain decisions about them – but how the law will be applied in practice is still an open question. Another option is for companies to develop their own voluntary standards for “algorithmic transparency” and other ethical issues arising from AI. We’ll see whether a solution emerges that can assure people that the decisions being made about their lives by computers are fair and just.

How does accountability work when some AI decisions are opaque, even to their programmers?

It’s not always possible to understand how an AI makes a particular decision. Some of the biggest advances responsible for putting modern AI on the map have come from a machine learning approach called “artificial neural networks” – which runs huge amounts of information through layers of simple processing units arranged in a way that loosely mimics how neurons are connected in the human brain. The neural networks ‘teach’ computers how to accurately answer narrowly posed questions. Within them, the digital equivalent of thousands of overlapping synapses is firing every millisecond. So even if you had access to the detailed source code that guided the AI, it might not tell you anything useful about what mistakes were made or which biases were amplified. You don’t understand exactly how your brain decides that the thing that just darted in front of your car is a harmless plastic bag, not a kid on a bike, right? The people programming the AIs that will power driverless cars don’t understand exactly how these programs make decisions, either. All they know is that when they design the network a certain way and feed it the data, they get a certain result.
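To see why, here is a deliberately tiny, hand-built sketch of a neural network in Python. The weights are made up so the example runs at all – a real network learns millions of them automatically – but they show what a network’s internals actually look like: lists of numbers being multiplied and added, nothing that reads like a reason.

```python
import math

# A deliberately tiny "neural network" with made-up weights, set by hand here
# only so the example runs. A real network has millions of these numbers,
# learned automatically from data rather than written by a person.
HIDDEN_WEIGHTS = [[0.8, -1.2, 0.4],   # input features -> hidden unit 1
                  [-0.5, 0.9, 1.1]]   # input features -> hidden unit 2
OUTPUT_WEIGHTS = [1.3, -0.7]          # hidden units   -> final score

def sigmoid(x):
    """Squash any number into the range 0 to 1."""
    return 1.0 / (1.0 + math.exp(-x))

def network_decision(features):
    """Push three input features through both layers and return a score."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in HIDDEN_WEIGHTS]
    return sigmoid(sum(w * h for w, h in zip(OUTPUT_WEIGHTS, hidden)))

# Reading this "source code" tells you the arithmetic, but numbers like 0.8
# and -1.2 say nothing about why a given input gets a given score.
print(round(network_decision([1.0, 0.0, 0.5]), 3))  # roughly 0.64
```

Scale that up to millions of learned weights and you get the black box: the network’s answers are reproducible, but the “why” behind any single answer is buried in arithmetic.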

Should AI be allowed to kill people?

Remotely controlled drones have already raised tricky ethical issues on the battlefield, but just wait until AI takes human fighters out of the picture completely. Governments are already examining when and how they might deploy so-called lethal autonomous weapons systems that could one day be capable of finding and killing enemy soldiers without human input. Lethal AI-powered robots would have a number of advantages over human troops: they would be expendable, wouldn’t require sleep, and – more controversially – wouldn’t hesitate to pull the trigger on a target in their sights. But who will be held accountable if an AI-powered robot accidentally kills civilians? Would lethal robots make governments more likely to wage wars? Militaries around the world are likely to step carefully as they explore potential uses of these systems, but they – and the citizens they protect – won’t be able to escape the tremendous ethical and safety questions that they raise.

Video

1. The Black Box
2. Ethics and A.I.

Further Reading

1. Machine Bias – An investigation into an algorithm used at various stages of the US criminal justice system to predict the likelihood of future criminal behavior.

2. The Dark Secret at the Heart of AI – An article that explains why it can be difficult, or even impossible, to know how a driverless car or other technology using artificial intelligence makes decisions.

3. How to Make AI That’s Good for People – An AI researcher’s perspective on how to ensure that artificial intelligence is designed with human needs and concerns in mind.

This post is part of Digital Revolution: Technology, Power, & You. Funding for this project was generously provided by Harold J. Newman.
