Q&A: Penn Criminologist Richard Berk on the Future of Artificial Intelligence

Artificial intelligence has great potential to transform many facets of our society, from cars to health care to the way the criminal justice system uses information about arrest records.

Richard Berk, a University of Pennsylvania professor of criminology and statistics, has studied machine learning and AI as they relate to criminal justice. At the request of the Government Accountability Office and the National Academy of Sciences, he presented his research to about 50 representatives of Silicon Valley, AI-focused NGOs, universities and the federal government.

The aim of the conversation: to investigate where artificial intelligence fits now and in the future, from the perspective of those whom Berk describes as “without skin in the game.” He talked about myths surrounding AI, as well as what he considers eye-opening about this technology.

1. What are some of the most frequently misunderstood facets of AI as it relates to criminal-justice decision-making?

To start, none of these tools is perfect. They’re going to make some mistakes in accuracy, and there’s inevitably going to be some unfairness. So one common myth is that we should reject these machine-learning tools because they can make mistakes and can have built-in biases. To which I say, those performance concerns are absolutely right, but typically there will be fewer mistakes and less bias than under current practice. Don’t let the perfect be the enemy of the good.

Another myth is that these tools are doing something mysterious, but that’s really not true. The algorithm searches through very large datasets and looks for associations that might help predict outcomes. That’s all. There are, however, many ways to search, and doing a full search is usually computationally impossible, so we apply all kinds of algorithmic shortcuts. Those, I agree, can be tricky to understand, but there’s nothing mysterious going on with the basic process. In criminal-justice applications, we’re trying to find out which features of individuals predict subsequent crime. 
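
As a rough sketch of what that search looks like in practice, here is a minimal example in Python with scikit-learn. Everything in it is hypothetical: the feature names, the synthetic data, and the random-forest choice are illustrative assumptions, not a description of Berk’s actual tools or any real dataset.

```python
# A minimal sketch of the search described above: an algorithm combing a
# dataset of individual-level features for associations that predict a
# later outcome. All features and data here are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features for n individuals.
X = np.column_stack([
    rng.integers(18, 70, n),   # age at the decision point
    rng.poisson(1.5, n),       # prior violent arrests
    rng.poisson(3.0, n),       # prior non-violent arrests
])

# Synthetic outcome (e.g., re-arrest in a follow-up window), built to
# depend loosely on the features so there is something to find.
logit = -2.0 + 0.6 * X[:, 1] + 0.2 * X[:, 2] - 0.03 * X[:, 0]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "algorithmic shortcut": an ensemble of decision trees rather than an
# exhaustive (computationally impossible) search over all associations.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```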

2. You’ve said there are legitimate concerns about these tools, for example, what happens when they intentionally exclude characteristics like race or gender. Can you explain?

Men commit the vast majority of violent crimes. It should not be surprising, therefore, that when you look at longer prior records for violent crimes, you’re going to pull in a lot of men. That’s inevitable. But by purging the data of gender, you’re going to be less accurate, so you’re going to make more mistakes. That means if, for instance, you’re considering a parole decision, you’re going to release more people who are really a threat to public safety, and you’re going to keep behind bars more people who aren’t. But you’ll be doing it more equally for men and women, which means that male and female offenders, their families and their potential victims are all going to be equally worse off.
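
To make that accuracy cost concrete, here is a small synthetic illustration in the same spirit: one model sees a gender variable, the other has it purged. The data, coefficients, and any resulting numbers are assumptions for illustration only, not claims about real magnitudes.

```python
# Illustrative sketch of the tradeoff above: purging a predictive attribute
# tends to cost accuracy. All data here are synthetic; the magnitudes are
# not claims about any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20000

gender = rng.integers(0, 2, n)        # 1 = male (hypothetical coding)
priors = rng.poisson(1 + 2 * gender)  # priors correlate with gender
logit = -2.0 + 0.5 * priors + 1.0 * gender
outcome = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_full = np.column_stack([priors, gender])
X_purged = priors.reshape(-1, 1)

for label, X in [("with gender", X_full), ("gender purged", X_purged)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, outcome, random_state=0)
    accuracy = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{label}: held-out accuracy = {accuracy:.3f}")
```

In this toy setup the purged model generally scores somewhat lower, mirroring the point about making more mistakes.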

There also are many different kinds of unfairness and tradeoffs between them. For example, the fact that men are overrepresented in prison compared to women may be a sign of gender bias in the criminal-justice system, but to fix that the algorithm has to take less seriously the violent crimes that men commit. In a search for equality of outcomes — that is, comparable fractions of men and women in prison — you introduce inequality of treatment because you’re treating the violent crimes that men commit as less serious than those committed by women.
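
A small numerical sketch, under assumed synthetic score distributions, shows the mechanics of that tradeoff: equalizing outcome rates across groups forces group-specific thresholds, which is exactly unequal treatment of identical risk scores.

```python
# Sketch of the fairness tradeoff above: forcing equal outcome rates across
# groups requires different thresholds per group, i.e. unequal treatment of
# the same risk score. Score distributions here are assumed, not real.
import numpy as np

rng = np.random.default_rng(2)
scores_men = rng.beta(3, 2, 10000)    # one group skews toward higher risk
scores_women = rng.beta(2, 3, 10000)  # the other skews lower

target_rate = 0.30  # detain the top 30% of each group: "equal outcomes"
thresh_men = np.quantile(scores_men, 1 - target_rate)
thresh_women = np.quantile(scores_women, 1 - target_rate)

print(f"threshold for men:   {thresh_men:.2f}")
print(f"threshold for women: {thresh_women:.2f}")
# The thresholds differ, so a man and a woman with the same score can be
# decided differently: equality of outcomes buys inequality of treatment.
```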

3. What takeaway message did you hope to convey?

First, there are inevitable policy tradeoffs that Congress has to address or at least recognize. If they search for perfection, they’re going nowhere. Second, a significant amount of good data is usually a prerequisite for high-quality AI procedures. Too little interest and effort have been put into collecting good data. Without good data, no algorithm that trains on that information will perform as it should.

4. What did you glean after hearing from people from so many sectors?

My consciousness-raising takeaway was about cybersecurity. For all of our important computer systems and electronic data, we’re concerned about hacking. And the concern is justified. It’s done by other countries, it’s done by cybercriminals and it can be very sophisticated. We are no doubt engaged in many of the same activities ourselves. So you wind up with battles between computers — ours versus theirs. Everyone is trying to protect their computer systems and data while attacking the computer systems and data of others. When it comes to cybersecurity, we’re in a new kind of arms race.  

5. Finally, what was most eye-opening?

Despite the heterogeneity of the people in the room and the wide range of AI applications out there — and they’re remarkable, by the way, science fiction coming to life — there was amazing consensus about what the key issues were and what needed to be done. I found that striking. This meeting was not like trying to herd cats.