
The future of predictive policing?

The company Palantir hit the headlines this week as it made its debut on the New York Stock Exchange.

Known for its cutting-edge use of big data to support security intelligence and interventions, especially with three-letter agencies, the company has also drawn attention for its controversial work on predictive policing.

I wrote about Palantir a couple of years ago in Films from the Future and, given the company’s prominence this week, I thought it was worth posting a short excerpt from the chapter below.

As an aside, the company also has an interesting, if indirect, link to ASU. Early on, it received seed funding from In-Q-Tel, the venture capital arm of the CIA, whose Board of Trustees is chaired by none other than ASU’s Michael Crow.

The excerpt below is part of a longer piece on the dangers of using advanced technology to try to differentiate between propensities toward “good” and “bad” behavior, one that riffs off the movie Minority Report (hence the references to the film below). You can read a longer excerpt from the chapter on Medium.

And the connection to global futures?

As data-based technologies become ever more sophisticated, we’re facing a future that is increasingly shaped by the use of data to influence, control, and manipulate our lives. And if we’re serious about ensuring a just and vibrant future for everyone, we ignore this at our peril!

Predicting Bad Behavior

From chapter 4 of Films from the Future

In 2003, a group of entrepreneurs set up the company Palantir, named after the seeing-stones in J. R. R. Tolkien’s The Lord of the Rings. The company excels at using big data to detect, monitor, and predict behavior, based on the myriad connections between what is known about people and organizations and what can be inferred from the information that’s available. The company largely flew under the radar for many years, working with other companies and intelligence agencies to extract as much information as possible out of massive data sets. But in recent years, Palantir’s use in “predictive policing” has been attracting increasing attention. And in May 2018, the grassroots organization Stop LAPD Spying Coalition released a report raising concerns over the Los Angeles Police Department’s use of Palantir and other technologies to predict where crimes are likely to occur, and who might commit them.

Palantir is just one of an increasing number of data collection and analytics technologies being used by law enforcement to manage and reduce crime. In the US, much of this comes under the banner of the “Smart Policing Initiative,” sponsored by the US Bureau of Justice Assistance.* Smart Policing aims to develop and deploy “evidence-based, data-driven law enforcement tactics and strategies that are effective, efficient, and economical.” It’s an initiative that makes a lot of sense, as evidence-based, data-driven crime prevention is surely better than the alternatives. Yet there’s growing concern that, without sufficient due diligence, seemingly beneficial data- and AI-based approaches to policing could easily slip into profiling and “managing people” before they commit a criminal act. Here, we’re replacing Minority Report’s precogs with massive data sets and AI algorithms, but the intent is remarkably similar: use every ounce of technology we have to predict who might commit a crime, and where and when, and intervene to prevent the “bad” people from causing harm.

Despite the benefits of data-driven crime prevention (and they are many), irresponsible use of big data in policing naturally opens the door to unethical actions and manipulation, just as we see in Minority Report. Yet here, real life is perhaps taking us down an even more worrying path.

One of the more prominent concerns raised around predictive policing is the danger of human bias swaying data collection and analysis. If the designers of predictive policing systems believe they know who the “bad people” are, or even if they have unconscious biases that influence their perceptions, there’s a very real danger that crime prevention technologies end up targeting groups and neighborhoods that are assumed to have a higher tendency toward criminal behavior. This was at the center of the Stop LAPD Spying Coalition report, which raised fears that “black, brown, and poor” communities were being disproportionately targeted, not because they had a greater proportion of likely criminals, but because the predictive systems had been trained to believe this. And here, there is a real risk that predictive policing systems will end up targeting people who are assumed to have bad tendencies, whether they do or not.
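To make this feedback loop concrete, here’s a minimal, purely illustrative simulation. It’s my own sketch, not code from any real predictive policing system, and every number in it is invented: two neighborhoods have identical underlying crime rates, but the historical record over-represents one of them. Patrols follow recorded crime, and crime is only recorded where patrols are present, so the initial skew never washes out.

```python
# Hypothetical sketch of a biased-data feedback loop in "predictive" patrolling.
# Assumptions: identical true crime rates, skewed historical records, and
# patrols allocated in proportion to past recorded crime.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.3          # identical underlying rate in both neighborhoods
recorded = {"A": 10, "B": 20}  # skewed starting data: B is over-represented
TOTAL_PATROLS = 100

for year in range(10):
    total_recorded = sum(recorded.values())
    for hood in recorded:
        # "Predictive" allocation: patrols follow past recorded crime
        patrols = round(TOTAL_PATROLS * recorded[hood] / total_recorded)
        # Crime is only *recorded* where patrols are present to observe it
        recorded[hood] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(patrols)
        )

share_b = recorded["B"] / sum(recorded.values())
print(f"Neighborhood B's share of recorded crime after 10 years: {share_b:.0%}")
# Despite identical true crime rates, B's share stays close to its initial
# two-thirds: the system keeps "confirming" the bias it started with.
```

The point isn’t the specific numbers; it’s that a system trained on skewed records keeps generating the very evidence that appears to justify its own assumptions.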

The hope is, of course, that we learn to wield this tremendously powerful technology responsibly and humanely because, without a doubt, if it’s used wisely, big data could make our lives safer and more secure. But this hope has to be tempered by our unfailing ability to delude ourselves in the face of evidence to the contrary, and to justify the unethical and the immoral in the service of an assumed greater good.

From Films from the Future (2018), Mango Publishing

*It looks like the Bureau of Justice Assistance Smart Policing Initiative was rebranded as Strategies for Policing Innovation (SPI, same acronym) sometime in the past two to three years. The original Smart Policing Initiative website is no longer functional, but an archived version can be accessed through the Wayback Machine.