
Is Artificial Intelligence Going to Kill Us All?

Future building, it has to be said, is tough–really tough. Especially when the aim is to create a future that’s better than the past, and not just one that’s different.

The irony is that we live in a time when there is so much incredible potential to build a better future. Our knowledge, our understanding, our imagination and creativity, and our capacity for innovation all far surpass those of previous generations.

And yet, we have more ways of destroying, or at least seriously diminishing, what lies in front of us than ever before.

On the one hand, there are the in-your-face planetary threats, the charismatic megafauna of the global threats world: climate change, environmental pollution, and loss of biodiversity, all of them rooted in our myopic profligacy as a species.

Then there are the persistent challenges of social justice and equity that you'd have thought we'd be grown up enough to handle by now, but apparently not.

And on top of everything, there are threats from the increasingly powerful technologies we're creating which, despite our best intentions, have the capacity to rob us of the futures we aspire to.

Enter Artificial Intelligence …

As you may have gathered from the admittedly click-baity title of this post, artificial intelligence often ends up in this category of technologies that sit on a knife edge between incredible benefits and potentially catastrophic failure. Elon Musk, Stephen Hawking, and many others have warned about the dangers of runaway AI. And philosopher Nick Bostrom's book Superintelligence put the fear of ... well, AI ... into people when it came out in 2014.

Yet despite these fears, the risks of AI are often far more mundane, but no less serious for that. They are also incredibly challenging to wrap our heads around, as they involve often-subjective but desperately important areas like autonomy, justice, equity, and our ability to have control over our lives and how they play out.

To address this challenge, I was asked this past summer to put together a playlist on the potential risks of artificial intelligence for the ASU YouTube channel.

That playlist has just been released, and it’s one that I’d encourage you to check out:

Thinking differently about ethical and safe AI

This, I’ll warn you up front, is not a tedious collection of boring educational videos–far from it! (And apologies to anyone who was hoping for ten hours of interminable talking heads!)

For one, it's punctuated by two beautiful and thought-provoking videos by Joy Buolamwini, an inspiring "poet of code" and researcher at the MIT Media Lab.

For another, it includes a number of insights into the potential social impacts of AI, from smart and engaging experts, that may surprise you.

In the playlist, I draw on an eclectic collection of videos to explore some of the key social and ethical risks associated with AI. Not surprisingly, it covers Nick Bostrom's concerns, as well as his hopes, around "superintelligent" AI. But it also includes videos that debunk some of the myths that have grown up around the idea.

The playlist also gets into some of the subtler social risks associated with AI, including algorithmic bias and the challenges associated with facial recognition.

And through a combination of information and entertainment, it challenges viewers to think differently about the risks of AI.

New Perspectives

The bottom line here is that there are vitally important yet fiendishly complex intersections between the emergence of artificial intelligence and associated technologies, and our global future. And without gaining deeper insights into the nature of these connections, we’ll struggle to build the futures we aspire to.

And while watching this playlist won't provide any easy solutions, it may just help frame the questions we should all be asking if we want to see AI support a future that is substantially better than it might otherwise be, for everyone, not just a privileged few.