Photo Credit: Joshua Lott/Pool Photo via AP
From September 23 to 28, 2019, the Markkula Center’s Director of Technology Ethics, Brian Green, attended two Vatican-sponsored meetings on AI. AI and Faith, an organization seeking to promote conversations about AI with religious organizations, interviewed Dr. Green on his experience. This interview is cross-posted on the Markkula Center website from AI and Faith with permission.
First, Brian, please summarize your background for us, especially as it relates to AI. What, particularly, has drawn you to the topic of ethical AI?
I have been working on technology ethics issues, including AI, for about 15 years, starting with my graduate studies at the Graduate Theological Union in Berkeley, where I had a strong focus on biotechnology and ethics. Back then AI was not such an immediate concern; however, I was fascinated by technological progress in general and couldn't help but study it and its ethical impacts. At the time I didn't yet realize how much technology was going to be a defining factor in the human future. I was just following my interests.
As I reached the end of my PhD, I applied for a lot of teaching positions, mostly in religious studies and theology departments, but none of them were interested in technology. But Santa Clara University’s School of Engineering – out of perhaps 50 jobs I applied to – was the one place that wanted me. They could see the technology ethics problems, not just on the horizon, but already here. This perspicacity on the part of the leadership of the School of Engineering proved to be a providential stroke of luck for me. I began teaching there in 2011, and then joined the Markkula Center for Applied Ethics in 2013.
More recently, the field of AI has grown incredibly fast, thanks to major advances in data, compute, and algorithms, all converging at once. It was in the midst of this technological explosion that I realized that AI desperately needed more attention from ethics, and quickly. Others at the Markkula Center agreed, as did our industry advisors, so we started in that direction, for example, by applying to join the Partnership on AI in 2017, and I attended their inaugural All Partners Meeting in Berlin that year.
By January 2018, the Markkula Center had created my current role as Director of Technology Ethics, thanks to crucial funding from several donors. Within a few months, Shannon Vallor (SCU Philosophy), Irina Raicu (Markkula), and I had designed our Ethics in Technology Practice corporate tech ethics workshop materials and had started making crucial connections to corporations and other organizations such as the World Economic Forum; I also designed a course on AI ethics at the School of Engineering, among other initiatives.
Since then, the ethical issues associated with AI have only increased, thus demonstrating the wisdom of my colleagues, our advisors, and our funders. Looking forward, you could say I am in the AI ethics space because it needs so much attention and better decision-making, but speaking more historically, I am where I am because of many good decisions made by a superb team of people, not only at the Ethics Center itself, but extending out into the Silicon Valley community.
What do you see as the most important AI ethics issues?
As a general-purpose technology, AI is going to have widespread effects upon human life, much like the adoption of electricity or computers. With such a generalized impact foreseeable, the first ethical issue is merely recognizing the potential for impact in many different places: from the way we transmit information (social networks mediated by AI), to relationships (AI is crucial to matching algorithms), to war (where AI-powered cyberweapons are growing in power), to finance (where AI is changing the way money moves in the economy), and so on. In my AI ethics course I cover 16 areas of concern, but I have also written an academic article on 12 concerns, and a Markkula Center "top ten" list of concerns. So, rather than answer this here, I think I should instead refer to those resources.
What do you believe the faith communities can add to the discussion about ethical AI?
Faith communities represent core aspects of human existence: our connection to meaning, common life, relationships, and morality, among others. AI needs to be constructively connected with these core aspects of humanity or else it is going to be harmful. For example, AI-powered addictive online games are one way to potentially make a lot of money, but if they destroy relationships and de-skill us away from social interaction, then they are really causing much more damage to society than is offset by the money they make. Faith communities can see these harms and draw attention to them, to hopefully improve the situation.
Similarly, but on the more positive side, most faith communities have long histories of helping people and promoting human development, so they can be perceptive to new ways that AI might benefit society, perhaps in ways that the technologists themselves might not be aware of. Some faith communities have already contributed to these efforts, for example by helping technologists see how they can work together to better consider the common good, or by connecting technologists to vulnerable groups who could benefit from new technologies.
Most of all, faith communities can help provide humanity with meaning and purpose, because as we grow in power we really need to make sure those powers are directed towards good ends, or we will come to live in a terrible world.
Tell us about the Vatican conference you’ve just attended. Any big takeaways, or big controversies, from the conference? How serious is the Vatican about trying to play a significant role in the ethical AI discussion?
I just attended the Common Good in the Digital Age conference, and a pre-conference meeting, together running from September 23 to 28. The big takeaway for me is that the Vatican is in fact very serious about AI, serious enough for the Pope himself to speak on it in an address to the conference-goers.
It seems to me like this initiative may be one of the Pope’s personal interests, taking up the precious attention that the Vatican could direct towards anything in the world, including much more traditional areas of interest such as education or healthcare. He is doing this, I think, because he sees that AI has the potential to revolutionize everything that the Vatican works on, including education, healthcare, and more.
As with previous technological revolutions, such as the Industrial Revolution, the Vatican is aware that this revolution will be disruptive to civilization, and when we speak of disruption to civilization, what we need to keep in mind is that particular human lives will be disrupted: human beings with goals, and families, and feelings. Disruption hurts people – it injures their autonomy, purposes, social position, mental state, and many other things. Among other things, the AI Revolution's disruption will cause social stress, and bad actors will exploit that social stress towards bad ends. In fact, I believe we are experiencing this stress already, as visible in populist movements and the power of misinformation and disinformation campaigns around the world.
AI has the power to bring tremendous benefits, but as we learn to wield this power and learn how it needs to be promoted or limited, we are in a dangerous time, and someone needs to take the lead in steering the technology towards its better uses. The Pope is steering the Vatican, and the Vatican is hoping to help the world by convening these conferences where the issues can be explicitly discussed and actions planned.
What particular AI issue most worries you, or most excites you, in the mid- to long-term future?
There are a lot of shorter-term issues that need attention right now, such as AI bias and preventing the abuse of social media through AI. So mid- to long-term issues can easily be overlooked. But I’ve always been interested in the largest scale and longest-term questions, so I am biased towards thinking that way. The mid-term issue that most worries me is existential risk, especially as AI contributes to already existing risks such as nuclear and biotechnology. If nuclear weapons and AI are combined then the choice to end human civilization is taken out of human hands. And with AI-powered biotechnology, especially synthetic biology, we will begin to even more strongly grasp the reins of evolution itself, not only for other creatures, but ourselves – and even worse, learn better how to kill and destroy the life-forms that Earth uniquely holds of all the places we know in the universe.
My long-term concern, if humanity survives these existential threats, is how we will use AI on ourselves, and whether we will make poor decisions on what human life is really all about – our meaning and purpose. Humanity has grown so powerful in the past few centuries, and billions of people have been lifted out of poverty and suffering, and yet in our hearts we are still discontent and we have, as a civilization, lost all shared sense of meaning and purpose.
If we ask a question like “What is human life for?” or “What is the purpose of the United States?” many people are not good at answering. Meaning has been drained from reality, and in the absence of good meanings and purposes, pathological ones will arise, because humans desperately desire purpose, for many of us even more than we desire life itself. This is a dangerous situation of great power and little direction – and also a place where faith communities can do so much more to make the world a better place, by giving humanity good purposes.
What excites me most about AI is the prospect of gains in efficiency that will possibly allow humanity to spend more time on what truly matters in human life, like character development, spending time with family and friends, and helping those in need. The philosopher Hans Jonas quotes the playwright Bertolt Brecht, saying "First comes food, then morality." In other words, desperate people do desperate things, regardless of ethics. If this is true, then as our basic needs are met, morality ought to become easier. Similarly, Dorothy Day quotes Peter Maurin, one of the founders of the Catholic Worker movement, as saying that the purpose of the movement was "to make that kind of society where it is easier for men to be good." We are making this world right now, I think, but we are not fully pursuing the goodness aspect of it – in other words, the world is easier, but we are not using it to be good.
Therefore, what we need to do now is to fulfill the visions of these, and other, sages of the past. We can make a better world, we are making the conditions for such a world to exist, but we are not fully pursuing all of the good that we can do. Therefore ethics is truly the task of this generation, like never before. We should rise to the challenge, and I hope, and believe, that people and communities of faith will do their part.