How Companies Can Avoid the “Whack-a-Mole” Approach to Ethics
The fusion of advances that define today's revolutionary technologies—from artificial intelligence to robotics, genetic engineering to the Internet of Things, and more—has brought great benefits to many, but also risks and harm.
IBM, for example, stumbled when its facial recognition software was found to have a disproportionately high error rate when analyzing the faces of darker-skinned women.
Microsoft’s AI-powered chatbot Tay was supposed to get smarter the more it interacted with humans on Twitter. Instead, a group of users deliberately bombarded Tay with inappropriate and denigrating messages, teaching it to mimic racist, sexist, and antisemitic language.
How we use these and other technologies for good is the challenge behind the Responsible Use of Technology project, launched in 2018 by the Switzerland-based World Economic Forum. The initiative aims to help world leaders in business, civil society, and government tackle problems through practical tools.
The 51-year-old non-profit reached out to the Markkula Center’s Director of Technology Ethics, Brian Green, to co-chair the project. Together, the two groups have been researching and writing a series of case studies on major companies that are working to operationalize responsible technology practices. Three cases have been completed thus far, starting with Microsoft, then IBM, and most recently, Salesforce.
We sat down with Green to talk about some of the highlights and processes of working with the tech behemoths.
How do some companies approach ethics?
To use Salesforce CEO Marc Benioff’s colorful terminology, companies often approach ethics like whack-a-mole—a problem pops up and you whack it down again. Another way to think of it is like firefighting: if a fire appears, you put the firefighter on it, and the fire goes out. But the bigger question is: Why are we building everything out of flammable materials? And leaving cans of gasoline everywhere, and matches?
Companies need help with fire prevention. Our goal is to build a library of case studies from which other companies can learn real-world practices they can implement to spot ethical issues and keep them from growing into problems.
While you're working on these studies, does Markkula offer its ideas and advice along the way? Is the Center being paid to work on these case studies?
If you’re an ethics center, the only thing you have is your good reputation. So, the case studies are not a quid pro quo. These reports for the WEF are all descriptive work rather than advice. We take what the companies tell us and focus on the ethical lessons they have learned and the changes they’ve made as a result. There’s a constant tension in writing these papers, though: the companies naturally want to make themselves look as good as possible, yet you need an insider perspective to understand the inner workings of a problem. Ultimately, we want to give a more academic and neutral perspective. So by the time the case studies are published, we’ve rewritten them over and over, trying to strike the right balance.
There are other instances where Markkula does do paid work with companies, and that’s prescriptive work. When a company says, ‘Hey, what should we be doing here?’ then they pay us for that advice. But that’s a business-type exchange, covered by a non-disclosure agreement, for fixed periods of time. We don’t want to become beholden to anyone, but we also want to help as many people as we can, in whichever way is most effective.
Companies generally value profit over the good of the world. Beyond improving their public image, what’s their motivation to embrace ethics?
Technology is really, really powerful right now, and it gives us the ability to change society dramatically. If we don’t use technology to change society for the good, we are all going to live in a terrible world. Some of these companies are out in the forefront and have recognized this, and it’s not just a matter of reputation. It’s not just a matter of being trustworthy, either. They have to actually live in the world that they’re creating. And because tech companies are producing the technology that is shaping the world, if they create a horrible world where things are going wrong, that’s not good for anybody.
You can go back to what Jesus said: ‘To whom much is given, much is required.’ Or you can go to Spider-Man, whose Uncle Ben told him, ‘With great power comes great responsibility.’ It doesn’t matter who you appeal to; the logic is the same. This power has given them responsibility, and they need to use it well; otherwise we’re not going to have a happy future together.
Salesforce approached your WEF partnership to tell its story, and one part of that was an incident in 2018 concerning its contract with U.S. Customs and Border Protection, which at the time was involved in separating migrant children from their parents at the U.S.-Mexico border. Employees asked the company to drop the contract. Benioff resisted, yet the controversy spurred him to hire a Chief Ethical and Humane Use Officer to guide Salesforce through these and other challenges. What’s changed?
Salesforce has always been generally interested in trying to be a good company, and that culture was strong enough that employees felt empowered to protest when they saw something they thought was wrong. That is a good thing, but the CBP incident helped the company recognize that ethics has to be more than good intentions. It takes hard work too: good thoughts need to become good practice.
So, since then, they have worked to grow a corporate culture of ethics around responsible innovation that considers diverse perspectives. For example, the company has five guiding principles for ethical use: protecting human rights, privacy, safety, honesty, and inclusion. These principles are implemented within the products themselves; one example is a cloud service called “Einstein Content Selection,” which highlights where certain attributes, values, or categories could contribute to bias, including age, race, gender, or ZIP code data.
All new hires in tech, marketing, or product roles at Salesforce are also sent to a “boot camp” that trains them to develop an ethics-by-design mindset. The company provides a wide range of training resources, including ethical benchmarks for anyone building artificial intelligence systems. These are meaningful first steps, but, like everybody, they’ve still got work to do.
Did any common themes emerge in the case studies?
You have to have strong leadership at the top to advocate for ethics. Microsoft actually went through a pretty big culture transition over time. When Satya Nadella came in as the new CEO in 2014, he really pushed that transition towards ethics. It was not an emphasis before. In fact, I know someone who’s worked at Microsoft for a long time. He said the corporate culture back in the 1990s was very much all the bad things you hear about start-ups in Silicon Valley. That’s the way Microsoft was back in the day.
But Nadella basically said, “Look, we’ve got a lot of power now, and we really need to use this power responsibly.” He’s not the first person there to say that; there have been other people at Microsoft saying it in a grassroots sort of way, like Eric Horvitz, Microsoft’s Chief Scientific Officer, who is deeply involved in AI. Nadella had to make the case to the company’s own employees that it was going to be worth it, and that they should actually change their behavior and the way they think about things.
What do you say to employee skeptics?
Some people will say, “Oh, this is a waste of time. Why are we doing this? It’s not useful. This doesn’t do anything for me. It doesn’t do anything for the company.” But ultimately, you make the case for these sorts of things by showing there is a business case: you put in the money now in order to prevent those fires later on.
You also do that because of your customers. You’re making yourself trustworthy, basically, because if your customer is a bank, or the U.S. government, they need to be able to trust the product they’re buying from you. And if you’re having a lot of ethical problems, they’re not going to trust you, and they’ll go with somebody else they think they can trust.
IBM, Salesforce, and Microsoft all agree: if you want people to trust you, you need to be actually trustworthy, because if it’s a deception, it will be found out, and then you’re even worse off than if you hadn’t done anything. The ethics effort has to be real, and with these three companies, I think it is.
Finally, how do you decide on case studies?
We will work with a company as long as they have genuinely good resources to share. We’re working with two other companies right now; one is in Silicon Valley, and the other is not. Generally, if a company is doing something internally and they come to us and say, ‘We think this is a case, and we have resources to share with people,’ we will say, ‘OK. Tell us what they are.’ But if it turns out that they don’t actually have those resources and they’re just trying to use us for ethics-washing, we have to say, ‘Sorry. We don’t think this is going to work out.’