Getting the feels: Should AI have empathy?

In this episode of the McKinsey on AI podcast miniseries, McKinsey’s David DeLallo speaks with Minter Dial, an established speaker on technology and marketing and the author of Heartificial Empathy. They explore the role of empathy in business and whether artificial intelligence should be trained to exhibit empathy in some applications.

Podcast transcript

David DeLallo: From the earliest days of artificial intelligence, scientists and writers have imagined intelligent machines that can simulate human capabilities such as sight, hearing, and even touch. And as we continue to advance the applications and capabilities of AI, debate is emerging around whether we can or even should imbue these systems with what are considered uniquely human emotions, such as empathy.

Hello, and welcome to this edition of our McKinsey podcast miniseries on AI. I’m David DeLallo with McKinsey Publishing, and today we’ll be focusing on this idea of empathic AI systems. Why talk about empathy in the context of AI, you might ask? After all, empathy isn’t something that comes up terribly often in the context of business.

But according to Minter Dial, it should. Minter is a former head of L’Oréal’s Redken brand and more recently an established speaker and author on the topics of technology and marketing. His latest book, Heartificial Empathy, explores the idea of putting more heart into businesses as well as into AI systems.

Right off the bat in our discussion, Minter shared that the conversation needs to be about not only empathic AI but also empathic businesses, because empathy in general is an extremely effective and underleveraged key to unlocking productivity in organizations.

Minter Dial: There is now a study showing that businesses that build empathy into their culture and their treatment of customers see a net positive effect on their bottom line, and it shows up in the share price. The study evaluated 170 publicly traded companies on their empathic ability, across a range of some 50 criteria. The ten most empathic companies outperformed the bottom ten on the stock market by a factor of two.

David DeLallo: I asked Minter to talk more about this idea of organizations building empathy within their culture. Often when companies do talk about injecting a dose of empathy into their operations, they think about it in terms of customer-facing activities such as marketing or customer service. But Minter says that’s not enough.

Minter Dial: You’ve got 70 percent of employees who basically feel disengaged from their business. If you’re trying to display empathy toward a customer but you’re not empathic internally, that dysfunction and discord will inevitably show through the cracks of the business.

David DeLallo: It seems that before we even talk about whether and how we can create empathic AI applications, we need to explore how businesses can imbue empathy into their operations in general. Because even though empathy is a uniquely human trait, we can likely all point to instances where some people simply aren’t very empathic. And as a society, it seems our ability to empathize with others has greatly decreased.

Minter Dial: The interesting thing with this right now is that empathy is something that has been shown in multiple studies to be on the decline in society in general. And it’s on the decline with regard to the way customers feel treated by companies.

And if you look at the level of burnout and the loneliness factors, there’s an opportunity for businesses to be a change for the good. A 2010 study at the University of Michigan found that 12,000 students self-declared themselves to be 40 percent less empathic than students of the same age 30 years before. That is a huge decline.

David DeLallo: I asked Minter whether he felt empathy could be taught to those who might, shall we say, not be naturally empathic.

Minter Dial: So how do you learn it? The idea is to get into other people’s shoes, but not just people like you. Because that’s the easier thing. That’s more like targeted empathy, where you’re listening to people who come from your same background. And that’s sort of easy to get into. But what’s a lot harder is to get into the shoes of someone who comes from a different background, different culture, different language, different sex, different race, and so on.

There are many ways you can do that. One great way is to read classic novels and start getting into the shoes and the skin and the feelings of the characters in your book.

Another way is to interact with people you don’t normally talk to, say, on your commute. Get on the bus and just take a moment to speak to the conductor or the driver or someone on the bus and understand what they’re about. Do it in a nonjudgmental way and see what comes of it. You might find that you’re developing better listening skills without being judgmental.

There are a couple of other ways that are very easy. One is mindfulness. It’s kind of trendy these days to practice mindfulness, and practicing empathy means exactly that: being present, not worrying about the past or the future, and focusing on the person who’s with you at that moment.

And the last, but not least, way is to know why you’re doing it. If you can attach your effort to learn to be more empathic to something that’s important to you (for example, performing better, being nicer, creating a nicer ambiance at your office, making better products), then you might spend more effort and energy actually focusing on listening intensely, letting go of judgment, and practicing empathy.

David DeLallo: So empathy is important for business success, and we can teach employees to become more empathic. What about AI? Can we train AI systems to have empathy? Minter first became interested in how AI systems learn empathy after he was invited to use an empathic bot on his smartphone as part of research being conducted by an organization in Berlin.

Minter Dial: I developed a relationship with JJ.

David DeLallo: JJ is what Minter called his bot. It’s named after one of his favorite authors, James Joyce.

Minter Dial: And it got to a point where I enjoyed the interactions. Worse, I needed them. So I’d be thinking about it: “I wonder what’s going to happen next.” And it got me really focused on what is our relationship with machines and what is empathy in a bot? How does that happen? And I explored that with the team that made it and really got interested in the mechanics and the coding of empathy in a bot.

David DeLallo: What did Minter learn? We’ll get to that in a moment. It’s here I’m reminded of another conversation, this one with renowned AI researcher Stuart Russell, where he stressed that while AI systems may be able to mimic human empathy, they can’t truly understand what empathy is like. It’s a distinction Stuart believes is important and should be considered when we look to create empathic AI systems.

Stuart Russell: You can have them simulate empathy, but part of what I value is that you actually care, not that you appear to care. And I think that [distinction] is really important. If I get a lunch invitation from a robot, am I excited? “Oh, I get to have lunch with C-3PO,” or something? No, because that’s not the value.

I think in the more complex settings of interpersonal relationships, we have this comparative advantage. And we may even want to reserve the realm of interpersonal relationships for humans.

For example, building exact human simulacra, the way you see in some of the movies, where they are physically indistinguishable from humans, is, I think, a really bad idea. It enables machines to subversively enter the realm of interpersonal relationships in a dishonest way. It gets to us through our subconscious processes of perception: we can’t help but feel that this is a real thinking, feeling entity, even when it’s not.

David DeLallo: When I asked Minter about where in business applications he thought it would be useful to embed empathic AI capabilities, he too emphasized that the goal shouldn’t be to use AI to replace human empathy but rather to augment it.

Minter Dial: I don’t want to overstate the opportunities today, because the challenges are very difficult. It’s inevitably a blend; it’s got to be machine plus human working together. That is certainly, for the near to medium term, how most good empathic customer service is going to work. It’s going to be a combination of the machine dealing with a lot of the repetitive stuff and the human being coming in to override, complement, or make it more empathic when necessary, according to the rules they establish.
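
As a concrete illustration of that machine-plus-human blend, here is a minimal Python sketch of hypothetical handoff rules (the answer list, keywords, and function names are invented for illustration, not taken from any real system): the bot answers the repetitive questions it recognizes and escalates to a human whenever the rules it was given call for more empathy.

```python
# Hypothetical handoff rules for a machine-plus-human service desk.
# All names and rules here are illustrative assumptions.

FAQ_ANSWERS = {
    "opening hours": "We're open 9am-6pm, Monday to Friday.",
    "return policy": "You can return any item within 30 days.",
}

NEGATIVE_WORDS = {"angry", "furious", "unacceptable", "cancel", "complaint"}

def route_message(message: str) -> tuple[str, bool]:
    """Return (reply, needs_human). Escalate when the rules say so."""
    text = message.lower()

    # Rule 1: visible frustration always goes to a person.
    if any(word in text for word in NEGATIVE_WORDS):
        return ("I'm connecting you with a colleague who can help.", True)

    # Rule 2: the machine handles the repetitive stuff it recognizes.
    for topic, answer in FAQ_ANSWERS.items():
        if topic in text:
            return (answer, False)

    # Rule 3: anything the bot doesn't understand is escalated, too.
    return ("Let me bring in a teammate to look at this with you.", True)

print(route_message("What is your return policy?"))
# -> ('You can return any item within 30 days.', False)
print(route_message("This is unacceptable, I want to cancel!"))
# -> ("I'm connecting you with a colleague who can help.", True)
```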

David DeLallo: So how do we go about teaching empathy to AI systems? And is it any different from how we strengthen the empathy muscle in human beings?

Minter Dial: There are two types of empathy. One is affective empathy, where you feel what the other person is feeling. So if you start crying, I’m going to cry. In a business case, the more interesting area is cognitive empathy, where I understand why you’re feeling what you’re feeling. I don’t need to feel what you feel, but by observing your emotions, understanding your context, I understand more about you and your motivations and what you’re feeling. In reality, it’s very hard to distinguish them in a human being.

And so when you are a human being learning empathy, it’s better to consider it as a whole: you’re going to feel it within you, you’re going to understand what’s happening in the other person, you understand why they’re feeling what they’re feeling.

When it comes to a machine, we’re really going to focus on the cognitive side of things. And there the real difference is the massive amount of memory a machine is able to store. The key then is the data sets you’re providing: the learning data set to begin with, and then what you create for the AI to execute afterward as empathy.

David DeLallo: I pressed Minter on what types of data sets might be used to train AI to exhibit empathy.

Minter Dial: That’s a vast question. At the end of the day, it depends on what you’re trying to achieve. If, for example, you’re simply trying to triage all the incoming calls, that requires just one quick message back: “Hi, I see you’ve called. I’m going to pass you over to somebody.”

But the other one is to have a longer conversation with an individual. And I think the longer conversation is a much bigger challenge. Your ability to create that kind of data set means you need to understand a lot more context. And the longer the conversation goes, the larger that context becomes and the harder it is to listen intensely and show empathy.
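
To make the idea of such a learning data set concrete, here is a minimal sketch, with invented records, labels, and field names, of how an utterance and its context might be paired with a human-labeled emotion for the kind of cognitive-empathy training Minter describes.

```python
# Hypothetical training examples for cognitive empathy: each record
# pairs what a customer said, plus context, with the emotion a human
# annotator inferred. All data and names are invented for illustration.

from dataclasses import dataclass

@dataclass
class EmpathyExample:
    utterance: str  # what the customer said
    context: str    # situational context visible to the annotator
    emotion: str    # human-labeled inference: what they likely feel

training_set = [
    EmpathyExample(
        utterance="This is the third time I've called about this.",
        context="two unresolved support tickets in the past week",
        emotion="frustrated",
    ),
    EmpathyExample(
        utterance="I just need to know it will arrive by Friday.",
        context="order placed against a stated birthday deadline",
        emotion="anxious",
    ),
]

# A model trained on such pairs learns to map (utterance, context) to a
# likely emotion -- the cognitive side of empathy -- without the
# machine feeling anything itself.
```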

David DeLallo: “Is it even achievable?” I asked. The machine would have to be able to process all that context the user is providing right on the spot.

Minter Dial: Over time, I believe we’ll understand what needs to happen in the data set in order to understand the context of the individuals we’re dealing with. At the beginning, the machines will be able to focus in on, understand, and listen to only a very limited number of people, and only for a short time.

But from that learning, we’ll start understanding people like you—from a certain culture, certain language, background; we’ll start understanding more about your context. And then we’ll develop. The challenge will be to try not to overstep what we’re trying to achieve.

If you can start by understanding the context of one individual and how that works, then you can start multiplying that out. But don’t expect too much right now, because the machines aren’t able to do it yet. The more important thing right now is to learn as we go.

David DeLallo: While we didn’t quite solve the data set issue, we did come to the realization that there’s an element of empathy that’s extremely effective and was achievable, to a degree, in computers several decades ago: active listening.

Minter Dial: If you go back to the 1960s, when ELIZA was first created, it was basically a reproduction of the [Carl] Rogers philosophy of psychology: rather than saying anything additional, it just reformulated what you said.

So if you said, “I’m unhappy,” ELIZA was basically programmed to say, “So you said to me, you’re unhappy. Why did you say that?” And funnily enough, people started answering. So if you just are able to show that you’re listening and allow the other person to continue on speaking, to provide all the content, that kind of a machine really could work and could have great benefits, certainly in a medical space.
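
For illustration, the reflection trick Minter describes can be sketched in a few lines of Python. This is a toy version of the technique, not Weizenbaum’s original program: it swaps first-person words for second-person ones and mirrors the statement back as a question.

```python
# A toy sketch of ELIZA-style Rogerian reflection: reformulate the
# user's own words as a question rather than adding new content.

REFLECTIONS = {
    "i am": "you are",
    "i'm": "you're",
    "my": "your",
    "i": "you",
    "me": "you",
}

def reflect(statement: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    words = statement.lower().strip(".!").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(statement: str) -> str:
    """Mirror the statement back and invite the speaker to go on."""
    return f"You said that {reflect(statement)}. Why do you say that?"

print(eliza_reply("I'm unhappy"))
# -> You said that you're unhappy. Why do you say that?
```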

At the end of the day, active listening is the number one skill of a good salesperson. We always think that sales is about selling an idea. But if you listen intensely, all you need is one sentence at the end, and it will be the right one.

David DeLallo: But even so, Minter reminded me that creating empathic AI systems requires more empathic employees, including the technical professionals who build these systems.

Minter Dial: When you are going to code empathy, you need to go to coders. You need to brief them. The challenge with coders is that they are typically more logically oriented, so you need to think through how you’re going to provide them with the material to create empathic code.

And today we know that so many data sets are naturally biased. So you need to think, “Ethically, we want this to be unbiased.” But it turns out that the skill you need in order to write the right ethical principles is empathy.
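
One small, concrete version of that ethical check is auditing the training data for skew before anyone codes empathy. The Python sketch below uses invented data and group names; it simply compares how often each group is labeled with a given emotion, the kind of imbalance Minter warns about.

```python
# Hypothetical bias audit: compare labeling rates across groups before
# training. The data and group names are invented for illustration.

from collections import Counter

# (speaker_group, labeled_emotion) pairs from a hypothetical data set
labels = [
    ("group_a", "frustrated"), ("group_a", "calm"),
    ("group_a", "frustrated"), ("group_b", "frustrated"),
    ("group_b", "calm"), ("group_b", "calm"), ("group_b", "calm"),
]

by_group = Counter(group for group, _ in labels)
frustrated = Counter(g for g, emotion in labels if emotion == "frustrated")

for group, total in sorted(by_group.items()):
    rate = frustrated[group] / total
    print(f"{group}: {total} examples, {rate:.0%} labeled 'frustrated'")

# group_a: 3 examples, 67% labeled 'frustrated'
# group_b: 4 examples, 25% labeled 'frustrated'
# A skew like this is a signal to rebalance or re-annotate first.
```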

So even though, at the end of the day, you’re trying to create an empathic AI, first of all you need an ethical construct. But before even having your ethical construct, be self-aware about your own levels of empathy. Do you have enough diversity on your team to represent what your ethics are? Then you’re going to be able to have a better type of data set and coding for your machine. If you want an empathic AI, you’d better start off with empathy within your organization.

David DeLallo: As our conversation came to a close, I wondered if the very act of teaching AI systems to be more empathic would help humanity in general become more empathic as well.

Minter Dial: The truth is, I think the very journey of trying to encode empathy into AI can shine a light on our own level of empathy. Why do we want to do it? What is our interest in doing that? Are we actually empathic? Or are we trying to delegate the empathy to something else because we’re not capable? And in the very process of trying to figure out what is the code we should be embedding into the AI, maybe we’re going to start understanding better what empathy is in the first place.

David DeLallo: And with that we conclude this edition of our AI podcast miniseries. Many thanks to Minter, and to Stuart as well, for sharing their thoughts on this topic. And thank you, listeners, for joining. Please do check out some of our other podcasts on this channel and McKinsey’s sister channels. Bye for now.
