
    Artificial Intelligence: Can We Say No?


    An Interview with Michael Watkins, Professor of Leadership at IMD

    by Patrick Chaboudez, Host of “Tout un Monde” at Swiss National Radio

    Lausanne, July 20, 2017


    Patrick:  What we will be talking about is the consequences of artificial intelligence. It’s coming; it’s already here in some sectors. It will be coming in a huge wave in the next 10 to 20 years. So, we’ll start with that: what’s the medium-term perspective as far as jobs are concerned?

    Michael:  In a report released in 2017, McKinsey estimated that automation will replace about 45 percent of all the work that people do today and that 60 percent of professionals will see at least 30 percent of their jobs automated. The net impact is huge. The projections say that it could result in the loss of more than a billion jobs globally. Now, the timeframes obviously are debated: is it 15 years or is it 20?

     

    Patrick:  A billion jobs – it’s huge. What kind of jobs: low-skill jobs, or mid-level jobs? The kind of job we do?

    Michael:  It will vary broadly depending on the occupation. Some will be more or less completely eliminated – there’s every prospect that much, perhaps even most, retail work will disappear over time. Knowledge work, at least to the extent that people are being creative, probably will persist. But when intellectual work involving judgement can be modeled, it can be replicated by machine intelligence, and it’ll disappear. So it’s going to impact very broad swaths of society – not just people at the low end.

     

    Patrick:  Will it create some other jobs? Work to replace the ones that are lost?

    Michael:  I call this the “nowhere to run problem” because if you think about previous waves of automation – the move from the farm to factory, the move from the factory to the service industry – there were always places for people to go. It took time; there was adjustment, new skills had to be created. But it’s not at all clear that there’s any place to go here. We’ve come to the “last bastion” of human capability; it’s not physical capability anymore – that was long ago replaced by automation – and now we’re replacing mental capability. That’s what we have and it’s not at all clear to me that there’s going to be anything significant that’s going to replace the loss of those jobs.

     

    Patrick:  So, what does it mean for human beings? It means a lot of leisure time – that could be nice – but what about the income? How do we “pay for the leisure?”

    Michael:  If you ask this question of the really high-end tech people that are driving this, they know there’s going to be a problem, and their solution is exactly what you said: we get to have leisure; people are given a basic income to support themselves; the nature of the economy changes. But the problem is that it will have a profound impact on people’s identity. I don’t want to be idle; I don't think you want to be idle. I’m not particularly enchanted by the idea that machines are going to take over the work I do. I love the work I do. Many people love the work they do. So, the notion of an economy where mostly people are idle and they’re being provided some basic income, that’s a technologist’s solution for the problem, but it’s not a real, social solution.

     

    Patrick:  You had a way of putting it – a kind of “identity crisis.” Will it lead to such a big identity crisis?

    Michael:  Well, I think it probably will, actually, because if you think about some examples – the recent example of the world’s Go champion being beaten by a machine, or the world’s chess champion, Kasparov, being beaten by a machine, the impact on identity is huge. They spent their entire lives becoming great at something and now a machine replaces them. How do they feel about that? How do we feel about that, when a billion people are experiencing that and they have no place to go in terms of identity?

    And we already see the impact, in the US, for example, of what happens when people are left behind: they become angry; they become polarized. And so we will see the world’s greatest collective identity crisis: “Why do we exist? What are we for?” That’s as big an issue as the economic ones.

     

    Patrick:  Well, the answer at some points in history would have been religion because, okay, that would give meaning – it still gives meaning to a lot of people in some parts of the world, obviously – but would religion be a help?

    Michael:  Religion could be a help. But there are two sides to the coin: it can be a source of comfort, it can be a source of meaning; it can also be a source of division and polarization and a place people go to essentially protest what is happening. So it’s not at all clear to me where that’s going to break. But the question of meaning to me is a very profound one.

    And there are near-term examples of this, by the way. You know, one of the things I find fascinating about this whole situation is no one seems to feel like we can say no. No one seems to feel like we can hold back the tide of this. There’s an assumption that it’s just going to happen. Autonomous driving is a great example: studies show that in the US and Europe there are about six million people today employed as heavy-truck drivers; over the next 10 to 15 years, more than half of those jobs will likely vanish. That’s three million people’s jobs that are just going to go away.

    The economic benefits of autonomous driving are put forward as “up to 30 to 40 percent reductions in the cost of transportation.” But no one asks the question, “What is the impact of the loss of that employment? Where are those people going to go? How is that new economy going to work such that those savings mean anything?” But the economic logic is sort of just going to roll us over and it’s like there is nothing we can do to say no to this. But we could – right? And autonomous driving is a simple example of this – right? We could say no to that.

     

    Patrick:  Well, the answer, at least for private driving, is that it’s your own choice, and it would seem that most people like to drive – they love to drive – so why would they change? But that’s a private answer. Autonomous driving in companies is, of course, a different question. But the answer could also be – and we see it in some ways where consumers change how they buy things because they are more attentive to ecological consequences – that maybe part of the answer lies in us, in the consumer, in the citizen who would perhaps “wake up.”

    Michael:  There’s what’s known in economics as the “collective action problem,” which is that collectively we all want a positive outcome, but each of us individually has incentives to kind of cheat on the margin, so we end up with an undesirable outcome. It’s like that in this situation: it would take a collective effort to say no. It’s not enough for any one individual to say no or any one company to say no – perhaps not even for any one country to say no, because the technology has a powerful economic logic. The people who own trucking companies, they’re going to look at that 30 percent cost saving and they’re going to say, “Wow – we want that. Sure, we’ll lose jobs but that’s not our problem.”

    What people aren’t looking at is the collective impact of all those decisions across all those industries. The cost savings don’t mean anything if there’s not a functioning economy anymore. They don’t mean anything if there’s not a stable society anymore. And at some point we’ve got to look at the big picture here and make some hard decisions. And what I see that’s really fascinating is no one is asking the question really, “Can we say no?” And if you do ask the question, they’re going, “Well, there’s nothing we can do about it. It’s inevitable.”

     

    Patrick:  You’re a professor at a management school here; you meet executives, plenty of them. Is that a question that bothers them, that they think about – or is there a kind of “blindness”?

    Michael:  The thinking is mostly about “How do we adjust to this?” This is true even at the level of policymakers. There was just a big meeting of central bankers in Portugal talking about AI – but the way it’s framed is not, “Do we go forward?”; the way it’s framed is, “How do we adjust?” And it’s understandable that people look at it that way, but it’s not clear to me that this is something we can adjust to. And I do believe we should be asking the question, “Is this a road that we want to go down?” – in a similar way to how we ask questions around nuclear power and nuclear weapons. There may be technologies that are just so disruptive and so dangerous that we need to stop and say, “Do we really want to do this?”

     

    Patrick:  On the other hand, we have some people who are very famous and very knowledgeable in that field – Bill Gates, Elon Musk, Stephen Hawking and some others – who are really quite frightened about the prospect, the threat of AI. Elon Musk, I think, is saying that it’s probably the biggest threat facing humankind. Are they not listened to? Do they not have any impact? Or are they delusional?

    Michael:  They’re not delusional at all. But they’re focused on an even longer-term problem, which is what happens when true general intelligence emerges in machines, when we have machines that literally function as consciousnesses but are far more intelligent than we can be, and probably will become exponentially more intelligent. That’s an existential problem. That’s a true “future of humanity/will we survive?” problem.

    But even closer in, we’ve got machine learning and the replacement of expertise that people have taken decades to develop, and that’s not going to threaten us existentially; it’s going to threaten us socially, economically and politically. So, I’m looking at the next 20 to 30 years; they’re looking at the next 50. I think they’ve certainly got a point. And, again, I think it’s a question of, “Is that a road we want to travel?”

     

    Patrick:  So, it’s a huge policy problem. Citizens should be worried. Politicians should take up the problem. So, what kind of regulation could we put in place? And a lot of people would be averse to it because you have economic gains in the short term.

    Michael:  That’s why I used the example of autonomous driving. Can we imagine saying no to driverless vehicles? Can we imagine saying, “We’re not going to go down” – if you’ll pardon the pun – “that road”? And how would that look and how would that be enacted? Because it’s decisions like that that are going to determine our future. If we leave it to evolution, technologically and economically it’s going to happen, without question. It will just be a matter of the timeframe for implementation. But there won’t be anything that will stop it. There’s got to be a conscious decision politically and socially that we just don’t want to do this, that the impact is too large. And if we wait until even bigger impacts happen, it will be too late.

    To me, autonomous driving is kind of the thin end of the wedge. It’s here, it’s now; we look at it and we say, “Well, why do we need that exactly? There are huge economic benefits for some, but it’s going to destroy a lot of jobs in the process and it’s going to help accelerate the whole process of making it acceptable for AI to replace people.” Can we imagine saying no – individually, collectively, from a governmental point of view, from a societal point of view, can we imagine saying, “That’s not a road we want to go down”?

     

    Patrick:  And, again, is it a problem for the business executives that you meet?

    Michael:  It’s not a question they ask because I don't think it’s a question they think they can ask, and I don't think it’s a question they think they can do anything about. I think, like so many people, they feel like, “There’s this wave coming and it’s just going to roll us over, and we’ve got to try and protect ourselves – but we can’t imagine erecting a barrier to that wave.”

     

    Patrick:  Last question: of course we think about 2001: A Space Odyssey – it’s from ’68, ’69, and still quite a relevant movie in some ways; in it, it was possible to pull the plug. Is it possible now, with AI?

    Michael:  That’s the question that really sits at the bottom of this. If you look at some technologies where we’ve done something similar in the past, like nuclear power or nuclear weapons, they were relatively easy to contain. They’re highly technical; you need very specialized materials, radioactive materials and so on, to do it – so you can contain them. Making AI happen is as simple these days as having computers, having networks and having the right algorithms, so containing it is going to be much more difficult. I don’t think it’s necessarily impossible; for autonomous driving to go forward, governments need to approve it – if they decide not to, then it doesn’t. Can we imagine them doing that? Can we? I’m not sure.