
U, Robot?

Michaela Nesvarova, Rense Kuipers

Imagine this: Clyde is your new office assistant. Clyde is a robot. He brings you coffee and spills the hot drink all over your new shirt. You are angry and scream at Clyde. He gets upset. He seems so sad, in fact, that you decide to go and apologize to him...

Wait a minute, though. Should you really be apologizing to a machine? Can AI, such as Clyde, truly be sad? Where is the dividing line between us and AI, and is this line disappearing? Let’s explore the grey area where a robot meets a human.

‘Artificial intelligence will be either the best, or the worst thing, ever to happen to humanity. … AI could mean the end of mankind,’ Stephen Hawking warned us during his recent speech at the opening of the Leverhulme Centre for the Future of Intelligence. Elon Musk and many other prominent innovators and scientists have also stressed the importance of making sure that AI ‘goes the right way’. At the beginning of this year, The Future of Life Institute even presented its AI Principles, a set of guidelines to ensure safe and beneficial AI. In other words, it seems that this groundbreaking technology is just around the corner, and that there is a collective fear of its arrival.

When AI is discussed or portrayed in films or books, it often leads to scary depictions of a war between machines and humanity, a war that doesn’t tend to end well for humankind. These worries of AI making people obsolete are, however, not shared by UT researchers such as Professor Dirk Heylen of the Human Media Interaction group: ‘Yes, machines will outperform us in certain tasks, but does that mean that they will dominate us? That fear is an expression of our primitive minds. In a way, we are still cave people who think in terms of myths. AI somehow always turns evil in our imagination, but I see AI as a useful tool, not as a potential rival.’

The Future of Life Institute

The Future of Life Institute (FLI) is a research and outreach organization that, in its own words, works to ‘catalyze and support research and initiatives for safeguarding life’. Founded in 2014, the institute is particularly focused on existential risks related to advanced artificial intelligence (AI). FLI’s advisory board includes, among others, Stephen Hawking and Elon Musk, but also Morgan Freeman and Alan Alda. According to the organization’s website, FLI is concerned that AI will trigger a major change and therefore wants to ensure that this technology remains beneficial. In 2017, the institute published the Asilomar AI Principles, a document consisting of 23 principles that offer a framework to help ensure that as many people as possible benefit from AI.

Artificial by nature

‘We don’t always have to frame the topic of AI in the sense of war and struggle. We can coexist,’ says Peter-Paul Verbeek, Professor of Philosophy of Technology. ‘Of course AI will be better than us, otherwise we wouldn’t be making it. The whole point of technology is to outdate us. Humans are often seen as animals with something extra, but you could also view us as animals that lack something. We lack the physical attributes to survive, we don’t have any large claws and we can’t even walk for the first year of our lives, but we have our brain, which allows us to add something to ourselves. Our default setting is that something is missing. Our nature is to be outdated by technology. We are artificial by nature.’

'AI is an extension of our mind.' - Dirk Heylen

From Verbeek’s point of view, AI could therefore be seen as a natural part of us, a mere extension. ‘Technology is what makes us human,’ he says. Dirk Heylen agrees that there is no ‘us versus them’ when it comes to people and intelligent robots. He explains that we don’t have to feel threatened by AI, because all technology comes from within us: ‘We are social animals and we have developed a system of collective intelligence. We have invented a complicated language, which allows us to communicate, make copies of our information and share it with others. Think of speech, writing, print, the internet. Our intelligence is much wider than our brain, and computers are a part of it. The computer, AI, is not an enemy, but an extension of our mind.’

Success?

Extension or not, machines can already beat us at chess and at Go, and even the best poker players should keep their cards closer to the chest. It seems that AI has come a long way and is surpassing us in several ways. However, there is a nuance to all of these success stories, says UT Professor of Technical Cognition Frank van der Velde. Our human ability to decompose a situation, for instance, is something AI can, at the moment, only dream of doing. Well, if it will ever be able to at all.

Professor Van der Velde specializes in using robots and artificial intelligence to understand how human cognition works. It’s a paradox, he readily admits. But a valuable one. ‘Our human way of sequential information processing is one of the most difficult things to understand. When you implement this concept into a robot that can perform tasks in a logical way and show the process to us, it could teach us how things are learned.’

The odd thing out

Take a moment to reflect on your own motor skills. You just turned a page, without ripping it apart. You may be taking a sip of your coffee while reading this, hopefully without spilling it. You’re doing all of these ‘basic’ moves without excessive thought. Yet these basic moves are the most difficult challenge for the current generation of robots, Van der Velde explains. ‘Especially when a robot has to perform multiple tasks at once, in a busy environment, it’ll have trouble making sense of the situation.’

‘Existing AI is still very limited,’ agrees Heylen. ‘There have been big advancements. We have a lot more data available and our algorithms are much better. When it comes to speech recognition, the IBM Watson system has the ability to understand conversational speech as accurately as humans do. However, the real challenge is not for the machine to understand the individual words, but to understand the meaning of the words, the emotions expressed in them.’

'SpongeBob is easy for children to process, but hard for AI.' - Frank van der Velde

There are plenty of examples of AI not doing that well, as Van der Velde points out: ‘Just recently, I was at a conference where they presented a robot that could recognize cars. It learned that by getting tons and tons of data thrown at it: scenes where cars are in their natural habitat, the road. The problem is, when it was shown a picture of a car upside down in a pool, it couldn’t recognize that it was indeed a car. That’s a shame, because that’s what you really want: AI recognizing the odd thing out.’
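Van der Velde’s car-in-a-pool anecdote reflects a well-known limitation in machine learning: a model only learns what its training data shows it. One common, partial remedy practitioners use is data augmentation, artificially flipping and rotating training images so the model also sees unusual orientations. Below is a minimal, illustrative sketch of that idea using PyTorch’s torchvision library; the ‘car_photos/train’ dataset folder is hypothetical and not from the article.

```python
# Illustrative sketch: a classifier trained only on "cars on roads" can
# miss an upside-down car; data augmentation partially compensates by
# showing the model unusual orientations during training.
import torch
from torchvision import datasets, transforms

# Without augmentation, the model only ever sees cars in their
# "natural habitat": upright, on a road.
plain = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# With augmentation, random flips and rotations expose the model to
# unusual orientations, including fully upside-down cars.
augmented = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=180),
    transforms.ToTensor(),
])

# Hypothetical dataset folder, used here only to show where the
# augmented pipeline plugs into training.
train_set = datasets.ImageFolder("car_photos/train", transform=augmented)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
```

Note that augmentation only patches this particular blind spot; it does not give the model the human ability to decompose a scene into ‘a car, upside down, in a pool’, which is exactly the gap Van der Velde describes.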

SpongeBob

You could call it a gift: our human capacity for understanding a particular setting. We can identify it, analyze it, even notice when something’s wrong. As Professor Van der Velde says: ‘Our way of decomposing a situation is the key to human cognition.’ Even small children understand that something is wrong when a car is upside down in a pool and that it’s normal for a car to be driving on a road. Van der Velde adds another layer to that, one living in a pineapple at the bottom of the sea. ‘Children have no issue with the concept of SpongeBob SquarePants, no matter how absurd the image is. It’s easy for them to process, but very hard for AI.’

Could it be as simple as human instinct, evolved over years and years? The Professor of Technical Cognition thinks we have to look deeper. ‘What does our brain do when we decompose a situation? The question is not whether it’s human instinct, but, if it is, where that instinct comes from. Because something like that requires a different kind of computing architecture. Then it’s not just rerunning data in a huge deep learning network.’

Very emotional

Emotion detection is another big obstacle in developing AI. The challenge doesn’t lie only in training machines to recognize various emotions, but also in classifying what types of emotions we even have. ‘We still don’t know what emotions are,’ says Dirk Heylen. ‘Do we use the right categories to describe them? After all, there are many ways of being happy or sad. Some scientists try to solve this problem by using more data and seeing if the machine can classify it by itself; then they wait to see what classes the machine comes up with. In this way, AI teaches us about what makes us human.’ These words show, once again, that we could see AI as an expansion of our humanity, an extra brain we are developing, rather than an independent ‘creature’ that might fight us.
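The ‘let the machine classify it by itself’ approach Heylen mentions is known in machine learning as unsupervised clustering: the algorithm groups recordings by similarity without being told which emotions exist, and researchers then inspect which categories emerge. Here is a minimal sketch of that idea with scikit-learn; the feature matrix is random stand-in data, where a real study would use, for example, acoustic or facial features extracted from recordings.

```python
# Minimal sketch of unsupervised emotion clustering: no emotion labels
# are given; k-means simply groups similar recordings, and researchers
# inspect what the resulting classes look like. Features are stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 12))  # 500 recordings, 12 features each

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

# The machine's own "emotion classes": count how many recordings landed
# in each cluster, then inspect whether a cluster corresponds to a
# recognizable emotional state.
for label in range(6):
    print(f"cluster {label}: {np.sum(kmeans.labels_ == label)} recordings")
```

Whether those clusters correspond to ‘happy’, ‘sad’, or something in between is exactly the open question Heylen raises.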

If we accept this premise, we have to ask ourselves: where does a human stop? Where does AI begin? If technology is a part of us, where is the dividing line, or is there none? ‘It is true that, unlike AI, humans are not subject to any designer. Our starting point means there is a fundamental difference, but the boundary between humans and technology is shifting rapidly,’ answers Professor Verbeek. ‘How we live is completely influenced by technology, and at the same time we put more and more humanity into technology. It is possible that the two will meet at some point, at a point where there is no difference between the two anymore.’

'We are leading and machines are the ones that have to adjust.' - Gwenn Englebienne

Fake humans 

Can AI and humans truly become the same? After all, AI is defined as something that simulates human intelligence, and intelligence is one of the key elements of being human. Even Dirk Heylen admits that UT research in this field is focused on creating ‘fake humans’, machines with human abilities. ‘Many people say that if AI exists, we should grant it the same rights and obligations as humans have, but I think that AI will be quite different from us. It is still only a simulation of a human. AI doesn’t grow up, it doesn’t have the same social environment, it lacks the richness of experience that humans have. There is therefore a huge difference,’ adds Professor Heylen. His colleague Gwenn Englebienne feels the same way: ‘Technology is becoming more social and the intellectual distinction is disappearing, but physically, machines are completely different beasts from us and this difference will probably always be there. We’re still leading, and machines are the ones that have to help us and adjust to our way of life.’

Will my job still exist in 40 years?

By 2055, robots could take over half of the work we humans do. That’s one of the conclusions of the global management consulting firm McKinsey. It may sound frightening, but luckily there is a nuance behind these seemingly harsh numbers. The researchers distinguished between ‘jobs’ and ‘tasks’. So it’s not that entire jobs are disappearing; it’s that tasks within jobs disappear.

McKinsey also differentiated between job sectors. The food service, manufacturing and agriculture sectors are especially prone to losing tasks to robots. Generally speaking, fifty percent of a food service employee’s tasks consists of predictable physical work, so you can imagine that a robot is also up to those tasks. Do you work in education? Then your tasks are relatively safe. Transferring knowledge is, of course, easier said than done.

No competition

HMI researcher Gwenn Englebienne believes an evolutionary perspective will help us make sense of the dividing line. ‘Biological evolution started with very simple organisms competing with each other, organisms that wouldn’t stand a chance in this day and age. Now, introducing a new life form means interaction between enormously complex individuals and enormously complex systems.’

The world is filled with incredibly aggressive and competitive biological agents: humans, animals, you name it. ‘The only thing stronger than our hardwired will to survive is the will to have our species survive. That desire has grown evolutionarily. We don’t usually ask ourselves why we want to live; we just want it. AI doesn’t have this desire,’ says Englebienne. ‘In addition, the resources we need are vastly different. We need food and water to operate. AI just needs electricity. Robots are not our competition.’

Emoticon phase

If we’re not competing with AI, what will the societal impact be? How will it affect us as social beings? Englebienne thinks the ‘rules’ of the social games we play are very flexible, but we can’t force human nature to change. ‘Our social norms originate from different factors. Some are evolutionary, like our fear of loneliness and our body language. Others are quite rational. You don’t cough in someone’s face, for instance. There are also conventions that only help to distinguish specific in-group and out-group situations; those are usually origin-related. There are a lot of unwritten rules to our social behavior and we need to analyze them more if we want AI to mingle in our very own dynamic social game.’

Considering that many people could be uneasy when interacting with AI, it might not even be desirable to erase the distinction between us and machines. ‘If a robot is too humanlike, people don’t like it; it scares them,’ Professor Verbeek elaborates. ‘It has been shown that it is better for the technology not to be extremely humanlike. It seems people prefer robots to stay in an “emoticon phase”, in which they don’t resemble humans too much.’

AI winter is coming?

Besides decomposing situations, emotion detection and refined motor skills, there is another challenge for AI to overcome: power. ‘While the human brain can function on only 20 watts, the amount of power a robot needs is staggering. The standard way of computing is a back and forth of information to get to the right answer. The more complex the information, the more energy it costs. I don’t believe that’s how we humans process information,’ clarifies Van der Velde.

‘The way we’ve always done it has a bottleneck, and we’re pushing the limits of AI towards it,’ he states. So, are we heading towards an ‘AI winter’, as he calls it? ‘We could be. On the one hand, we see AI beating humans; that is phenomenal. On the other hand, there is still a lot of failure.’ The professor suggests we may even have to take one step back to make a leap forward. ‘First, we need to fully understand how we, humans, do it. Robot hardware and software should be more similar to the human brain.’

Even if that is achieved, none of the interviewed UT scientists is worried about a robot uprising and AI taking over the world. However, most of them agree that caution is necessary. ‘We need to build responsible AI,’ says Professor Heylen. ‘We should think about what level of autonomy AI applications should have, so they are still helpful for us, and we have to think about who bears responsibility for AI applications such as self-driving cars.’

Moral machines

Given the choice, would you kill a sweet old lady, a sweet little puppy or maybe even yourself? Imagine that, in the near future, a self-driving car will have to make that terrible decision in the case of an imminent crash. That is why MIT researchers created a ‘game’ called the Moral Machine, a platform for gathering the human perspective on machine ethics. It’s up to you to make the decisions.

Safety measures

‘We need to engage with the technology and point out that there is a risk. We have to take safety measures, but it would be a pity to reduce the ethics of AI to the danger of becoming extinct,’ thinks Peter-Paul Verbeek. ‘My worry isn’t whether there are robot teachers or robot nurses, but rather how teaching and hospital care will change through this technology. We have to think about our ideals and values and see how robots fit into them, because technology can be deeply disruptive.’

'It would be a pity to reduce the ethics of AI to the danger of becoming extinct.' - Peter-Paul Verbeek

No matter how you look at it, AI comes with a societal impact. In a sense, the AI revolution has parallels with the industrial revolution, Englebienne believes. ‘I don’t see the singularity happening. What I do see happening is AI taking over more and more tasks, and that presents deep challenges to society.’ Van der Velde adds: ‘The impact AI will have depends on how people react to it. AI could threaten jobs; it could also create jobs. Take Pong, one of the very first video games, for example, and look at how big the gaming industry is nowadays. You see stores disappearing from the streets and moving online. Instead of people plowing fields, farmers now use tractors. The point is: we humans have always found a way to adapt, only the pace is much more rapid nowadays. What it comes down to is the ability to work with AI, to cooperate with it and have it add value to our lives.’

Will we coexist in harmony? Will ‘Clyde’ be our ally or our enemy? We honestly don’t know. Yet. Researchers will keep pushing boundaries and, in the meantime, it is vital to never stop asking questions, because ‘life’ will always find a way.

Experts who contributed to this article

  • Dirk Heylen, Professor of Socially Intelligent Computing, EWI faculty.
  • Peter-Paul Verbeek, Professor of Philosophy of Technology, BMS faculty.
  • Frank van der Velde, Professor of Technical Cognition, BMS faculty.
  • Gwenn Englebienne, Assistant Professor with a focus on automatic perception of human behavior, EWI faculty.

