Tuesday, October 23, 2018

How to Make a Robot Use Theory of Mind

From ScientificAmerican.com (Aug. 17):

Imagine standing in an elevator as the doors begin to close and suddenly seeing a couple at the end of the corridor running toward you. Even before they call out, you know from their pace and body language they are rushing to get the same elevator. Being a charitable person, you put your hand out to hold the doors. In that split second you interpreted other people’s intent and took action to assist; these are instinctive behaviors that designers of artificially intelligent machines can only envy. But that could eventually change as researchers experiment with ways to create artificial intelligence (AI) with predictive social skills that will help it better interact with people.

A bellhop robot of the future, for example, would ideally be able to anticipate hotel guests’ needs and intentions based on subtle or even unintentional cues, not just respond to a stock list of verbal commands. In effect it would “understand”—to the extent that an unconscious machine can—what is going on around it, says Alan Winfield, professor of robot ethics at the University of the West of England in Bristol.

Winfield wants to develop that understanding through “simulation theory of mind,” an approach to AI that lets robots internally simulate the anticipated needs and actions of people, things and other robots—and use the results (in conjunction with preprogrammed instructions) to determine an appropriate response. In other words, such robots would run an on-board program that models their own behavior in combination with that of other objects and people.

“I build robots that have simulations of themselves and other robots inside themselves,” Winfield says. “The idea of putting a simulation inside a robot… is a really neat way of allowing it to actually predict the future.”
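To make the idea concrete, here is a minimal sketch in Python (not Winfield’s actual code, and with numbers invented for the elevator scene from the opening paragraph): the robot keeps a crude internal model of another agent, rolls that model forward in time, and combines the prediction with a preprogrammed rule to decide what to do.

```python
# Illustrative sketch of "simulation theory of mind": the robot simulates a
# simple model of another agent and acts on the predicted outcome.
# All names and numbers here are invented for the elevator example.

from dataclasses import dataclass

@dataclass
class Agent:
    distance: float  # metres from the elevator doors
    speed: float     # metres per second toward the doors

def simulate(agent: Agent, horizon: float, dt: float = 0.1) -> float:
    """Roll the internal model forward; return the agent's distance at the horizon."""
    distance = agent.distance
    for _ in range(round(horizon / dt)):
        distance -= agent.speed * dt  # constant-speed model of the other agent
    return distance

def choose_action(observed: Agent, doors_close_in: float) -> str:
    """Preprogrammed rule: hold the doors if the simulated agent reaches them in time."""
    return "hold doors" if simulate(observed, doors_close_in) <= 0.0 else "let doors close"

# A couple 6 m away, running at 2 m/s; the doors close in 4 s.
print(choose_action(Agent(distance=6.0, speed=2.0), doors_close_in=4.0))  # -> hold doors
```

The decision falls out of running the internal model forward, not out of matching a stock verbal command, which is the distinction the article is drawing.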

“Theory of mind” is the term philosophers and psychologists use for the ability to predict the actions of self and others by imagining ourselves in the position of something or someone else. Winfield thinks enabling robots to do this will help them infer the goals and desires of agents around them—like realizing that the running couple really wanted to get that elevator.

This differentiates Winfield’s approach from machine learning, in which an AI system may use, for example, an artificial neural network that can train itself to carry out desired actions in a manner that satisfies the expectations of its users. An increasingly common form of this is deep learning, which involves building a large neural network that can, to some degree, automatically learn how to interpret information and choose appropriate responses. [read more]
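For contrast, here is an equally rough sketch of the learning-based alternative the article mentions: no internal simulation, just a simple logistic-regression-style model (standing in for a neural network) fitted to labelled examples that map observed cues directly to an action. Everything here, including the synthetic data, is invented for illustration.

```python
# Illustrative sketch of the learned alternative: fit a model to examples of
# (cues -> hold / don't hold) instead of simulating the other agent.

import numpy as np

rng = np.random.default_rng(0)

# Cues: [distance_m, speed_m_per_s, seconds_until_doors_close]
X = rng.uniform([0.0, 0.1, 1.0], [20.0, 3.0, 6.0], size=(500, 3))
y = (X[:, 0] / X[:, 1] < X[:, 2]).astype(float)   # 1 = the runner would make it

mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma                              # standardise the features

w, b = np.zeros(3), 0.0
for _ in range(2000):                              # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))
    grad = p - y
    w -= 0.5 * (Xn.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

cue = (np.array([6.0, 2.0, 4.0]) - mu) / sigma     # the running couple again
prob_hold = 1.0 / (1.0 + np.exp(-(cue @ w + b)))
print("hold doors" if prob_hold > 0.5 else "let doors close")
```

Here the mapping from cues to action is learned from data rather than built from an explicit model of the other agent’s behavior.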
