Monday, December 15, 2014

Speech - Kevin LaZur

If you were to visit the UK Chandler Hospital or the Kentucky Clinic, you might see a machine named “Bot 1” or “Bot 2” rolling down the hallway. These box-shaped robots carry lab materials from one building to another. They can call elevators, steer themselves through the hallways, and will stop and offer a polite “excuse me” if they detect someone in their way. Should we be scared that robots already behave exactly like the common, orderly office worker? We shouldn’t; we should instead get comfortable with the growing idea of giving robots a place in our human society, because Bot 1 and Bot 2 are just the beginning.
Robots have more potential than transporting items from point A to point B with added politeness. For decades, humans have dreamed of sharing a world with androids that can function autonomously. Directors and authors have expressed this vision in countless books, TV shows, and movies in which robots are important, righteous characters, as in Star Wars and WALL-E. Other films imagine dystopian societies in which self-aware robots ultimately bring about humanity’s demise, as in I, Robot and The Terminator. Both of these predictions about the future of robotics are extreme, but the shared idea is that eventually we will live with robots, not above them. And right now we are on the cusp of human-robot interaction.
Robots can interact with humans by exhibiting responses to our actions. Dr. Cynthia Breazeal of MIT created a robot named Kismet, which showcases the possibility of emotional responses in machines. It has all the basic human facial features: eyes, mouth, nose, and so on. Kismet can recognize a variety of social cues from visual and auditory inputs, and it responds by shifting its gaze and making a fitting expression. If you were to yell at Kismet, it would lower its gaze and make a pouty face.
So why would we want to research how to make robots respond to humans? This isn’t just about the specific act of responding. Smiling when someone is kind, or raising your eyebrows when you are surprised: these responses are dictated by emotion. Emotion has three components: cognitive, physiological, and behavioral. We detect stimuli, or “sensory input,” and route that information through complex neural circuits in the brain, which in turn shape our behavior and our speech. Research on sociable machines such as Kismet is about programming robots to produce particular behavioral and verbal responses to different stimuli. Detecting the stimulus is the cognitive component of emotion, and the robot’s reaction is the behavioral component. But consider that the way robots process this sensory input might bear some resemblance to human physiological processes. To know more, we would need a more complex robot than is possible right now.
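The cognitive-then-behavioral pipeline described above can be pictured as a simple lookup from stimulus to internal state to expression. The sketch below is purely illustrative; the stimulus names and expressions are invented for the example and are not Kismet’s actual software, which is far more elaborate.

```python
# Toy sketch of a Kismet-style stimulus-response loop (illustrative only).
# The stimulus table and expression names here are invented assumptions.

def appraise(stimulus):
    """Cognitive component: map a detected stimulus to an internal state."""
    appraisals = {
        "harsh_voice": "scolded",
        "soft_voice": "soothed",
        "fast_motion": "startled",
        "face_nearby": "engaged",
    }
    return appraisals.get(stimulus, "neutral")

def express(state):
    """Behavioral component: map an internal state to an outward response."""
    expressions = {
        "scolded": "lower gaze, pout",
        "soothed": "raise gaze, smile",
        "startled": "widen eyes, lean back",
        "engaged": "track face, perk ears",
        "neutral": "idle scan",
    }
    return expressions[state]

print(express(appraise("harsh_voice")))  # lower gaze, pout
```

Yelling at the robot (a “harsh_voice” stimulus) flows through the appraisal step and comes out as the lowered gaze and pout described above.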
How are we going to create a complex robot that can respond to any of the limitless scenarios one can experience in our world? Hod Lipson and his team at Cornell have an interesting idea. In an experiment, they gave a small four-legged machine, resembling a robotic spider, the task of figuring out how to move itself. First, the spider bot built a self-model by testing what it was physically capable of. Once it had this model of its own body, its goal became to move its parts in tandem with one another to produce locomotion. Through trial and error, the robot learned an efficient way to move itself. As humans, we spend much of our infancy learning to do this very same thing.
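The trial-and-error idea can be sketched as a search over candidate gaits, keeping whichever one travels farthest. This is a minimal stand-in, not Lipson’s actual self-modeling algorithm: the “physics” here is an invented scoring function that rewards strong, coordinated leg motion.

```python
import random

# Toy trial-and-error gait search (a sketch of the idea, not Lipson's method).
# A "gait" is four joint amplitudes; the simulated distance is an assumption.

def distance_traveled(gait):
    """Stand-in physics: legs moving in tandem (similar amplitudes) go farther."""
    mean = sum(gait) / len(gait)
    spread = sum((a - mean) ** 2 for a in gait)
    return mean - spread  # reward strong, coordinated motion

def learn_gait(trials=2000, seed=0):
    """Try random gaits and keep the best one found so far."""
    rng = random.Random(seed)
    best_gait, best_score = None, float("-inf")
    for _ in range(trials):
        gait = [rng.uniform(0.0, 1.0) for _ in range(4)]  # four legs
        score = distance_traveled(gait)
        if score > best_score:  # keep only what worked better
            best_gait, best_score = gait, score
    return best_gait, best_score

gait, score = learn_gait()
```

No one tells the machine which gait is correct; it simply keeps whatever moved it farther, which is the essence of learning by trial and error.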
What Lipson and his team have shown is that robots don’t need to be controlled in order to adopt new behaviors; they need self-awareness in order to control themselves. If you let a machine make mistakes but give it a goal, it will be conditioned to eventually find the best way of reaching that goal. Now suppose our goal is to create a robot that can interact socially with humans. We can give the robot every response we know, but we cannot control what is said to it. Through this same kind of reinforcement, we can let the robot learn how to interact with humans by trial and error. When it detects an emotional response from whomever it interacts with, the robot learns that this is the effect of its behavior on the human. Like a child, a robot can learn correct social behavior through enough exposure to other people. We just need to allow it to be self-aware.
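That reinforcement loop can be sketched as a robot trying canned responses and keeping a running estimate of how each one lands. Everything here is an assumption for illustration: the situations, the response list, and the simulated human are invented, and no real system works from a table this small.

```python
import random

# Minimal sketch of learning social responses by reinforcement (a toy,
# not any real system). The robot tries responses and updates a running
# value estimate from the human's emotional reaction.

RESPONSES = ["smile", "frown", "look_away", "nod"]

def human_reaction(situation, response):
    """Stand-in human: rewards the response that fits the situation."""
    fitting = {"greeting": "smile", "criticism": "nod"}
    return 1.0 if fitting[situation] == response else -1.0

def learn_responses(episodes=500, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    values = {(s, r): 0.0 for s in ("greeting", "criticism") for r in RESPONSES}
    counts = dict.fromkeys(values, 0)
    for _ in range(episodes):
        situation = rng.choice(["greeting", "criticism"])
        if rng.random() < epsilon:  # explore: try anything
            response = rng.choice(RESPONSES)
        else:                       # exploit: best response so far
            response = max(RESPONSES, key=lambda r: values[(situation, r)])
        reward = human_reaction(situation, response)  # emotional feedback
        k = (situation, response)
        counts[k] += 1
        values[k] += (reward - values[k]) / counts[k]  # running average
    return values

values = learn_responses()
```

After enough episodes, the learner settles on smiling at greetings and nodding at criticism, purely from the reactions it observed, just as the paragraph above describes a child learning through exposure.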
Some people do not want robots to reach this point. There exists a selfish fear of robots: people do not want intelligent machines to act fundamentally the same as humans. The philosopher Emil Cioran addresses this fear beautifully: “Man is a robot with defects.” Robots can never learn the drive to be human: the biological motivations that tell us life is our privilege and death is to be avoided at all costs, or the social motivations, like the need to excel and be proud of our accomplishments, and the need to care for others and to be cared for in turn. Robots do not possess life. They have no need for self-actualization in society. So we shouldn’t be afraid of robots becoming intellectual equals to humans. Instead, we should be afraid of robots surpassing us.
But do not fear the progress of artificial intelligence; instead, embrace the opportunity it presents. Automatons will adapt faster than we do, and we want to cooperate with them in order to reap the knowledge of these machines we have borne into the world. By working on sociable machines this early, we are already creating a degree of communication between robots and ourselves. We are giving them the social tools they need to maintain societal bonds with humanity once they can rival our physiological complexity. Do not be afraid of these machines, which are unburdened by mortality.
   
