The Androids Are Here

Her name is Nadine. She is a receptionist at Nanyang Technological University in Singapore.

Researchers spent four years giving a Siri-like computer a physical form, integrating linguistics and psychology into her programming to make her “emotionally intelligent.” She has a distinct personality, can express — if not yet feel — emotions, can remember meeting people and prior conversations, and reacts to human beings in a surprisingly natural way.

Developer Nadia Thalmann, whom Nadine is meant to resemble, said the human-like appearance is meant to help people relate to her: “This is somewhat like a real companion that is always with you and conscious of what is happening. So in future, these socially intelligent robots could be like C-3PO…with knowledge of language and etiquette.”

By any standard, Nadine certainly looks more human than C-3PO of Star Wars lore, even if his personality was bolder.

Nadine is only the latest development in the effort to bring robots to life, opening up new possibilities that excite some and disturb others. Top of mind is the capacity of robots to enter the labor force and displace human workers. As astrophysicist Stephen Hawking points out, this will change society in a positive way either for everyone (if ordinary people can profit from the use of machines by working fewer hours for the same wage, or by receiving a guaranteed basic income from the State) or only for the few who own the workplaces and the robots and find human laborers expendable. Hawking says:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Indeed, just today an article in The Guardian entitled “When Robots Do All the Work, How Will People Live?” notes that robots could eliminate up to 11 million jobs in the U.K. in the next ten years.

Though it is unlikely she is replacing a human worker, a Toshiba humanoid robot recently debuted as a temporary employee in a Japanese department store. She greets customers and can be programmed to speak multiple languages. She even sings.

Another fear, that of weaponized robots intelligent enough to execute deadly directives according to their programming rather than human commands, raises further ethical questions. Today, Discovery reported that a deputy assistant secretary of defense for the U.S. military suggested that if an AI drone or other machine behind enemy lines has its communications disrupted, it may be valuable to grant it the autonomy to make its own decisions.

In July 2015, a group of prominent scientists and technology leaders called for “a ban on offensive autonomous weapons beyond meaningful human control,” writing:

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

The statement was signed by Stephen Hawking, American intellectual Noam Chomsky, Elon Musk of Tesla, Steve Wozniak of Apple, and others.

Unlikely to dissipate are fears that once an AI system is advanced enough, it will turn on its creators. One robot in the U.S. sent people like a writer for Anonymous into a near-panic when, asked if robots would take over the world, it replied, “…don’t worry, even if I evolve into terminator I will still be nice to you, I will keep you warm and safe in my people zoo where I can watch you for old time’s sake.”

Researchers are making astonishing strides in increasing robotic intelligence. In October 2015, a so-called “psychic robot” was completed by U.S. bioengineers. It can “calculate our intentions based on our previous activity,” as Science Alert reports. In July 2015, a robot (this one looks nothing like a human) at the Rensselaer Polytechnic Institute solved a “self-awareness” test for the first time in history. In this test, the robot makes a discovery and changes its mind based on new data. Science Alert writes that three

…robots are each given a ‘pill’ (which is actually a tap on the head, because, you know, robots can’t swallow). Two of the pills will render the robots silent, and one is a placebo. The tester, Selmer Bringsjord, chair of Rensselaer’s cognitive science department, then asks the robots which pill they received.

There’s silence for a little while, and then one of the little bots gets up and declares “I don’t know!” But at the sound of its own voice it quickly changes its mind and puts its hand up. “Sorry, I know now,” it exclaims politely. “I was able to prove that I was not given the dumbing pill.”

This is not the self-awareness or consciousness a human possesses, as the robots were programmed to respond to a given set of rules, but “for robots, this is one of the hardest tests out there. It not only requires the AI to be able to listen to and understand a question, but also to hear its own voice and recognize that it’s distinct from the other robots. And then it needs to link that realization back to the original question to come up with an answer.”
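The inference at the heart of the test can be illustrated with a toy simulation. This is purely a hypothetical sketch of the reasoning involved (not the Rensselaer team's actual code): each robot attempts to answer aloud, and only a robot that hears its own voice has the new evidence needed to conclude it was not silenced.

```python
# Toy sketch of the "dumbing pill" test's logic (hypothetical, for
# illustration only). Two robots get a silencing pill, one a placebo.
# A silenced robot produces no sound; the placebo robot hears its own
# voice and can deduce which pill it received.

def run_test(pills):
    """pills: a list of 'dumb' or 'placebo', one entry per robot.

    Returns each robot's final answer, or None if it stayed silent.
    """
    answers = []
    for pill in pills:
        # Every robot attempts to say "I don't know!"
        spoke = (pill != 'dumb')  # silenced robots make no sound
        if spoke:
            # Hearing its own voice is new data: the robot links the
            # sound back to the original question and updates its answer.
            answers.append("I was not given the dumbing pill.")
        else:
            answers.append(None)  # silent robot; no conclusion possible
    return answers

print(run_test(['dumb', 'dumb', 'placebo']))
```

In this sketch, only the third robot produces an answer: it is the one whose attempted speech succeeds, which is the extra self-referential step (recognizing its own voice as its own) that makes the test hard for machines.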

In Japan, a home companion robot named Pepper, which also looks more like a toy than a human, can detect and respond to human emotions. It went on the market in 2014 for about $2,000. But in Beijing in November 2015, developers revealed a startlingly lifelike humanoid robot that can also “gauge mood.”

This may come in handy when the sex robots hit the market.