Advocates of the concept known as the singularity envision a future in which humans and technology fully converge, but a keynote speaker at the World Future Society conference voiced skepticism about the idea, citing the complexities of the human mind.
Proponents of the singularity claim that within 20 years, nanotechnology implanted in people will repair wounds and advanced robots will assist with daily tasks. The concept ultimately calls for people to transcend the limits of biology by using technology to develop into something more advanced and intelligent than human genetics allows.
Wendell Wallach, a scholar at Yale University’s Interdisciplinary Center for Bioethics, supports technology but labels himself a “friendly skeptic” on this marriage of people and machines.
While he is “excited by where the science will take us,” Wallach, who spoke Thursday at the World Future Society in Boston, is a “skeptic because we don’t know enough about humans to pull it off.”
Wallach’s critique of singularity focused on areas including understanding the intricacies of the mind, the complexities of developing robots with morals and the question of who is responsible when a robot’s morals prove problematic.
The singularity movement holds that the evolution of the computer will lead to further development of the human mind, since the mind, in this view, is also a computer.
Wallach countered that the brain engages in massive parallel thinking and that researchers do not fully grasp how this part of the body operates. He contrasted that ability with a computer, in which "one bit is out of place and Windows locks up."
He also said that computers face barriers in dealing with vision, language and locomotion.
“We don’t know which of these challenges we’ll master in 20 years. Some will be ceilings,” he said.
Even if the body’s detailed biological interactions can be replicated in a machine, computers may require a consciousness to complete tasks, Wallach said. However, we do not fully appreciate the complexities of humans, so we do not know how difficult it will be to instill consciousness in a computer, he said.
In addition, as robots handle more autonomous tasks they may require morals and social skills, Wallach said.
Introducing morals into machines raises the issues of whose morals are used and how machines would learn them. Programming morals may make robots inflexible, while allowing a machine to learn on its own by experience could overwhelm the device when it needs to make a decision.
Social skills would prove valuable if, for instance, a robot were assigned to deliver medicine to a patient and could tell whether the person was scared, Wallach said.
Creating a robot with social skills and morals raises the prospect of creating robots that are essentially human. If a robot looks human and holds values, does it have the same rights as humans?
“How do you punish a robot? What do you do? Pull out its plug? Its battery?” he said.
And when a robot’s morals fail it, resulting in injury or death, who is responsible for the lapse? Wallach is clear that the onus should lie with the device’s creator.
“The complexity of computers doesn’t absolve creators from the effects of the technology,” he said.
Despite the issues that accompany technology, society cannot stop its development, Wallach said. However, using technology to give people abilities not found in their genes raises the question of whether the process will lead to enhanced evolution or de-evolution, he said.
“Are we inventing the human species as we know it out of existence?” Wallach said.
An assessment of technology can determine risks and rewards, Wallach said. However, “risk assessment tools are very weak” and he proposed the creation of a system that determines when “near dangers are on the horizon.”
While robots with morals present long-term technology issues, in the near term Wallach calls for a deeper look at how humans and technology are developing.
“Everyone acknowledges that we are in the midst of a huge technology and human shift,” he said. “We have no one looking at this comprehensively. We need to think about the various terms over which this technological development will take place: near term, long term.”