AI Is Here
State-of-the-art artificial intelligence (AI) recently took the spotlight when IBM’s Watson supercomputer routed human competitors on the game show Jeopardy.
But when it comes to AI, Watson is just the tip of the virtual frontal lobe. Labs across the United States and around the world are exploring much more than just ways to outwit Ken Jennings. Scientists are teaching robots to explore extraterrestrial planets and serve you coffee, cars are learning to drive themselves, computers are trying to assist doctors with medical diagnoses, and video game soldiers are training to do battle in a virtual theater of war.
Check out this look at some of the latest uses and breakthroughs in artificial intelligence, including a few AI toys that you can try out for free at home.
Smart Robot

You may not know its name, but you are probably familiar with Honda’s humanoid ASIMO (Advanced Step in Innovative MObility), which debuted in 2000. ASIMO stands 4.26 feet tall, weighs 119 pounds, has about 1 hour of battery life, and can run as fast as 3.72 miles per hour.
But ASIMO’s most impressive features are its AI capabilities, first unveiled in 2005 and upgraded in 2007. ASIMO can recognize moving objects, which allows it to follow and greet people, steer around stationary obstacles, and avoid people or other moving objects that cross its path. You can point, and ASIMO will walk to that location; it will wave back at someone and shake hands if you raise your hand in greeting. ASIMO also has facial-recognition technology, which allows it to identify and greet people by name. Finally, a group of ASIMO robots can work together as a team, performing tasks such as delivering coffee to people seated at a table. ASIMO is still in the research and development phase, but Honda hopes to one day have a robot that can help people in their everyday lives. Of course, ASIMO isn’t perfect, as this video from 2006 demonstrates.
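Curious how a robot can spot a face in the first place? You can tinker with the same basic building block at home using the free OpenCV library. The sketch below is my own toy example, not Honda’s software; it assumes the opencv-python package and a webcam:

    # Minimal face-detection sketch using OpenCV's bundled Haar cascade.
    # A toy illustration of the "greet people by sight" idea; this is
    # not Honda's ASIMO software.
    import cv2

    # Load a pretrained frontal-face detector that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()

Detecting a face is only the first step, of course; recognizing whose face it is, as ASIMO does, requires a second model trained on known people.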
Wall-E on Mars

Mars may not harbor intelligent life, but the Red Planet does have some artificial intelligence. In 2010, NASA scientists uploaded AI software to the only operational Mars rover, “Opportunity,” a robot exploring the planet’s surface. Opportunity can now decide on its own whether to stop and analyze rocks it encounters, provided the objects meet predefined criteria such as shape and color. The software, called Autonomous Exploration for Gathering Increased Science (AEGIS), has been in development since 2004, and NASA hopes to use AEGIS in future space missions.
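At its heart, that decision is a filter: score each rock the cameras pick out against the target criteria, and stop only for high scorers. Here’s a toy sketch of the logic in Python; the criteria, weights, and threshold are invented for illustration and are not NASA’s actual AEGIS parameters:

    # Toy sketch of criteria-based target selection, loosely inspired
    # by AEGIS. Criteria, weights, and threshold are invented.

    def rock_score(rock):
        """Score a detected rock against predefined criteria."""
        score = 0.0
        if rock["shape"] == "angular":  # hypothetical preference
            score += 0.5
        if rock["color"] == "dark":     # hypothetical preference
            score += 0.3
        score += min(rock["size_cm"], 20) / 20 * 0.2  # bigger is better, capped
        return score

    def pick_targets(rocks, threshold=0.6):
        """Return only the rocks worth stopping to analyze."""
        return [r for r in rocks if rock_score(r) >= threshold]

    rocks = [
        {"shape": "angular", "color": "dark", "size_cm": 12},
        {"shape": "rounded", "color": "light", "size_cm": 4},
    ]
    print(pick_targets(rocks))  # only the first rock clears the bar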
Watson in Medicine

IBM’s DeepQA supercomputer, Watson, amazed the world when it defeated two human champions on the trivia game show Jeopardy. Watson isn’t slowing down now that it has retired from the game show circuit, though. The supercomputer’s technology will be used as a data analytics engine to help doctors sift through medical information and find the best treatments for illnesses. IBM is working with Nuance Communications to add voice-recognition technology to the system, and hopes to have the new feature operational by late 2012. IBM has also partnered with the University of Maryland and the Columbia University Medical Center to test Watson’s capabilities in the medical field.
NELL

Researchers at Carnegie Mellon University are hoping to teach a computer to understand the world by “reading” the Web. The Never-Ending Language Learner (NELL) is a computer system that has been running nonstop since January 2010 and has built up a database of more than 500,000 facts so far. The basic idea is that NELL crawls the Web every day and tries to extract facts from text: it analyzes Web pages and pulls out statements it believes to be factually true, such as “Manila is the capital city of the country Philippines.” It then adds those beliefs to its database to improve its comprehension when it goes back out onto the Web the next day.
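At its simplest, that kind of fact extraction comes down to matching sentence patterns and storing the results as subject-relation-object “beliefs.” Here’s a heavily simplified sketch of the idea; NELL’s real system couples many learners and tracks a confidence score for every belief, while this matches just one hand-written pattern:

    # Heavily simplified sketch of pattern-based fact extraction in
    # the spirit of NELL. Real NELL couples many learners and scores
    # its beliefs; this matches a single hand-written pattern.
    import re

    PATTERN = re.compile(
        r"(\w[\w ]*?) is the capital city of (?:the country )?(\w[\w ]*)")

    beliefs = set()  # the growing knowledge base of (subject, relation, object)

    def extract_facts(text):
        for match in PATTERN.finditer(text):
            city, country = match.group(1).strip(), match.group(2).strip()
            beliefs.add((city, "capital_of", country))

    extract_facts("Manila is the capital city of the country Philippines.")
    extract_facts("Paris is the capital city of France.")
    print(sorted(beliefs))
    # [('Manila', 'capital_of', 'Philippines'), ('Paris', 'capital_of', 'France')]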
NELL is running on a Dell R710 server with two quad-core 2.67GHz Xeon 5550 processors. However, the software is capable of running on as little as one quad-core processor, 8GB RAM, and 100GB free disk space, says Bryan Kiesel, a research programmer for the NELL team. NELL also depends on supercomputer clusters to process and filter the data that it collects during its daily Web outings. If you’d like to follow NELL’s progress, you can see its most recent beliefs by following NELL on Twitter or by visiting NELL’s homepage.
Shared Control

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have created a new wheelchair technology for quadriplegic patients. Called “shared control,” this technology uses artificial intelligence to let users drive the wheelchair with their thoughts alone. The user wears a skullcap that translates brain signals into wheelchair commands, allowing a person to move forward, turn left or right, stop, and so on.
That can get mentally tiring, however; you don’t want your thoughts to drift and send the chair rolling into the middle of the road. To get past that limitation, the wheelchair uses AI (this is where the “shared” part of shared control comes in) to take care of low-level details such as maintaining speed and direction. The chair’s AI also helps the user avoid running into stationary objects, though the user can override it to approach a table, counter, or other stationary object.
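Conceptually, shared control blends a high-level command decoded from the user with low-level behavior computed by the chair. Here’s a stripped-down sketch of that split; the command set, speeds, and distances are illustrative stand-ins, not the EPFL team’s actual controller:

    # Stripped-down sketch of "shared control": the user supplies a
    # high-level intent, and the chair handles low-level details such
    # as holding speed and stopping short of obstacles. All values
    # here are illustrative.

    CRUISE_SPEED = 1.0  # m/s, maintained by the chair, not the user

    def shared_control(user_intent, obstacle_ahead_m, override=False):
        """Combine a decoded brain command with low-level safety logic."""
        if user_intent == "stop":
            return {"speed": 0.0, "turn": 0.0}
        # Safety layer: stop near obstacles unless the user deliberately
        # overrides (say, to pull up to a table).
        if obstacle_ahead_m < 0.5 and not override:
            return {"speed": 0.0, "turn": 0.0}
        turn = {"left": -0.5, "right": 0.5}.get(user_intent, 0.0)
        return {"speed": CRUISE_SPEED, "turn": turn}

    print(shared_control("forward", obstacle_ahead_m=3.0))  # cruise ahead
    print(shared_control("forward", obstacle_ahead_m=0.3))  # safety stop
    print(shared_control("forward", 0.3, override=True))    # approach a table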
Knight Rider

Driverless cars are still in the early prototype phase; nevertheless, these vehicles do amazingly well using AI to navigate roads. Google made headlines when it revealed a driverless-car system that had logged more than 140,000 miles on California roads as of October 2010. The cars use a combination of video cameras, radar sensors, a laser range finder, and AI to navigate the road.
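One small piece of that puzzle is fusing noisy distance readings from different sensors into a single estimate. The toy sketch below shows a standard inverse-variance weighting trick; the readings and error figures are invented, and Google’s actual system is vastly more sophisticated:

    # Toy illustration of fusing noisy range readings from several
    # sensors into one estimate. The readings and error figures are
    # invented; this is not Google's system.

    def fuse_ranges(readings):
        """Inverse-variance weighted average of (distance_m, std_dev_m) pairs."""
        num = sum(r / (s ** 2) for r, s in readings)
        den = sum(1 / (s ** 2) for _, s in readings)
        return num / den

    readings = [
        (25.2, 0.1),  # laser range finder: very precise
        (24.8, 0.5),  # radar: moderately precise
        (26.0, 2.0),  # camera-based estimate: noisy
    ]
    print(round(fuse_ranges(readings), 2))  # ~25.19, weighted toward the laser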
Sebastian Thrun, who heads Google’s driverless-car research team, has also done impressive work with autonomous cars at Stanford University. In 2007 the Stanford racing team took second place in the DARPA Urban Challenge, put on by the Defense Advanced Research Projects Agency; the event required driverless cars to navigate simulated urban traffic and perform tasks such as merging, passing, parking, and negotiating intersections. Stanford’s autonomous-car program also won DARPA’s 2005 Grand Challenge, which required cars to navigate 132 miles of desert terrain without human intervention.
Train a Robot Army

If you want some hands-on time with artificial intelligence from the comfort of your own home, try NERO (NeuroEvolving Robotic Operatives), a game developed by the computer science department at the University of Texas at Austin.
NERO requires you to train a robot army and then deploy that army to wrest control of Gliese 581 c, an Earth-like planet rich in natural resources, from a competing army run by an intelligent machine. Your robots have no built-in skills, so training them is essential; as you train these virtual robots, they improve their skills and become a more capable, battle-ready army. Your soldiers cannot learn on the battlefield, only through training exercises, says Kenneth Stanley, a researcher who worked on NERO and is now an assistant professor at the University of Central Florida. You can download NERO 2.0, and an open-source successor to it is in development at Google Code.
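Under the hood, NERO grew out of Stanley’s neuroevolution research (the rtNEAT algorithm), in which a population of neural-network “brains” improves through rounds of selection and mutation. The bare-bones sketch below shows that evolutionary loop with an invented fitness function standing in for a training drill; real NEAT also evolves the networks’ structure:

    # Bare-bones sketch of the evolutionary loop behind games like
    # NERO. Real neuroevolution (e.g., Stanley's NEAT) evolves network
    # topology too; here each "soldier" is just a weight vector and
    # the fitness function is an invented stand-in for a drill.
    import random

    TARGET = [0.8, -0.2, 0.5]  # hypothetical ideal behavior parameters

    def fitness(genome):
        """Higher is better: closeness to the ideal behavior."""
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:5]  # keep the best soldiers
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(15)]  # breed mutated copies
    population.sort(key=fitness, reverse=True)
    print("best soldier:", [round(g, 2) for g in population[0]])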
It’s Watching You

Vitamin D, a California-based video security company, uses smart technology to figure out when your security cameras record people and other moving objects. Then, it distills your daily security feed into a highlight reel showing moments when the software captured moving objects. This way you don’t have to spend hours scanning security recordings for suspicious activity.
Vitamin D says its technology is designed to disregard unimportant movements such as a tree branch waving in the wind.
The security technology is based on a form of AI called Hierarchical Temporal Memory (HTM), developed by Numenta, a company founded by Jeff Hawkins, creator of the PalmPilot and the Treo. HTM is a software system modeled on the human neocortex; instead of following rules-based programming (as conventional software does), it learns from large feeds of data. An HTM program can then find patterns, such as a human being moving through a video frame, and learn to respond the way you want it to. If you’d like to try a practical use of HTM, you can download a free version of Vitamin D Video for Windows or Mac. The software works with most off-the-shelf Webcams, and in my brief tests Vitamin D Video worked as promised.
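Numenta’s HTM itself is proprietary, but the basic job Vitamin D performs (flag frames with meaningful motion, ignore noise such as a swaying branch) can be approximated with simple background subtraction. The rough sketch below uses OpenCV purely to illustrate the motion-filtering task; it is not Numenta’s HTM algorithm:

    # Rough sketch of motion filtering via background subtraction in
    # OpenCV. This illustrates the task (flag substantial motion,
    # ignore small jitter); it is not Numenta's HTM.
    import cv2

    MIN_AREA = 2000  # ignore tiny movements, e.g., a swaying branch (tunable)

    cap = cv2.VideoCapture(0)  # or a path to a recorded security clip
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # pixels that differ from the background
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > MIN_AREA for c in contours):
            print("motion worth reviewing in this frame")
    cap.release()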
Virtual Outcomes

A company called Simulex combines gaming technology with artificial intelligence to create a virtual reflection of the world that predicts the possible outcomes of natural disasters, wars, and business decisions.
Simulex’s program is based on the Synthetic Environment for Analysis and Simulation (SEAS), a project developed by Purdue University and the Department of Defense. The software lets you interact with artificially intelligent populations inside a virtual world. Simulex used SEAS, for example, to help the United States Army Accessions Command test recruiting techniques on an artificial labor market. According to the company, the virtual world mirrored “the US population down to the ZIP code level, [and created] the traits of each potential recruit, such as education, intelligence, and work experience” based on census studies, social science, and psychological theories.
SEAS has also been used for planning emergency responses to terrorist attacks and for improving air traffic control systems. Simulex clients include the U.S. Joint Forces Command, the United States Army Recruiting Command, the Naval Surface Warfare Center Crane Division, Eli Lilly, and Lockheed Martin.
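The general technique here is agent-based simulation: populate a model with synthetic individuals whose traits are drawn from real-world statistics, then measure how they respond to a policy or event. Here’s a toy sketch of the approach; the traits, rates, and enlistment rule below are all invented for illustration, not SEAS’s actual model:

    # Toy agent-based simulation in the spirit of SEAS. Traits,
    # distributions, and the enlistment rule are invented; the real
    # system models the US population in far greater detail.
    import random

    random.seed(42)

    def make_recruit():
        """Synthesize one potential recruit with a few simple traits."""
        return {
            "education_years": random.choice([10, 12, 14, 16]),
            "employed": random.random() < 0.9,  # assumed employment rate
            "age": random.randint(18, 28),
        }

    def would_enlist(person, signing_bonus):
        """Invented rule: unemployed and younger recruits respond more
        strongly to a larger signing bonus (in dollars)."""
        chance = 0.02
        if not person["employed"]:
            chance += 0.05
        if person["age"] < 22:
            chance += 0.02
        chance += signing_bonus / 200_000
        return random.random() < chance

    population = [make_recruit() for _ in range(100_000)]
    for bonus in (0, 10_000, 20_000):
        signed = sum(would_enlist(p, bonus) for p in population)
        print(f"bonus ${bonus:>6,}: {signed} enlistments")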
Brain Car

A brain-controlled wheelchair is cool, but this recently revealed brain-controlled car gives new meaning to the term “hands-free driving.”
Like the brain-controlled wheelchair, the brain car requires a special headset; this one has 16 sensors that measure electrical signals from the brain and translate those signals into driving commands. Before the driver can jump into the vehicle, the system must be trained over several sessions with a computer to recognize that individual’s commands.
So far this proof-of-concept vehicle has limited functionality and responds only to basic commands such as turn, accelerate, and decelerate. After responding to a brain command, such as “turn right,” the autonomous car takes over until a human decision is required again. The car acts on brain commands only after a slight delay, so it can’t be ordered to brake immediately in an emergency. The project was developed by AutoNOMOS Labs, part of the Artificial Intelligence Group at Freie Universität Berlin. For more, check out a YouTube video of the brain-driven car in action.
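A simplified way to picture the pipeline: classify a short window of headset readings into one of a few discrete commands, then hand that command to the car’s autonomous controller. The sketch below fakes the decoding step with a nearest-centroid classifier over invented 16-channel samples; real EEG decoding needs per-user calibration and heavy signal processing:

    # Simplified sketch of mapping headset readings to driving
    # commands, in the spirit of the brain-car demo. The 16-channel
    # samples, training centroids, and command set are invented.
    import math
    import random

    COMMANDS = ["left", "right", "accelerate", "decelerate"]

    # Pretend these per-command centroids came from a driver's
    # training sessions (16 sensor channels each).
    random.seed(1)
    centroids = {c: [random.uniform(-1, 1) for _ in range(16)]
                 for c in COMMANDS}

    def classify(sample):
        """Nearest centroid: pick the trained command closest to the sample."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(COMMANDS, key=lambda c: dist(sample, centroids[c]))

    # Simulate a noisy reading near the "left" pattern.
    sample = [v + random.gauss(0, 0.2) for v in centroids["left"]]
    print(classify(sample))  # expected: "left"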