“We’re interested in machine learning algorithms where you don’t need to necessarily be an expert in machine learning to interact,” said Andrea Thomaz. She and her team of researchers are working on robots that could be installed in the home without needing to be preprogrammed with a set of tasks.
At the CHI conference, Simon’s job was to identify a few colorful items, including a blue book, a green plastic case and a red flower, and to put each one into the bin of the matching color.
To begin teaching Simon, a researcher asks him, “Simon, can you hear me?” He responds in the ubiquitous text-to-speech voice, “Yes.” The researcher then asks him if he wants to learn something, and he reaches out his robotic arm and grabs whatever the researcher is holding. He brings it toward his face and looks at it. In one demonstration he was given a blue book and told to put it in the blue bin. He rotated, dropped the book in the bin and said, “There you go.”
Thomaz said one of the biggest challenges in developing Simon was system integration.
“We have Linux boxes, Macs, Windows PCs and basically any kind of machine, we’re using it,” she said. “We use OpenCV [Open Source Computer Vision] and lots of off-the-shelf things for speech recognition.” She said that Simon does “blob detection,” facial recognition and sound localization.
For now, Simon has a “body” only from the waist up and can pivot on the platform he rests on. Behind him, a bank of four computer monitors displays information about the robot’s status.
In case Simon goes haywire, there’s a large red button next to him to turn him off.
As with many of the projects at CHI, there are no current plans for commercialization.