For instance, this month a Massachusetts company called BI² Technologies will roll out a handheld facial recognition add-on for the iPhone to 40 law enforcement agencies. The device will allow police to run a quick check of whether a suspect has a criminal record, either by scanning the suspect’s iris or by taking a photo of the person’s face.
Earlier this week, reports surfaced that the military and the Georgia Tech Research Institute had begun testing autonomous aerial drones that could use facial recognition software to identify and attack human targets; in effect, the software would perform the assessment that determines who gets killed.
And in yet another development, the Federal Trade Commission announced earlier this week that it will hold a free public workshop on December 8, 2011, to examine various issues related to personal privacy, consumer protection, and facial recognition technology.
[Read: “Facebook Photo Tagging: A Privacy Guide”]
Of course, the government and large private companies have had access to facial recognition software for years. The pressing question today is what happens to privacy when everyone has access to the technology. Already smaller businesses, and even private individuals, are developing sometimes amazing, sometimes very creepy uses for this once security-focused software.
Meanwhile, in Chicago, a startup called SceneTap links facial recognition technology to cameras in bars and clubs so that users can see which venues have their preferred ratio of women to men before they even arrive.
If you think the corporate implications are unsettling, wait until the general population gets deeply involved in using facial recognition technology. One recent instance: in the wake of the August London riots, a Google group of private citizens called London Riots Facial Recognition emerged, aiming to use publicly available records and facial recognition software to identify rioters for the police, as a form of citizen activism (or vigilante justice, depending on how you feel about it). The group abandoned its efforts when its experimental facial recognition app yielded disappointing results.
Though the members of London Riots Facial Recognition undoubtedly believed that they were working for the greater good, what happens when people other than concerned citizens get their hands on the technology? It shouldn’t take too long for us to find out.
Present-Day Reality Check
The use of facial recognition software by governments and online social networks continues to provide headline fodder. A Boston-area man had his driver’s license revoked after a U.S. Department of Homeland Security facial recognition scan of a database of Massachusetts driver photos flagged his license as a possible phony. It later emerged that the system had confused his face with someone else’s.
And of course Facebook endured a hailstorm of criticism in June when it announced its plans to roll out a facial recognition feature that provides semiautomatic tagging of photos uploaded to the social network.
[Read: “Facebook Facial Recognition: Its Quiet Rise and Dangerous Future”]
One Facebook critic was Eric Schmidt, executive chairman of Google, who said earlier this year that the “surprising accuracy” of existing facial recognition software was “very concerning” to his company and that Google was “unlikely” to build a facial-recognition search system in the future.
Indeed, Google seems to have been so concerned by the technology that Schmidt declined to implement it even though his company already had the know-how to make it. “We built that technology and withheld it,” Schmidt said. “People could use it in a very bad way.”
Off-the-Shelf Facial Recognition
A Carnegie Mellon University research team took photos of people’s faces and ran those images through an off-the-shelf facial recognition program called PittPatt (which Google recently acquired). In the team’s demonstration, the program took less than 3 seconds to compare each photo to images publicly available on Facebook and return 10 possible matches, along with their names. The matches proved to be accurate more than 30 percent of the time.
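To make the mechanics concrete, here is a minimal sketch of that kind of one-to-many face matching in Python, using the open-source face_recognition library rather than PittPatt (a commercial product); the gallery folder, its person-per-filename naming scheme, and the probe image are all illustrative assumptions.

```python
# A minimal sketch of one-to-many face matching against a gallery of
# labeled photos, in the spirit of the CMU demonstration. Uses the
# open-source face_recognition library, NOT PittPatt; paths, filenames,
# and the top-10 cutoff are illustrative assumptions.
from pathlib import Path

import numpy as np
import face_recognition

GALLERY_DIR = Path("public_profile_photos")  # hypothetical folder of scraped photos

# Build the gallery: one 128-dimension face encoding per labeled photo.
names, encodings = [], []
for photo in GALLERY_DIR.glob("*.jpg"):
    faces = face_recognition.face_encodings(face_recognition.load_image_file(str(photo)))
    if faces:                          # skip photos where no face was found
        names.append(photo.stem)       # assumes files are named after the person
        encodings.append(faces[0])

# Encode the probe photo (the "anonymous face in the street").
# Assumes the probe image contains exactly one detectable face.
probe = face_recognition.face_encodings(face_recognition.load_image_file("probe.jpg"))[0]

# Rank the gallery by distance and report the 10 nearest candidates,
# mirroring the shortlist of possible matches the demonstration returned.
distances = face_recognition.face_distance(encodings, probe)
for i in np.argsort(distances)[:10]:
    print(f"{names[i]}: distance={distances[i]:.3f}")
```

Note that the output is a ranked shortlist rather than a single definitive identification; accuracy figures such as the 30 percent above describe how often the right person turns up among those candidates.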
The team then used information gleaned from Facebook profiles to deduce the birth dates and birthplaces of the people whom the software had correctly identified. Because Social Security numbers issued before 2011 encoded where and roughly when each number was assigned, that information allowed the researchers to predict the first five digits of a person’s Social Security number, and they were accurate about 27 percent of the time.
“The bigger picture here was to show that we’re getting closer to a world where online and offline data blend seamlessly, where you can start with an anonymous face in the street and you can end up identifying something extremely sensitive about the person by combining these different technologies,” says the leader of the team, Carnegie Mellon assistant professor Alessandro Acquisti.
It’s Not Just Big Brother–Watch Out for Little Brother
Don’t expect anyone to identify every stranger on a city street that way just yet, though, because the off-the-shelf system that the researchers used won’t scale to a task of that magnitude. “If you wanted to identify anyone in the street of a large city, you’d need a database of hundreds of millions of people, and–given the computational power available now–it’s still not possible to do these face match-ups in real time,” Acquisti explains.
Still, because so much facial information is available online at places like Facebook and Flickr, preventing that information from being used to intrude on individual privacy is almost impossible, according to Harry Lewis, a computer science professor at Harvard University. Lewis told PCWorld: “A private individual in a public–but what was previously thought of as anonymous–place is no longer going to find themselves anonymous.”
People are quick to express concern about technologies like facial recognition in the hands of Big Brother, Lewis acknowledges. “But let’s not get so worried about Big Brother that we forget about the fact that Little Brother is going to be able to do exactly the same thing,” he says.
Lewis also points out that, in principle, Big Brother can be controlled through regulation and legislation, but “we can’t regulate what Little Brother does about public information, unless we want to surrender our civil rights of freedom of speech.”
Closed-Circuit Cameras: A Precedent
Sensible Vision makes facial recognition software designed to authenticate a person’s identity. When users install the company’s FastAccess software on a computer and then sit in front of that PC, the program recognizes their face and logs them in automatically. If a user leaves the computer, the software detects his or her absence and prevents anyone else from using the machine. The company sells both personal and enterprise versions of the software.
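As a rough illustration of the lock-on-absence half of such a product, the Python sketch below watches the webcam with OpenCV’s stock face detector and locks the machine once no face has been seen for a while. It is an approximation, not Sensible Vision’s implementation: the 10-second timeout and the Windows-only lock call are assumptions, and a real product would pair this with recognition-based login like the matching shown earlier.

```python
# Illustrative presence watcher: lock the workstation when the webcam has
# not seen a face for a while. NOT Sensible Vision's code; the timeout and
# the Windows-specific LockWorkStation call are assumptions.
import ctypes
import time

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)                # default webcam

ABSENCE_TIMEOUT = 10.0                      # seconds without a face before locking (assumed)
last_seen = time.time()

while True:
    ok, frame = camera.read()
    if not ok:                              # camera unavailable; give up
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        last_seen = time.time()             # someone is at the machine
    elif time.time() - last_seen > ABSENCE_TIMEOUT:
        ctypes.windll.user32.LockWorkStation()  # Windows API; other OSes differ
        break
    time.sleep(0.5)                         # poll a couple of times per second

camera.release()
```

Detection alone only establishes that some face is present; logging a user in, as FastAccess does, additionally requires matching that face against an enrolled identity.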
In the long run, many problems involving potentially invasive technologies such as facial recognition simply work themselves out, according to Stewart Hefferman, CEO of OmniPerception, a maker of object and facial recognition software based in Guildford, England.
“There are ways, through technology and legislation, of making sure that people’s privacy is protected while deriving the benefits of a technology,” Hefferman says.
Staff Editor David Daw of PCWorld contributed to this story.
Follow freelance technology writer John P. Mello Jr. and Today@PCWorld on Twitter.