For instance, this month a Massachusetts company called BI² Technologies will roll out a handheld facial recognition add-on for the iPhone to 40 law enforcement agencies. The device will allow police to conduct a quick check to see whether a suspect has a criminal record--either by scanning the suspect's iris or taking a photo of the individual's face.
Earlier this week, reports surfaced that the military and the Georgia Tech Research Institute had begun testing autonomous aerial drones that could use facial recognition software to identify and attack human targets--in effect, the software performs the assessment that determines who gets killed.
And in yet another development, the Federal Trade Commission announced earlier this week that it will hold a free public workshop on December 8, 2011, to examine various issues related to personal privacy, consumer protection, and facial recognition technology.
[Read: "Facebook Photo Tagging: A Privacy Guide"]
Of course, the government and large private companies have had access to facial recognition software for years. The pressing question today: What happens to privacy when everyone has access to the technology? Already smaller businesses--and even private individuals--are developing sometimes amazing, sometimes very creepy uses for facial recognition software.
Meanwhile, in Chicago, a startup called SceneTap links facial recognition technology to cameras in bars and clubs so that users can figure out which bars have the most desirable (in their opinion) ratio of women to men--before they even arrive.
If you think the corporate implications are unsettling, wait until the general population gets deeply involved in using facial recognition technology. One recent instance: In the wake of the August London riots, a Google group of private citizens called London Riots Facial Recognition emerged with the aim of using publicly available records and facial recognition software to identify rioters for the police as a form of citizen activism (or vigilante justice, depending on how you feel about it). The group finally abandoned its efforts when its experimental facial recognition app yielded disappointing results.
Though the members of London Riots Facial Recognition undoubtedly believed that they were working for the greater good, what happens when people other than concerned citizens get their hands on the technology? It shouldn't take too long for us to find out.
Present-Day Reality Check
The use of facial recognition software by governments and online social networks continues to provide headline fodder. A Boston-area man had his driver's license revoked because when the U.S. Department of Homeland Security ran a facial recognition scan of a database containing the photos of Massachusetts drivers, it flagged the man's license as a possible phony. Afterward it emerged that the system had confused the man's face with someone else's.
And of course Facebook endured a hailstorm of criticism in June when it announced its plans to roll out a facial recognition feature for its members to provide semiautomatic tagging of photos uploaded to the social network.
[Read: "Facebook Facial Recognition: Its Quiet Rise and Dangerous Future"]
One Facebook critic was Eric Schmidt, executive chairman of Google, who said earlier this year that the "surprising accuracy" of existing facial recognition software was "very concerning" to his company and that Google was "unlikely" to build a facial recognition search system in the future.
Indeed, Google seems to have been so concerned by the technology that Schmidt declined to implement it even though his company already had the know-how to make it. “We built that technology and withheld it,” Schmidt said. “People could use it in a very bad way.”