The Aurora shooter certainly left a long trail of transactions. In the two months leading up to the crime he bought more than 6,000 rounds of ammunition, several guns, head-to-toe ballistic protective gear, and accelerants and other chemicals used to build homemade explosives. These purchases were made from both online e-commerce sites and brick-and-mortar stores, and more than 50 packages were sent to his apartment, according to news reports.
The tantalizing prospect of preventing such incidents by monitoring suspicious transactions led Wall Street Journal columnist Holman Jenkins to ask if the government's ill-fated Total Information Awareness data mining project could have stopped the killing. We'll never know, he says, because "political blowback" from the ACLU and other privacy advocates squelched the public discussion before it could get started, even though the data mining process would have used anonymized data. Only when the models produced a "red flag" would a warrant be issued for personalized data, he says.
But would it even work?
Certainly any law enforcement officer who knew of the pattern of activity for this individual would have flagged it as suspicious and investigated. But data mining, also called machine learning, doesn't work the same way.
Dean Abbott, president of Abbott Analytics, Inc., is an expert in data mining and creating predictive models. He has worked on DoD projects, including the development of a mathematical model that identifies which Navy SEAL candidates will be most likely to succeed. Putting privacy matters aside for the moment, I asked him if data mining could identify potential mass murderers before they commit a crime.
The technical issues of pulling daily transactional data for all 400 million-plus people in the US would not be a problem, he says. Rather, the challenge lies in determining whether there is any connection between volumes of purchases and criminal behavior.
"While it certainly was the case here that [the shooter] purchased a lot of stuff and that there didn't appear to be a good, law-abiding reason for him to purchase the gear, it is unclear if his pattern of purchases is unusual" when examined in the context of the purchases of hundreds of millions of other citizens, he says. For example, given the universe of more than 400 million people, it might very well be possible that 20,000 people made similar volumes of purchases in the same time period. But how many of those are exhibiting risky behavior?
Even if someone is one hundred times more likely to commit a crime when a specified type of purchase behavior is identified, Abbott asks, is it cost effective -- that is, does law enforcement have the resources -- to investigate all of the identified individuals to find the needle-in-a-haystack? Or are there more effective ways to use those resources?
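Abbott's cost-effectiveness question is, at bottom, a base-rate calculation. A minimal back-of-the-envelope sketch follows; the 400 million population figure, the hypothetical 20,000 similar purchasers, and the "one hundred times more likely" lift are taken from the text, while the assumption of roughly one actual attacker in the population in a given period is purely illustrative:

```python
# Back-of-the-envelope illustration of the base-rate problem Abbott describes.
# All figures are either quoted from the article or labeled as assumptions.

population = 400_000_000        # "more than 400 million people" (article's figure)
flagged = 20_000                # hypothetical count of people with similar purchases
base_rate = 1 / population      # ASSUMPTION: ~one actual attacker in the population

lift = 100                      # "one hundred times more likely" when flagged
risk_if_flagged = base_rate * lift

# Expected number of real threats among everyone the model flags:
expected_true_positives = flagged * risk_if_flagged
print(f"Expected real threats among {flagged:,} flagged: {expected_true_positives}")

# Average number of investigations needed to find one real threat:
investigations_per_threat = 1 / risk_if_flagged
print(f"Investigations per real threat: {investigations_per_threat:,.0f}")
```

Even under these generous assumptions, the expected number of real threats among 20,000 flagged individuals is a tiny fraction of one, and millions of investigations would be needed per genuine case, which is the needle-in-a-haystack problem in numbers.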
"The best scenario I can imagine here is that a very very high-end, select group can be identified that should be scrutinized." But it's also possible that individuals, such as the perpetrator in this case, wouldn't fall into that top-tier, highest-scoring risk group.
Much as we'd like to think we can solve the problem with technology, it turns out that there is no magic bullet. "Something like this could be valuable," Abbott says. "I just don't think it's obvious that it would be fruitful."
This story, "Predictive Analytics Might Not have Predicted the Aurora Shooter" was originally published by Computerworld.