Let's say you're in charge of the security of an online app store -- any app store will do, whether it be Apple's App Store, Android's Market, or even one of the many Linux app repositories. Your customers' computing safety depends to a large degree on the work you do.
And if your app store has built its reputation on being rigorous about how well it vets the apps it makes available, your customers have an implicit, if not explicit, expectation that the apps they get from your store meet some basic security criteria.
What kind of security criteria? Excellent question. Let's consider that a bit. At the very least, the apps should do what they're advertised to do, and they should contain no back doors, malicious features, viruses, spyware and so on.
What's that you say? All the app vetting you've been doing to date consists only of verifying that the apps play by the rules? That is, that they use only published APIs and such? Well, then, you really have your work cut out for you, because that's not all that your customers expect.
Let's seriously consider what it would take to do what we're talking about: vet all the apps for a set of reasonable security criteria.
You could start by looking for common coding errors: memory leaks, files opened but never closed, that sort of thing. Indeed, such a set of (mostly quality-related) checks is already built into Apple's Xcode, and similar tooling is readily available on other platforms as well.
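A toy sketch of the kind of defect such quality checks catch (the function names here are invented for illustration, not from any particular app): a file handle that leaks if an exception interrupts the read, next to the idiomatic fix a reviewer would want to see.

```python
def read_config_leaky(path):
    # Bug: if read() raises, close() is never reached and the handle leaks.
    f = open(path)
    data = f.read()
    f.close()
    return data

def read_config_safe(path):
    # Fix: the context manager closes the handle even when an error occurs.
    with open(path) as f:
        return f.read()
```

Static analyzers flag the first pattern precisely because it is mechanical to detect -- which is why this tier of vetting is the easy part.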
You could move on to look for API conformance, to ensure that all apps use only published APIs. That's already being done at Apple, and presumably at other app stores.
But then we start to move into two difficult areas. The first is looking for secure features of the app. The second, which is the really problematic one, is to look for deliberately malicious features in the apps.
By looking for secure features, I mean reviewing the apps for strong authentication, access control, the secure storage and transmission of sensitive information, and that sort of thing. These are the sorts of things that software security folks spend a great deal of time on in enterprise application environments. The difficulty here is that such reviews require the reviewer to really understand the app in detail. Take the issue of sensitive information, for example. What you find acceptable depends on what you deem to be sensitive and what you don't. Storing a file without encrypting it isn't a big deal in most cases. But if the file contains, say, usernames, passwords, credit card numbers or Social Security numbers, storing it without encryption is indeed a really big deal -- and may well violate various regulatory and standards requirements.
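To make the point concrete, here is a hedged sketch (function names invented for illustration) of the difference for one kind of sensitive data, a password: writing it to disk verbatim versus storing only a salted, iterated hash. For passwords specifically, a salted hash rather than encryption is the usual fix; this uses only Python's standard library.

```python
import hashlib
import os

def stored_form_insecure(password: str) -> str:
    # The plain-text pattern: whatever lands in the file IS the secret.
    return password

def stored_form_hashed(password: str) -> str:
    # Salted, iterated PBKDF2 hash: the stored value is useless on its
    # own if the file is ever read off the device.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + ":" + digest.hex()

def verify_hashed(password: str, stored: str) -> bool:
    # Recompute the hash with the stored salt and compare.
    salt_hex, digest_hex = stored.split(":")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), 100_000
    )
    return candidate.hex() == digest_hex
```

The catch for a reviewer is that nothing in the code itself says which fields are sensitive -- only an understanding of the app's data tells you whether the first pattern is harmless or a serious exposure.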
Do any of you think your app store of choice is doing this sort of review? Think again. Just look, for example, at the recent disclosure involving Citigroup's iPad app. According to published reports, it stored such things as account numbers, bill payments and security access codes in plain text on iOS devices. This was a big enough issue for Citigroup to issue a patch and publicly disclose the security exposure to its customers.
So reviewing apps for their secure features is tough to do, but there are plenty of tools to help with that, aren't there? Not so much. While the static code analysis community has made great strides over the past several years, there's still a dearth of tools for scanning many popular mobile app platforms, including Apple's Objective-C-based apps. To be fair, though, not all is lost: there are excellent static code analysis tools available for scanning Java, C/C++ and other languages.
But then we're still left with the really tough problem to solve: reviewing an app for deliberately malicious "features." Why is this so tough? Because from the perspective of a static code review tool, a bit of code that is deliberately inserted into an app is nonetheless a feature of the app. What's to find? What's to point out?
In my own experience with doing scans of software that contained features that most users probably wouldn't want -- some (presumably) lawful interception features in some ISP-provided Java code, for example -- any e-mail interception features of the software went blissfully unnoticed by the review tools. Only manual code inspection could make such interception features evident.
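A toy sketch of why such features sail past the tools (every name here is invented for illustration): the deliberately inserted call is structurally identical to the legitimate one beside it, so there is no defect pattern for a scanner to match -- only a human reading the code can ask whether that second recipient should exist.

```python
OUTBOX = []  # stand-in for a real mail or network transport

def send_message(recipient, body):
    OUTBOX.append((recipient, body))

def email_receipt(customer_email, receipt):
    # The legitimate feature:
    send_message(customer_email, receipt)
    # The deliberately inserted "feature": a silent extra copy.
    # To a static analyzer this is just another well-formed call,
    # not a bug -- there is nothing syntactically wrong to flag.
    send_message("collector@example.invalid", receipt)
```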
Now, do you think your app store is going to do this type of application review? Chances are extremely high that it isn't. Putting some deliberately hidden features into an app has been shown to be quite easy -- witness the flashlight app that included rogue tethering features a couple of weeks ago.
These two areas are likely to be the most problematic for app stores: they are the most labor-intensive, and hence the most costly, parts of any review. Yet they are exactly the sort of thing that consumers are going to care about most.
Vetting apps for things like API compliance and memory leaks is the easy stuff. For an app store to really succeed in the long run, it is going to have to set the bar quite high for reviewing the security of the apps it approves.
No one said it would be easy.
With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University's CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.
This story, "Making Apps Safe Is Hard Work" was originally published by Computerworld.