The emerging OAuth 2.0 Web API authorization protocol, already deployed by Facebook, Salesforce.com and others, is coming under increased criticism for trading security for ease of use, making it easier for malicious hackers to spoof.
“The OAuth community has made a big mistake about the future direction of the protocol,” wrote Yahoo director of standards development Eran Hammer-Lahav in a blog post last week. Hammer-Lahav’s criticism may carry more weight than complaints from the usual naysayers, because he is one of the creators of OAuth.
“What makes this more frustrating is that the people behind [OAuth 2.0] are some of the brightest security minds on the Web. These guys know exactly what they are doing, and it’s not like they don’t care,” Hammer-Lahav wrote. “They just gave up and decided that the best they can do is maintain the status quo. They are also representing a large and powerful coalition of big companies too lazy to work a little harder.”
Hammer-Lahav’s words may strike an ominous chord, given how both public and enterprise-based Web services are rapidly adopting the draft IETF (Internet Engineering Task Force) standard as a way for Web services to share data. The final version of the specification, which has been authored by engineers at Google, Microsoft, Yahoo, Facebook and others, is expected this fall.
On Sunday, a Salesforce.com engineer announced on the OAuth developer mailing list that the cloud-based enterprise software service was rolling out support for OAuth 2.0. In August, Microsoft added OAuth 2.0 as one of the options for access control for its Azure cloud platform.
Facebook now uses OAuth 2.0 as the preferred method for third-party apps to draw user information from the service. The open source Drupal content management system is building out support for OAuth 2.0 as well.
Another potential user may be Twitter, which just converted to OAuth version 1.0a at the beginning of the month. Rumors have abounded that the service will move to version 2.0, though thus far engineers have remained quiet on the possibility.
OAuth provides a method of third-party authorization that allows Web services to share data through their APIs (application programming interfaces). A user establishes an account with one service, and a server from that service can issue other services tokens that grant access to the user’s data. A user with a Facebook account, for instance, can approve a third-party application to access some of his or her Facebook information without ever giving that application a log-in name and password.
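The flow described above can be sketched in a few lines of Python. This is an illustrative simplification of the token-based handoff, not production code; the provider URL, parameter names and client ID are all hypothetical.

```python
# Hypothetical sketch of the OAuth handoff: the user approves access at the
# provider, which hands the third-party app a short-lived code it can later
# exchange for an access token. The user's password never reaches the app.
from urllib.parse import urlencode, urlparse, parse_qs

AUTHORIZE_URL = "https://provider.example/oauth/authorize"  # hypothetical provider

def build_authorize_url(client_id, redirect_uri, scope):
    # Step 1: the app redirects the user to the provider, where the user
    # logs in (at the provider, not the app) and approves the request.
    return AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    })

def extract_code(callback_url):
    # Step 2: the provider redirects back to the app with a short-lived
    # code, which the app exchanges server-to-server for an access token.
    return parse_qs(urlparse(callback_url).query)["code"][0]

url = build_authorize_url("app123", "https://app.example/cb", "profile")
code = extract_code("https://app.example/cb?code=abc123")
```

The key property is that the token, not the password, becomes the shared credential, so the user can revoke one application’s access without changing the password everywhere.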
The current version of OAuth, version 1.0a, has been criticized for weaknesses, most notably that it is difficult for developers to implement.
OAuth 2.0 aims to eliminate this complexity. “The challenge for the developers is that they have to go and write all this signing code,” noted OAuth 2.0 co-editor Dick Hardt, in an interview with the IDG News Service earlier this year. Version 2.0, originally called WRAP (Web Resource Authorization Protocol), also helps cut down administration on the server side, since the server doesn’t need to maintain a directory of all the parties that have been issued keys.
According to Eric Sachs, a product manager for the Google security team and one of the co-authors of the draft, OAuth 2.0 was intended for designers of Web applications.
“These guys are not experts in security. They can have a hard time understanding security models,” he said, noting that other proposed Web API security models, such as SAML (Security Assertion Markup Language) and WS-Security, and even the current version of OAuth, can be difficult for a Web developer to understand.
In terms of simplicity, OAuth 2.0 does away with the need to digitally sign the tokens needed for authorization. Instead it encourages the use of SSL (Secure Sockets Layer)-encrypted communications over HTTP (Hypertext Transfer Protocol), a combination called HTTPS. Most seasoned Web developers already understand how to work with HTTPS.
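The difference Hardt and Sachs describe can be made concrete with a sketch. Under OAuth 1.0a, every API call carries an HMAC signature computed over a normalized request; under OAuth 2.0’s bearer style, the client simply attaches the token and relies on HTTPS to keep it secret. The base-string construction below is deliberately simplified, not the full 1.0a signing algorithm.

```python
# Contrast, in miniature, the two authorization styles the article describes.
import base64
import hashlib
import hmac

def sign_request_v1(method, url, params, consumer_secret, token_secret):
    # OAuth 1.0a style: build a normalized base string from the request and
    # sign it with HMAC-SHA1. (Real 1.0a also percent-encodes each part and
    # adds nonce/timestamp parameters; this is a simplified illustration.)
    normalized = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), url, normalized])
    key = f"{consumer_secret}&{token_secret}".encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def bearer_header_v2(access_token):
    # OAuth 2.0 bearer style: no signature to compute. The token itself is
    # the credential, so it must only ever travel over HTTPS.
    return {"Authorization": f"Bearer {access_token}"}
```

The signing function is the kind of fiddly, easy-to-get-wrong code that version 2.0 lets developers skip, and skipping it is exactly what the protocol’s critics say shifts all of the security burden onto the transport layer.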
For Hammer-Lahav, OAuth 2.0 tips too far in favor of ease of use. SSL isn’t enough of a security precaution by itself, he argued, because it doesn’t prevent phishing attacks. “The fact you are using a secure channel doesn’t mean the entity on the other side is good. It just means that no one else can listen in on it (just the bad guys). If a client sends their bearer token to the wrong place, even over HTTPS, it’s game over,” he wrote.
Hammer-Lahav is not alone in his criticisms.
Yahoo social and developer platform chief architect Subbu Allamaraju noted in his own blog post that while OAuth 2.0 is a step in the right direction, the approach would be susceptible to man-in-the-middle attacks where a malicious party could gain access if it intercepts a token.
“There is nothing in the OAuth 2.0 protocol to guide the client to say what [Uniform Resource Identifiers] the client is allowed to send the access token to,” he wrote.
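The check Allamaraju says the protocol leaves unspecified is easy to state but left entirely to each client. A minimal, illustrative safeguard might look like the following, assuming a hypothetical allowlist of trusted hosts; nothing in the OAuth 2.0 draft mandates or standardizes such a list.

```python
# Illustrative client-side policy: before attaching a bearer token to a
# request, verify the destination is an HTTPS URL on a trusted host.
# The allowlist contents are hypothetical.
from urllib.parse import urlparse

TRUSTED_TOKEN_HOSTS = {"api.provider.example"}

def may_send_token(url):
    # Reject plain HTTP (the token would travel in the clear) and reject
    # any host the client has not explicitly decided to trust.
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in TRUSTED_TOKEN_HOSTS
```

Because this policy lives only in the client, a tricked or buggy client can still leak its token, which is precisely the class of weakness the critics quoted here are pointing at.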
“This might actually be worse than passwords,” security expert Ben Adida noted last December in a blog post. It’s “very hard for users to gauge whether web applications are doing the right thing with respect to SSL certs when the SSL calls are all made by the backend.”
Hammer-Lahav admitted that this weakness may not be so severe now, while Web applications mostly share data on a one-to-one basis. But as composite applications that cobble together data from multiple sources become more common, he argued, OAuth 2.0 will be seen as the weak link.
“It is clear that once discovery is used, clients will be manipulated to send their tokens to the wrong place, just like people are phished. Any solution based solely on a policy enforced by the client is doomed,” he wrote.
Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab’s e-mail address is Joab_Jackson@idg.com