Don't overlook URL-fetching agents when fixing Heartbleed flaw on servers, researchers say
Website operators should assess their whole Web infrastructure when patching the critical Heartbleed flaw in OpenSSL; otherwise they risk leaving important components open to remote attacks despite fixing the problem on their publicly facing servers.
The development team at Meldium, a cloud account management and monitoring service, warned that some URL-parsing agents that are functionally important for websites and support TLS (Transport Layer Security) connections can also be attacked through the Heartbleed vulnerability to extract potentially sensitive data from their memory space. That’s because the flaw doesn’t affect just TLS servers, but also TLS clients that use vulnerable versions of OpenSSL.
Most of the attention has focused on the primary Heartbleed attack scenario, in which a malicious client attacks a TLS-enabled server to extract passwords, certificate private keys, cookies and other sensitive information. However, the vulnerability also enables servers to attack clients and steal information from their memory. The Meldium team refers to this as a “reverse Heartbleed” attack.
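In both directions, the underlying bug is the same: a missing bounds check in OpenSSL’s handling of the TLS heartbeat extension, where the peer replies with as many bytes as the request *claims* to contain rather than as many as it actually sent. A minimal sketch of that flawed logic (illustrative pseudologic only, not real OpenSSL code; the names and layout here are simplified assumptions):

```python
def heartbeat_response(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    """Vulnerable behavior: echo back `claimed_len` bytes, trusting the
    attacker-supplied length field instead of the real payload size."""
    start = memory.find(payload)              # where the payload sits in memory
    return memory[start:start + claimed_len]  # over-reads adjacent bytes
```

For example, if process memory were `b"PINGsecret-session-key"`, a request with payload `b"PING"` but a claimed length of 12 would leak bytes beyond the payload. A client exploits a server this way, or, in the reverse case, a malicious server exploits a connecting client; vulnerable builds leaked up to 64KB of adjacent memory per heartbeat.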
TLS clients can be obvious things like browsers or other desktop and mobile applications, but can also be any server-side application or script that establishes connections to HTTPS URLs. If attackers are able to force those agent-type applications to fetch URLs from servers they control, they can launch reverse Heartbleed attacks against them.
In a complex Web infrastructure, URL fetching agents could run on internal servers that are behind the usual security perimeter and are treated as less of a priority by administrators in the patch deployment process. The problem is that if they access URLs supplied by users, such applications can be attacked remotely, regardless of where they run inside the infrastructure.
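As a rough illustration of the pattern described above, here is a minimal sketch of such a URL-fetching agent in Python (the function names and the scheme guard are hypothetical, not Meldium’s code). The key point is that whoever supplies the URL chooses which TLS server the agent’s OpenSSL client code connects to, and a scheme check alone does nothing to fix Heartbleed itself:

```python
from urllib.parse import urlparse

def is_fetch_allowed(url: str) -> bool:
    """Minimal guard: only plain http(s) URLs with a hostname.
    This limits abuse but does NOT protect a vulnerable OpenSSL client."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.hostname)

def fetch_preview(url: str) -> str:
    """Hypothetical preview agent that fetches a user-supplied URL."""
    if not is_fetch_allowed(url):
        raise ValueError("refusing to fetch: " + url)
    # A real agent would make an outbound request here, e.g.
    #   urllib.request.urlopen(url, timeout=5)
    # If the linked OpenSSL is a vulnerable 1.0.1-1.0.1f build, the
    # attacker-controlled server on the other end can read this
    # process's memory via reverse Heartbleed.
    return "would fetch " + url
```

If this agent runs on an internal host that also handles API credentials or session data, those are exactly the kinds of secrets an attacker-controlled endpoint could extract.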
“If you can direct some remote application to fetch a URL on your behalf, then you could theoretically attack that application,” the Meldium team said in a blog post Thursday. “The web is full of applications that accept URLs and do something with them.”
Some examples include: agents that parse URLs to generate previews or image thumbnails; scripts that allow users to upload files from remote URLs; Web spiders like Googlebot that index pages for search; API (application programming interface) agents that facilitate interaction and interoperability between different services; code implementing identity federation protocols like OpenID and WebFinger; or webhooks and callback scripts that ping user-specified URLs when certain events happen.
“The surface of exposed clients is potentially very broad—any code in an application that makes outbound HTTP requests must be checked against reverse Heartbleed attacks,” the Meldium team said.
Depending on what functionality the URL-fetching agents are designed to support, their memory might contain sensitive information.
The Meldium team created a reverse Heartbleed exploit and tested various sites that had already patched the vulnerability on their perimeter servers.
They claim to have found a vulnerable Web agent on one of the top five social networks that was fetching URLs to generate previews. The exploit allowed them to extract internal API call results and Python source code snippets from the application’s memory. The social network was not named because a fix has yet to be confirmed.
In another case, Reddit used a vulnerable agent that parsed URLs to suggest names for new posts, the Meldium team said. “The memory we were able to extract from this agent was less sensitive, but we didn’t get as many samples because they patched so quickly.”
The team also managed to register a malicious webhook on rubygems.org, a website that hosts Ruby programs and libraries known as gems, that called back their exploit URL when a new package was published.
“Within a few minutes, we captured chunks of [Amazon] S3 API calls that the Rubygems servers were making,” the team said. “After the disclosure, they quickly updated OpenSSL and are now protected.”
Meldium created an online tool to generate custom URLs that can be fed into any Web agent to test if it’s vulnerable to reverse Heartbleed attacks.
“The important takeaway is that it’s not enough to patch your perimeter hosts—you need to purge bad OpenSSL versions from your entire infrastructure,” the Meldium team said. “And you should keep a healthy distance between agent code that fetches user-provided URLs and sensitive parts of your systems.”
While the threat is not as broad as for traditional clients and servers, many sites do access user-controlled URLs, creating a valid Heartbleed attack vector that is worth highlighting, said Carsten Eiram, the chief research officer at vulnerability intelligence firm Risk Based Security, via email. “I’m sure some companies providing such features acting as TLS clients forget about them when patching their servers.”
OpenSSL is also used by Web services, the programmatic interfaces that provide data feeds for machine-to-machine communication as well as auxiliary data to both Web clients and servers, said Philip Lieberman, president of Lieberman Software, via email. “Protocols such as SOAP, REST and JSON can be potentially attacked in variations of the Heartbleed scenario.”
“Administrators are currently in triage mode—addressing the problems that are most obvious and most under public scrutiny,” said Brendan Rizzo, technical director for EMEA at Voltage Security, via email. “Attackers, on the other hand, generally avoid the ‘front door’ and will be shifting their focus to these secondary attack vectors.”
OpenSSL versions 1.0.1 through 1.0.1f are seriously broken and should be removed from all code as soon as possible, said Lamar Bailey, director of security R&D at Tripwire, via email. “We will see malicious servers popping up to exploit ‘reverse Heartbleed’ any minute now but people should also beware of all of these ‘public test servers’ for Heartbleed because they can easily log vulnerable targets and use this as an attack map.”
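As a quick triage aid, the vulnerable range quoted above (1.0.1 through 1.0.1f) can be checked against a version string such as the output of `openssl version`. A minimal sketch (a hypothetical helper, and only a string comparison; it cannot tell whether a Linux distribution backported the fix while keeping the old version number, so it errs on the side of a manual follow-up):

```python
def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True if a string like 'OpenSSL 1.0.1f 6 Jan 2014' falls in
    the vulnerable 1.0.1 .. 1.0.1f range (fixed in 1.0.1g; the 1.0.0
    and 0.9.8 branches were never affected)."""
    v = version.replace("OpenSSL", "").strip().split()[0]
    if not v.startswith("1.0.1"):
        return False
    suffix = v[len("1.0.1"):]
    # Bare 1.0.1 and patch letters a-f are vulnerable; g and later are fixed.
    return len(suffix) <= 1 and suffix in "abcdef"
```

Note that long-running services keep the old library loaded until they are restarted, so patching the package alone is not enough.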