Handling a software flaw can be messy, both for a security researcher who found it and for the company it affects. But a new set of guidelines aims to make that interaction less mysterious and confrontational.
Large companies such as Facebook, Google and Yahoo have well-defined “responsible disclosure” policies that lay out what is expected of researchers who find a vulnerability and, often, the terms under which a reward will be paid.
But many companies don’t, which can lead to problems and confusion. Security researchers have occasionally been referred to law enforcement even after approaching a company openly about an issue.
The guidelines were developed by Bugcrowd, which runs a platform that lets companies have their applications analyzed by independent researchers in a safe way and, in some cases, reward them. Bugcrowd worked on the framework with CipherLaw, a legal firm specializing in technology.
They’ve released a short, lucid document on GitHub describing how companies should approach setting up a responsible disclosure program, as well as a boilerplate disclosure policy that can be included on a company’s website.
The framework “is designed to quickly and smoothly prepare your organization to work with the independent security researcher community while reducing the legal risks to researchers and companies,” according to an introduction on GitHub.