Rebooting Responsible Disclosure: a focus on protecting end users
July 20th, 2010 | Published in Google Online Security
"Responsible disclosure" is the practice of privately notifying the affected vendor of a vulnerability and withholding public details until the vendor has had a chance to release a fix. A competing philosophy, "full disclosure", involves the researcher making full details of a vulnerability available to everybody simultaneously, giving no preferential treatment to any single party.
The argument for responsible disclosure goes briefly thus: by giving the vendor the chance to patch the vulnerability before details are public, end users of the affected software are not put at undue risk, and are safer. Conversely, the argument for full disclosure proceeds: because a given bug may be under active exploitation, full disclosure enables immediate preventative action, and pressures vendors for fast fixes. Speedy fixes, in turn, make users safer by reducing the number of vulnerabilities available to attackers at any given time.
Note that there's no particular consensus on which disclosure policy is safer for users. Although responsible disclosure is more common, we recommend this 2001 post by Bruce Schneier as background reading on some of the advantages and disadvantages of both approaches.
So, is the current take on responsible disclosure working to best protect end users in 2010? Not in all cases, no. The emotionally loaded name suggests that it is the most responsible way to conduct vulnerability research - but if we define being responsible as doing whatever best serves to make end users safer, we will find a disconnect. We’ve seen an increase in vendors invoking the principles of “responsible” disclosure to delay fixing vulnerabilities indefinitely, sometimes for years; in that timeframe, these flaws are often rediscovered and used by rogue parties employing the same tools and methodologies as ethical researchers. The important implication of referring to this process as "responsible" is that researchers who do not comply are seen as behaving improperly. However, the inverse situation is often true: it can be irresponsible to permit a flaw to remain live for such an extended period of time.
Skilled attackers are using 0-day vulnerabilities in the wild, and there are increasing instances of:
- 0-day attacks that rely on vulnerabilities known to the vendor for a long while.
- Situations where it became clear, only after a bug was fixed or disclosed, that the vulnerability had already been under active exploitation in the wild.
As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.
A lot of talented security researchers work at Google. These researchers discover many vulnerabilities in products from vendors across the board, and they share a detailed analysis of their findings with vendors to help them get started on patch development. We will be supportive of the following practices by our researchers:
- Placing a disclosure deadline on any serious vulnerability they report, consistent with the complexity of the fix. (For example, a design error needs more time to address than a simple memory corruption bug.)
- Responding to a missed disclosure deadline or refusal to address the problem by publishing an analysis of the vulnerability, along with any suggested workarounds.
- Setting an aggressive disclosure deadline where there exists evidence that blackhats already have knowledge of a given bug.
We invite other researchers to join us in using the proposed disclosure deadlines to drive faster security response efforts. Creating pressure towards more reasonably-timed fixes will result in smaller windows of opportunity for blackhats to abuse vulnerabilities. In our opinion, this small tweak to the rules of engagement will result in greater overall safety for users of the Internet.
Update September 10, 2010: We'd like to clarify a few of the points above about how we approach the issue of vulnerability disclosure. While we believe vendors have an obligation to be responsive, the 60-day period before public notification about critical bugs is not intended to be a punishment for unresponsive vendors. We understand that not all bugs can be fixed in 60 days, although many can and should be. Rather, we thought of 60 days when considering how large the window of exposure for a critical vulnerability should be permitted to grow before users are best served by hearing enough details to make a decision about implementing possible mitigations, such as disabling a service, restricting access, setting a killbit, or contacting the vendor for more information. In most cases, we don't feel it's in people's best interest to be kept in the dark about critical vulnerabilities affecting their software for any longer period.
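As a concrete illustration of one such mitigation, the sketch below shows what "setting a killbit" typically amounts to on Windows: writing the 0x00000400 compatibility flag under an ActiveX control's CLSID so Internet Explorer refuses to instantiate it. This is a minimal, hypothetical example, not part of the original post; the CLSID is a placeholder, and in practice an administrator would apply the value from vendor guidance (or via Group Policy) rather than ad-hoc scripting.

```python
# Minimal sketch: set the ActiveX "killbit" for a hypothetical vulnerable
# control. Requires Windows and administrative privileges.
import winreg

CLSID = "{00000000-0000-0000-0000-000000000000}"  # placeholder CLSID
KILLBIT = 0x00000400  # "kill bit" compatibility flag

key_path = (r"SOFTWARE\Microsoft\Internet Explorer"
            r"\ActiveX Compatibility\%s" % CLSID)

# Create (or open) the per-control compatibility key and set the flag.
# Note: a production tool would OR the flag into any existing value
# instead of overwriting it.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Compatibility Flags", 0,
                      winreg.REG_DWORD, KILLBIT)
```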