Brown Hat Security - In Defense of Hard Deadlines

Security researchers inevitably face a conundrum when it comes to disclosing vulnerabilities. On the one hand, if they try to do the right thing, they may suffer any number of legal penalties as uncooperative vendors or operators, resentful of the researcher's activities, seek to prosecute rather than patch. On the other hand, immediate release brings recriminations and finger-pointing, with vendors and some security practitioners insisting that such releases are irresponsible and help adversaries more than they help defenders.

Vendor notification has always proved difficult. Most vendors view security researchers with, at best, resentment - reports of vulnerabilities entail an expenditure of time and resources that could otherwise go toward profitable business. Business leaders have no real incentive to address these issues if they aren't under active exploitation and actively harming paying customers.

Thus, when vulnerabilities are reported to vendors, the vendors have every incentive to conceal and deny that these vulnerabilities exist. They may insist that the vulnerabilities reported are not exploitable in the real world, or that there will be a patch "soon."

History amply demonstrates that patches promised "soon" often fail to materialize. The recent case with Snapchat demonstrated this: the vulnerabilities were reported to the company in July and were entirely unaddressed until the widespread compromise of user information at the end of December. That is five months from report to active exploitation; during this time, Snapchat promised a patch "soon," but none was forthcoming until after active exploitation had occurred. Snapchat is hardly the only instance - many other vendors have behaved in precisely the same manner.

This led to the practice of full disclosure. For some years it was standard practice for security researchers, upon finding a vulnerability, to disclose it in full across a wide swathe of the internet, thus forcing the companies in question to patch their products immediately. Many researchers still favor this style of disclosure, fearing, rightly, that waiting for a patch before going public all too often means going public only after exploitation has already happened.

Inevitably, some security researchers have chosen to sell exploits rather than report them to the vendors or disclose them to the community at large. These practices led directly to the creation of bug bounties - rewards for researchers who disclosed vulnerabilities to the company (with some level of proof of exploitability) rather than selling them on the open market.

The situation changed to some degree with the rise of Project Zero, a Google team dedicated to security research. This marked an interesting turning point in the security dialogue: until then, most research had been done by security-focused groups or companies, or by government agencies such as the NSA. Major tech companies had not yet begun to examine other companies' products, and any security research tended to stay within the same organization or remain a hobbyist activity for some personnel.

Project Zero implemented a hard 90-day deadline for bug reports: 90 days after filing the report, they would fully disclose the nature of the vulnerability.

The hard 90-day deadline has raised some controversy as well. When certain vulnerabilities were released only a couple of days before a standard patch cycle, there was significant outcry that the Project Zero folk ought to have waited until the patch was fully available before publishing the disclosure - that they should have given the vendor more time to address and patch the vulnerability.

There is some legitimate concern here: releasing full descriptions of vulnerabilities and proof-of-concept exploits (the PoCs that make vendors sit up, take notice, and regard the exploit as worthy of patching rather than ignorable) will be of more use to adversaries than to defenders. Adversaries, after all, can put such releases to almost immediate use, while defenders must wait until patches are available for deployment.

Except this logic is faulty for a number of reasons. First, those defending systems are able to take measures other than patching to mitigate problems in various application or OS components - there are many such measures available to a competent administrator, from adjusting firewalls to temporarily disabling applications. A full disclosure of the effects of a vulnerability thus helps these administrators prevent problems from any exploits that might be in active use.
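As a concrete illustration, here is a minimal sketch of such an interim mitigation, written in Python around standard Linux tooling. The service name and port are hypothetical placeholders, standing in for whatever component a disclosure actually identifies:

    #!/usr/bin/env python3
    """Interim mitigation sketch: stop a vulnerable service and block its
    port until a vendor patch is available. Run as root on Linux; the
    service name and port are hypothetical, not from any real advisory."""
    import subprocess

    VULN_SERVICE = "exampled"  # hypothetical vulnerable daemon
    VULN_PORT = "8443"         # hypothetical port named in the disclosure

    def run(cmd):
        print("->", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Stop the service and keep it from restarting at boot.
    run(["systemctl", "stop", VULN_SERVICE])
    run(["systemctl", "disable", VULN_SERVICE])

    # Belt and braces: drop inbound traffic to the affected port as well.
    run(["iptables", "-A", "INPUT", "-p", "tcp",
         "--dport", VULN_PORT, "-j", "DROP"])

Neither step fixes the underlying flaw, but both buy time - exactly the breathing room that a full disclosure makes it possible to claim.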

In the past, mitigations have been a first line of defense against malicious incidents and have often succeeded in stopping malicious activity. In the case of the Morris Worm, for instance, administrators who were able to analyze the worm's effects quickly realized that it spread via certain services only, and that the presence of a certain file prevented infection. Prompt sharing of information amongst affected administrators enabled others to put these measures into place, buying some breathing room to obtain and apply the necessary patches.

Second, as history has shown time and time again, allowing leeway invariably ends with longer and longer deadlines, and with promises made and missed by vendors. Even when months are allowed, vendors have no reason to address a vulnerability unless there is a clear and present danger to their business.

This was shown clearly by the '13 Snapchat debacle, the '12 Ning issue, and the '10 AT&T leakage, among a whole history of other instances.

Vendors, as the information security community has learned to its cost time and time again, cannot be trusted to responsibly remediate holes in their security unless they are facing a severe financial impact from failing to do so.

Thus, a firm and immutable deadline after which a vulnerability will be disclosed provides the best of both worlds. Vendors have a very clear timeline for addressing the vulnerability - and if the deadline happens to fall outside their normal patch day, they know they must either ship the fix in the patch cycle prior to that date or issue an out-of-band patch by that time. By applying the same immutable deadline to everyone, the organization issuing the disclosures ensures that every vendor is granted the same opportunity to remediate its vulnerabilities as its competitors.
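To make the arithmetic concrete, here is a small sketch showing how many scheduled patch days a 90-day window actually contains, assuming a Microsoft-style "second Tuesday of the month" cycle; the report date is purely illustrative:

    from datetime import date, timedelta

    def second_tuesday(year: int, month: int) -> date:
        """Second Tuesday of the given month - a 'Patch Tuesday'-style patch day."""
        first = date(year, month, 1)
        # weekday(): Monday == 0 ... Sunday == 6; Tuesday == 1
        offset = (1 - first.weekday()) % 7         # days until the first Tuesday
        return first + timedelta(days=offset + 7)  # one week later

    def patch_days_in_window(reported: date, deadline: date) -> list:
        """Scheduled patch days falling between report and disclosure deadline."""
        days, y, m = [], reported.year, reported.month
        while True:
            pd = second_tuesday(y, m)
            if pd > deadline:
                return days
            if pd >= reported:
                days.append(pd)
            y, m = (y + 1, 1) if m == 12 else (y, m + 1)

    reported = date(2015, 1, 5)               # illustrative report date
    deadline = reported + timedelta(days=90)  # hard 90-day window
    print(patch_days_in_window(reported, deadline))
    # Three patch days fall inside this window (Jan 13, Feb 10, Mar 10):
    # three in-band chances to ship, after which the fix must go out-of-band.

Three scheduled opportunities inside a single 90-day window is the point: the deadline does not force heroics, it forces a decision.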

This is the flip side of demands for responsible disclosure, after all: responsible disclosure, to be effective, absolutely requires responsible remediation on the part of vendors. Researchers will only choose to disclose to vendors if they can be reasonably assured that doing so produces a better outcome than fully disclosing to the world at large. With vendors delaying their remediation and demanding more and more time before disclosure, the risk of some other group finding the vulnerability goes up - and not all groups are interested in bug bounties or 'responsible disclosure' when they can turn a profit on the black market.

Firm deadlines are a compromise. They allow for responsible disclosure, and they enforce responsible remediation. Provided that each side behaves in a responsible manner, the benefit to the computing public is maintained.

Indeed, 90 days may be too long a period for responsible remediation, especially given the rapid development cycles that major companies have adopted. In many cases, even a single patch cycle might be too long, given the fast-moving nature of exploit development.

The pace of remediation is dictated, effectively, by the adversary: those who use software vulnerabilities in criminal enterprises, and thereby damage the customer, set the tempo of remediation. Vendors who wait until they have already been exploited before issuing remediation instructions have already lost; their customers are being harmed while the patch is still in development. Any advance notice before their faulty code comes under active exploitation is a win for them, as it gives them time to remediate before their customers are hurt.

Keeping vulnerabilities secret is not sustainable. If one researcher found and reported a vulnerability, nothing stops another researcher from finding and exploiting it, nor is there much to stop a malicious actor from breaking into the systems of the original researcher or (more likely) the vendor and learning about the vulnerability that way.

Rather than fighting for longer deadlines - which only increases the chances that the problem will be discovered by another researcher and put into active exploitation - vendors must instead prioritize remediation of reported issues, ensuring that researchers continue to report vulnerabilities to them rather than immediately disclosing them or, worse, selling them. If vendors want responsible disclosure, they must demonstrate that they are fully and totally committed to responsible remediation.