Monday, August 30, 2010

Vulnerability trends: how are companies really doing?



Quite a few security companies and organizations produce vulnerability databases, cataloguing bugs and reporting trends across the industry based on the data they compile. There is value in this exercise: looking at vulnerabilities across a range of companies and industries tells us which types of threats are most common and how they are distributed.

Unfortunately, the data behind these reports is often inaccurate or outdated to some degree. The truth is that maintaining an accurate and reliable database of this type of information is a significant challenge. We saw this reality play out most recently last week with the publication of the IBM X-Force® 2010 Mid-Year Trend and Risk Report. We questioned several surprising findings concerning Google’s vulnerability rate and response record, and after discussions with IBM we identified a number of errors with important implications for the report’s conclusions. IBM worked with us and promptly issued a correction to address the inaccuracies.

Google maintains a Product Security Response Team that prioritizes bug reports and coordinates their handling across the relevant engineering groups. Unsurprisingly, particular attention is paid to high-risk and critical vulnerabilities. For this reason, we were puzzled by the claim that 33% of critical and high-risk bugs uncovered in our services in the first half of 2010 were left unpatched. After investigating, we learned that the 33% figure referred to a single unpatched vulnerability out of a total of three. More importantly, the one item counted as unpatched had been labeled a security vulnerability only because of a terminology mix-up; it was not a security vulnerability at all. As a result, the true unpatched rate for these high-risk bugs is 0 out of 2, or 0%.
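
To make the arithmetic concrete, here is a minimal sketch in Python. The entries and field names are hypothetical, not taken from the report; the point is simply that with a pool this small, reclassifying a single item moves the unpatched rate from 33% to 0%.

    # Minimal sketch (hypothetical data): how one misclassified entry skews an
    # unpatched-rate statistic when the pool of vulnerabilities is this small.

    def unpatched_rate(entries):
        """Fraction of genuine security vulnerabilities that remain unpatched."""
        vulns = [e for e in entries if e["is_security_vuln"]]
        if not vulns:
            return 0.0
        return sum(1 for e in vulns if not e["patched"]) / len(vulns)

    # As originally tallied: three critical/high entries, one of them unpatched.
    entries = [
        {"id": "A", "is_security_vuln": True, "patched": True},
        {"id": "B", "is_security_vuln": True, "patched": True},
        {"id": "C", "is_security_vuln": True, "patched": False},  # terminology mix-up
    ]
    print(unpatched_rate(entries))   # 0.333... -> reported as "33% unpatched"

    # After correction: entry C was not a security vulnerability at all.
    entries[2]["is_security_vuln"] = False
    print(unpatched_rate(entries))   # 0.0 -> 0 unpatched out of 2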

How do these types of errors occur? Maintainers of vulnerability databases have a number of factors working against them:
  • Vendors disclose vulnerabilities in inconsistent formats and use different severity classifications, which makes it much harder to measure the total number of vulnerabilities attributable to a given vendor (see the normalization sketch after this list).
  • Assessing the severity, scope, and nature of a bug sometimes requires intimate knowledge of a product or technology, and this can lead to errors and misinterpretation.
  • Keeping the fix status updated for thousands of entries is no small task, and we’ve consistently seen long-fixed vulnerabilities marked as unfixed in a number of databases.
  • Not all compilers of vulnerability databases independently verify the bugs they find reported in other sources, so errors in one source can propagate to others.
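
As an illustration of the first point, the following Python sketch normalizes vendor-specific severity labels onto a single scale before counting them. The vendor names, labels, and mapping are hypothetical, chosen only to show why a common scale is needed before cross-vendor counts mean anything.

    # Minimal sketch (illustrative labels only): normalizing vendor-specific
    # severity classifications onto one common scale before counting.
    from collections import Counter

    # Hypothetical mapping; no vendor's actual scheme is implied.
    SEVERITY_MAP = {
        "critical": "high", "p1": "high", "important": "high",
        "moderate": "medium", "p2": "medium",
        "low": "low", "p3": "low",
    }

    def normalize(label):
        """Map a vendor-specific severity label to the common scale."""
        return SEVERITY_MAP.get(label.strip().lower(), "unclassified")

    advisories = [
        {"vendor": "VendorA", "severity": "Important"},
        {"vendor": "VendorB", "severity": "P1"},
        {"vendor": "VendorC", "severity": "Moderate"},
        {"vendor": "VendorD", "severity": "Sev-0"},  # unknown label: needs manual review
    ]
    print(Counter(normalize(a["severity"]) for a in advisories))
    # Counter({'high': 2, 'medium': 1, 'unclassified': 1})
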
To make these databases more useful for the industry and less likely to spread misinformation, we feel there must be more frequent collaboration between vendors and compilers. As a first step, database compilers should reach out to the vendors they plan to cover in order to devise a sustainable solution for both parties that allows for a more consistent flow of information. Another big improvement would be increased transparency on the part of the compilers: for example, including more of the underlying data, describing the methodology behind the data gathering, and adding caveat language that acknowledges the limitations of the presented data. We hope to see these common research practices employed more broadly to increase the quality and usefulness of vulnerability trend reports.