Run as part of the VB100 comparative and certification programme, Virus Bulletin's unique RAP (Reactive And Proactive) tests measure simple static detection rates using the freshest samples available at the time products are submitted to the test, as well as samples not seen until after product databases are frozen. This gauges both the vendors' ability to keep pace with newly emerging malware and their accuracy in detecting previously unknown samples.
Below is the latest RAP aggregate quadrant, showing average scores for vendors that participated in more than one of the last four tests. More details of the RAP system and how the aggregate quadrant is compiled can be found below.
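The aggregation rule just described, averaging each vendor's scores across the last four tests and excluding vendors with only a single appearance, can be sketched as follows. All vendor names and score values here are invented for illustration; this is not Virus Bulletin's actual tooling.

```python
# Hypothetical sketch of how an aggregate RAP quadrant point could be
# derived: average each vendor's (reactive, proactive) percentages across
# the last four tests, keeping only vendors with more than one appearance.
# All data values below are invented for illustration.

scores = {
    # vendor: list of (reactive %, proactive %) pairs, one per test entered
    "VendorA": [(92.0, 80.0), (90.0, 78.0), (94.0, 83.0)],
    "VendorB": [(85.0, 70.0)],                      # only one test: excluded
    "VendorC": [(88.0, 75.0), (86.0, 77.0)],
}

def aggregate(results):
    """Mean reactive and mean proactive score for one vendor."""
    n = len(results)
    reactive = sum(r for r, _ in results) / n
    proactive = sum(p for _, p in results) / n
    return reactive, proactive

# Each surviving vendor becomes one point on the quadrant plot.
quadrant = {v: aggregate(rs) for v, rs in scores.items() if len(rs) > 1}
for vendor, (reactive, proactive) in sorted(quadrant.items()):
    print(f"{vendor}: reactive {reactive:.1f}%, proactive {proactive:.1f}%")
```

Each resulting (reactive, proactive) pair would correspond to one plotted point, with the quadrant axes dividing vendors by how they combine the two averages.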
The RAP system measures simple static detection rates, testing against common malware samples first seen by the VB lab team within ten days of running each stage of the test.
The "Reactive" measure is the average of three test runs against samples seen in the ten days before the test date, with products running their latest updates and with full access to any cloud-based resources and reputation systems. For the "Proactive" measure, products and updates are frozen, then run offline, without access to cloud systems, against samples first seen in the ten days following the freeze.
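The arithmetic behind the two measures is simple, and can be sketched as below. The function names and sample counts are illustrative assumptions, not Virus Bulletin's actual test harness.

```python
# Illustrative sketch of RAP score arithmetic (hypothetical data,
# not Virus Bulletin's actual tooling).

def detection_rate(detected, total):
    """Percentage of samples detected in one test run."""
    return 100.0 * detected / total

def reactive_score(run1, run2, run3):
    """Average of the three runs against samples seen in the ten days
    before the test date (products fully updated, cloud access allowed)."""
    return (run1 + run2 + run3) / 3

# Three reactive runs, e.g. 940, 910 and 890 detections out of 1000 samples:
reactive = reactive_score(
    detection_rate(940, 1000),
    detection_rate(910, 1000),
    detection_rate(890, 1000),
)

# One proactive run: products frozen and offline, tested against samples
# first seen in the ten days after the freeze.
proactive = detection_rate(820, 1000)

print(f"Reactive: {reactive:.1f}%  Proactive: {proactive:.1f}%")
```

The gap between the two numbers is what makes the test informative: a product that leans heavily on cloud lookups will typically show a larger drop from its reactive to its proactive score.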
The RAP test aims to give an indication of how well product developers are able to keep up with the incoming flood of new malware using their standard file detection methods (including heuristic rules), and should also give some idea as to how much different products rely on cloud-based systems to supplement client-side technologies.
All samples used in the test should be commonly shared between malware researchers and thus accessible to all participating vendors, so we expect generally high scores. The prevalence of samples selected for inclusion in the test is based on telemetry from multiple sources, including major anti-malware labs; in the near future we hope to replace or supplement the sets used for the RAP test with data from the Real-Time Threat List being developed by AMTSO.