The VB100 comparative and certification system is a regular independent comparison of anti-malware solutions.
Each test report combines the unique VB100 certification scheme with in-depth analysis of product performance across a range of measures.
The VB100 award is a certification of products which meet the basic standards required to be recognised as legitimate and properly functioning anti-malware solutions.
To display a VB100 logo, a product must detect, with its default settings, all of the malware samples on the current WildList, while generating no false positives when scanning a set of clean files.
All solutions are submitted for testing by their own developers. Full procedures of the certification scheme can be found on the procedures page, with detailed test methodology on the methodology page.
The full schedule of upcoming comparatives can be found on the VB100 Schedule page.
Any developers interested in submitting anti-malware products for review should contact the test team by email at firstname.lastname@example.org.
An archive of VB100 test results is available, allowing users to view the performance of a vendor or solution over time. While a single test can only offer a snapshot of a product's capabilities at one point in time, this long-term view can serve as a guide to how vendors are likely to perform in future. Listings of results by product can also be found in our test archives.
The VB100 comparative test report includes a range of additional data alongside the simple pass/fail of the certification programme. A summary of the main areas covered can be found below.
Each VB100 comparative review is run on a different platform, allowing us to measure the performance of solutions in a wide range of environments. Our main performance indicator uses a battery of standard, everyday activities that an ordinary user might perform. The time taken to complete this set of tasks with a security product in place is measured and compared against a baseline time recorded on an unprotected system, showing how much the protection slows down normal operations.
We continue to expand and refine our measures of scanning speed, file access overheads, system resource usage and more, to provide a complete picture of how solutions impact the systems they are protecting as well as the quality of the protection provided. Some of these measures are considered of more academic interest and are only included in our in-depth tests on server platforms, where detailed information on system impacts can be vital for sysadmins.
Full details of performance measures can be found in each comparative review.
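As a rough sketch, the slowdown comparison described above can be expressed as follows. The function name and the timing figures are illustrative assumptions, not values taken from an actual VB100 report:

```python
# Illustrative sketch (not VB100's actual tooling) of the overhead
# calculation: the same task set is timed on an unprotected baseline
# system and again with the security product installed, and the
# difference is expressed as a percentage of the baseline time.

def overhead_percent(baseline_seconds: float, protected_seconds: float) -> float:
    """Return how much slower the protected run was, as a percentage of baseline."""
    return (protected_seconds - baseline_seconds) / baseline_seconds * 100.0

# Hypothetical figures: 200 s to finish the task set unprotected,
# 250 s with a product in place -> a 25% slowdown.
print(overhead_percent(200.0, 250.0))  # 25.0
```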
Our comparatives now include a stability rating, based on how many bugs and issues were observed during the standard VB100 test process. Any errors or other problems encountered are rated on a severity scale and assigned a points value, and the points gathered by each product during the testing process are added up to produce an overall stability categorization.
Products which exhibit no bugs or errors at all throughout our high-stress test suite earn the coveted "Solid" rating. For more details on our stability rating system, see the VB100 Methodology page.
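The points-based stability scoring could be sketched roughly as below. The severity weights, thresholds and all category names other than "Solid" are illustrative assumptions for this sketch, not the actual VB100 values:

```python
# Hypothetical sketch of a severity-weighted stability score: each issue
# observed during testing carries a points value by severity, and the
# summed total maps to an overall category. Weights, thresholds and the
# non-"Solid" category names are invented for illustration.

SEVERITY_POINTS = {"minor": 1, "moderate": 3, "severe": 10}

def stability_rating(issues: list) -> str:
    """Map a list of observed issue severities to a stability category."""
    score = sum(SEVERITY_POINTS[severity] for severity in issues)
    if score == 0:
        return "Solid"   # no bugs or errors observed at all
    if score <= 5:
        return "Stable"
    if score <= 15:
        return "Fair"
    return "Flaky"

print(stability_rating([]))                     # Solid
print(stability_rating(["minor", "moderate"]))  # Stable
```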
The unique RAP (Reactive and Proactive) tests measure simple static detection rates using the freshest samples available at the time products are submitted to the test, as well as samples not seen until after product databases are frozen. This provides a measure of both the vendors' ability to handle newly emerging malware and their accuracy in detecting previously unknown malware.
The four-test RAP averages quadrant allows at-a-glance comparison of detection rates by these criteria. Full details of the RAP scheme can also be found on the VB100 Methodology page.
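As an illustration, a product's position on the RAP quadrant could be computed along these lines: the reactive score averages detection rates over the sample sets seen before the database freeze, and the proactive score is the rate on samples first seen afterwards. The function name and all figures are assumptions for this sketch:

```python
# Illustrative computation of a RAP quadrant point: the x-coordinate is
# the average detection rate across the pre-freeze (reactive) sample
# sets, the y-coordinate the detection rate on the post-freeze
# (proactive) set. Figures below are hypothetical.

def rap_point(reactive_rates: list, proactive_rate: float) -> tuple:
    """Return (reactive average, proactive rate) for plotting on the quadrant."""
    reactive_average = sum(reactive_rates) / len(reactive_rates)
    return (reactive_average, proactive_rate)

# Hypothetical detection rates (%) for three pre-freeze sets and one
# post-freeze set.
print(rap_point([92.0, 88.5, 85.0], 78.0))  # (88.5, 78.0)
```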