The VB100 award is granted to any product that meets the certification criteria under the test conditions in the VB lab as part of the formal VB comparative review process.
The basic requirements are that a product detects all malware listed as 'In the Wild' by the WildList Organization during the review period, and generates no false positives when scanning a set of clean files.
Various other tests are also carried out as part of the comparative review process, including speed and performance measurements, and results and conclusions are included with the review. The results of these secondary tests do not affect a product's qualification for VB100 certification.
Products must be submitted for review by their developers. Deadlines for product submission, along with details of platforms and the WildLists from which the test sets will be compiled, will be announced in advance of the test.
Submissions must be received by VB by the deadline set, with all required components, including updates to virus definitions and detection engines. All software should, where possible, be made available ready to install, update and run in an 'offline' situation, as some testing is performed in a sealed environment without access to external networks. The core certification tests are run with full internet access, using the latest updates available at the time each test is run, and with access to 'cloud' lookup systems where applicable. Vendors wishing to opt out of the offline components may do so by giving advance notice of that intention along with their submission.
Submissions are accepted in the form of web downloads (the preferred format), email attachments or hard copies sent by post or courier, as long as they arrive before the deadline. Full licences or activation codes are preferred where applicable, but trial versions are also acceptable.
Testing and certification are open to any product vendor. In our tests on desktop platforms, we accept up to one product per vendor free of charge. Additional products, and entries for our server tests (which include more detailed performance measurements), are subject to a testing fee.
By submitting a product for testing, vendors agree to have their product reviewed and analysed as VB sees fit, within the scope of the testing methodology presented below. Once a product has been accepted for testing, it may not be withdrawn from the review by the vendor. However, VB reserves the right to refuse to test any product without further explanation.
The requirements for VB100 certification are:
The WildList to be used for each test will be the latest available at the time of the test set deadline. This deadline will be communicated to potential participants, and publicized on the VB website, approximately two weeks prior to the deadline for each test. All samples in the WildList collection are verified and, where applicable, replicated from originals provided by the WildList Organization.
The WildList set is tested both on demand and on access. 'Detection' in this case is accepted if the product clearly marks a file as infected in its log or on-screen display, or denies access to it during on-access testing. If such logging or blocking is unavailable or deemed unusable for test purposes, deletion or attempted disinfection of samples will also be an accepted indicator of detection.
The collection of known-clean files includes the test sets used for speed measurements, and is subject to regular and unannounced updating and enlargement. A false positive will be counted if a product is considered to have clearly flagged a file as infected in its log or on-screen display.
A false positive will not be recorded if a file is labelled as something other than malware, for example as adware or as a legitimate item of software with potentially dangerous uses. All other alerts on clean files will be counted as false positives.
Each flag will be adjudged to mark either a detection or mere suspicion; there is no overlap between the two. Files flagged as detections will be counted as detections in the infected sets or as false positives in the clean sets; files flagged as merely suspicious will be counted as neither detections nor false positives.
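The counting rules above can be sketched as follows. This is purely illustrative: the label sets and function names are assumptions made for the sketch, not VB's actual tooling or the labels any real product emits.

```python
# Illustrative sketch of the detection / false-positive counting rules.
# The label sets below are hypothetical examples, not VB's real categories.
DETECTION_LABELS = {"infected", "malware"}          # adjudged as detections
SUSPICION_LABELS = {"suspicious"}                   # mere suspicion
EXEMPT_LABELS = {"adware", "potentially-unwanted"}  # non-malware labels

def adjudicate(label, in_clean_set):
    """Classify one flagged file as 'detection', 'false_positive' or None."""
    if label in SUSPICION_LABELS:
        # Mere suspicion counts as neither a detection nor a false positive.
        return None
    if in_clean_set:
        # Non-malware labels are not false positives; all other alerts on
        # clean files are counted as false positives.
        return None if label in EXEMPT_LABELS else "false_positive"
    # In the infected set, only detection flags count as detections.
    return "detection" if label in DETECTION_LABELS else None
```

Note that the same detection flag is counted differently depending on which set the file belongs to: a detection in the infected set, a false positive in the clean sets.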
Testing will take place at three distinct points during the testing period, using the latest updates available at each point. A single sample in the official set missed during any of these test runs is enough to deny a product certification; 100% detection must be maintained throughout the test. The clean set is divided into three parts, one of which is scanned on demand at each of the three test points; any false positive recorded in any of these scans will result in a product failing to qualify for a VB100 award.
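As an illustrative sketch (not VB's actual tooling), the certification rule just described can be expressed as a simple predicate over the three test runs:

```python
# Hypothetical sketch of the VB100 pass/fail rule; names are illustrative.
def vb100_certified(wildlist_misses, clean_false_positives):
    """wildlist_misses: three sets of missed WildList samples, one per
    test point; clean_false_positives: three false-positive counts, one
    per third of the clean set scanned at each point."""
    # 100% WildList detection must be maintained at all three points:
    # a single missed sample in any run denies certification.
    if any(missed for missed in wildlist_misses):
        return False
    # A single false positive in any of the three clean-set scans fails.
    if any(fp > 0 for fp in clean_false_positives):
        return False
    return True

# A product that misses one sample at the second test point fails:
print(vb100_certified([set(), {"W32/Example"}, set()], [0, 0, 0]))  # False
print(vb100_certified([set(), set(), set()], [0, 0, 0]))            # True
```

The sample name `W32/Example` is a placeholder, not a real WildList entry.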
Results of a range of additional tests will be included in comparative test reports, with the nature and design of these tests subject to change without notice. Detection tests may include testing with live internet access and the latest updates, or may be run offline with frozen updates in the case of retrospective tests. In the latter case, vendors may choose to withdraw from such tests, to accommodate products which require online connectivity to function properly. Speed and performance measures will be taken based on a range of metrics, depending on the requirements of individual platforms and product types, and the presentation and interpretation of the data gathered in these additional tests may be adjusted as appropriate. The results of these additional tests are provided for information only and do not affect certification.
A product's default installation settings will be used for all tests, with the following exceptions:
In the event of a sample appearing to be missed due to a file type not being scanned by default (a common occurrence with, for example, archives not being scanned on access by some products), such samples may be rescanned with altered settings to verify this, in order to inform VB readers of the cause of such misses. Some adjustments to the file types scanned may be made during parts of our speed tests, in order to present more informative comparative data. Any false positive raised as a result of such alterations to default settings will not count against a product for certification, but may be recorded in the review text.
Should the reviewer be unable to make a product function adequately, either wholly or in part, or should any event occur which appears to result from a problem with the installation or operation of a product, tests may be repeated up to a maximum of three times, on three different test machines, using a clean system image for each attempt.
Depending on the nature of the issue, the test team may contact the developers of the product for advice and guidance, and may provide additional information on the issue noted to help diagnose the root cause and find a fix or workaround. Where applicable, this process will be carried out prior to rerunning any affected part of the test suite.
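The retest procedure above can be sketched roughly as follows; the function and parameter names are hypothetical, chosen only to illustrate the retry-on-fresh-machine pattern described in the text:

```python
# Illustrative sketch of the retest procedure; not VB's actual tooling.
MAX_ATTEMPTS = 3  # tests may be repeated up to three times

def run_with_retries(run_test, restore_clean_image, machines):
    """Repeat a failing test on up to three different machines, restoring
    a clean system image on each machine before the attempt."""
    passed = False
    for machine in machines[:MAX_ATTEMPTS]:
        restore_clean_image(machine)  # fresh image for every attempt
        passed = run_test(machine)
        if passed:
            break
    return passed
```

A test that succeeds on the second machine stops there; a test still failing after three attempts is reported as a failure, at which point (as described above) the product's developers may be contacted before any affected part of the test suite is rerun.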
Each product submitted for testing will be described to some extent in the text of the comparative review published on the Virus Bulletin website, with attention paid to design, usability, features and other criteria considered by the reviewer to be of interest, as well as the product's performance in the various standard tests carried out.
For the purposes of this analysis, product settings may be adjusted and additional testing carried out at the discretion of the reviewer. Any comments thus made are the opinion of the reviewer at the time of the review.
Should any vendor have any queries concerning the results of the tests, they are encouraged to contact VB for clarification and further analysis where necessary. Please direct all testing-related messages to firstname.lastname@example.org.
A VB100 award means that a product has passed our core certification tests, no more and no less. The failure to attain a VB100 award is not a declaration that a product cannot provide adequate protection in the real world if configured and administered by a professional. VB urges any potential customer, when looking at the VB100 record of any software, not simply to consider passes and fails, but to read the small print in the reviews.