The following FAQs are designed to help you interpret VB100 certification results and to give you insight into how the VB100 certification programme is set up and runs.
Any product that meets the VB100 certification criteria can be assumed to reliably detect the most common varieties of threats, and to do so with few false alarms.
To understand the figures, consider our testing process. We expose each tested product to various threats and non-threats to measure its malware detection capabilities, along with its ability to avoid false alarms. These test cases are assigned to three case sets: the Certification Set and the Diversity Set contain threats; the Clean Set contains non-threat cases.
In the context of VB100, 'static detection' means that we don’t actually execute malicious or clean test cases, but rather we 'show them' to the tested products. While this has several benefits, the coverage such testing provides is limited.
In practical terms, VB100 static testing offers greater statistical power than dynamic tests, because it allows far more samples to be evaluated.
Simply put, the better the approximation of a real-world infection chain, the more resource-intensive each test case becomes. This often proves to be a limiting factor in the number of test cases a lab can evaluate.
By focusing on the static detection layer only, VB100 can evaluate a much greater number of test cases and therefore provide better resolution. This means that blind chance plays a lesser role than it would with fewer samples. Resolution is particularly important for false positives, which generally represent well under 0.1% of the cases. With just 100 test cases, you are almost guaranteed to miss any false positives. At 100,000 test cases, not only do we have a good chance of detecting false positives, but we can reliably tell the difference between a product that generates a prohibitively high 1% false positive rate (that is, 1 out of 100 programs on average triggering a false alarm) and one that has a more modest 0.01% rate (1 out of 10,000).
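The arithmetic above can be sketched in a few lines. This is an illustrative calculation only, not part of the VB100 methodology: it assumes clean-set cases behave as independent trials with a fixed per-case false-positive rate, so the chance of seeing at least one false alarm in n cases is 1 − (1 − p)^n.

```python
def p_observe_fp(fp_rate: float, n_samples: int) -> float:
    """Probability of observing at least one false positive in n_samples
    clean cases, assuming independent cases with a fixed per-case rate."""
    return 1.0 - (1.0 - fp_rate) ** n_samples

# A 0.01% FP rate is almost invisible with only 100 clean samples...
small_set = p_observe_fp(0.0001, 100)        # roughly a 1% chance of seeing it
# ...but virtually certain to show up across 100,000 samples.
large_set = p_observe_fp(0.0001, 100_000)    # well above 99%

# Expected false-alarm counts cleanly separate a 1% product from a 0.01% one:
expected_bad = 0.01 * 100_000       # about 1,000 expected false alarms
expected_good = 0.0001 * 100_000    # about 10 expected false alarms
```

At 100,000 cases the expected counts (around 1,000 versus around 10) are far enough apart that random variation cannot blur the two products together, which is what "resolution" means here.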
Since VB100 focuses only on static detection of Windows executables (PE files), the coverage it provides is limited to that single detection layer. Many products employ several additional advanced technologies that simply won’t kick in during the VB100 test process, such as URL reputation, exploit protection, behavioural/runtime analysis, sandboxing, contextual analysis, and many more.
While not fully comprehensive, we believe that the layer VB100 covers – static detection – is the most important one. This is because the vast majority of users face the common, 'garden variety' of threats, and because virtually all endpoint security products will detect common threats statically, without invoking more advanced security features.
In a certain way, VB100 will only give you part of the story; however, it is the part of the story that we think is most relevant in the average case.
We encourage you to read the test reports from other labs as well. AMTSO – an industry organization for good anti-malware testing – is a great starting point. Since all tests are an opinionated approximation of real life, the best you can do is synthesize information from multiple sources to form the most complete picture you can.
No, you shouldn’t do that, because VB100 is not designed to tell you if Product A or Product B performs better. Such comparative tests require certain guarantees (for example, concurrent evaluation of test cases) that VB100 does not provide.
Our testing methodology is documented in detail on the VB100 Methodology page. It’s a bit of a dry read, but ultimately this is the most complete guide you can find to VB100.
We don’t: VB100 testing is voluntary, and is paid for by the vendor of the tested product.
Any conflict of interest here is resolved with ease: the test ethics framework always prevails, because our ultimate responsibility is to the consumer of the report.
No. Any test that starts out as a public test must be followed through. The test report is published, regardless of whether or not it’s favourable for the vendor.
Under certain circumstances, we might refrain from publishing a test report if we believe that the data we collected is not relevant to the reader, e.g. it is unsound (tainted by technical issues, operator errors, etc.) or it was produced under circumstances that didn’t give the vendor a fair chance to succeed. We carefully consider the public interest in such cases, and if we invalidate the test results, we aim to repeat the test as soon as possible.
We don’t, because we believe that, in all but exceptional cases, sound and well-founded testing requires the vendor's participation in the process, and that participation is quite difficult to secure if you are testing a product against the vendor’s will.
Absolutely, email us at [email protected].