AV testing practices questioned

Posted by Virus Bulletin on Aug 14, 2007

Professional and amateur tests criticised.

Last week, IT industry commentator and renowned anti-anti-virus writer Robin Bloor released a typically inflammatory article alleging widespread corruption in anti-virus testing, suggesting that testing organisations within the industry are complicit in rigging test results to show AV products in a good light. This week, the results of a supposedly independent test, run live at a conference, have been widely criticised for inaccuracies and sloppy methodology.

Bloor's article, entitled 'Is AV product testing corrupt?', quotes an anonymous contact 'high up in one of the IT security companies' as being 'suspicious' of results released by highly respected testing centre AV-test.org, and goes on to quote at length from another anonymous source, discussing the problem of test collections containing samples of dubious provenance and uncertain relevance. The article suggests that AV companies, providing samples to testing bodies for inclusion in test sets, routinely bias results by sending items only detected by their own products, and that test sets are riddled with corrupt, non-functional or simply non-malware samples.

At the LinuxWorld Expo in San Francisco last week, open-source gateway software producer Untangle presented an 'anti-virus Fight Club', testing a selection of products live on stage against a set of 35 samples gathered from the expo audience as well as the organisation's own inboxes. Kaspersky, Norton and open-source ClamAV were found to detect 100% of the samples, while others did less well, including one which scored less than 6% overall.

Since the release of the results, several commentators have pointed out flaws in the running of the test, not least the small sample set, the comparison of wildly different product types and errors in the settings used. The company running the test, as a vendor of a product using ClamAV, also appears to have an interest in the results, and having made its test set freely available online, risks charges of distributing malware.

'Testing AV products is an enormously complex and difficult business,' said John Hawes, Technical Consultant at Virus Bulletin. 'Amateur tests always run the risk of producing erroneous results, due to lack of experience in designing proper test procedures, in operating a wide range of products and, most importantly, in creating and maintaining a malware collection. The serious testing organisations spend enormous amounts of time and energy ensuring sets contain only valid samples, a process which was the central focus of the recent testing symposium in Iceland. The likelihood that the massive collections used by bodies like AV-Test or AV-Comparatives could be biased by individual vendors providing samples favourable to their own products seems pretty remote.'

'Of course, these days there is a lot more to security software than the ability to detect known malware in on-demand scans,' continued Hawes. 'The problems presented by testing heuristic and behavioural detection, as well as properly comparing protection offered by multi-level suites, are currently being worked on with great urgency across the industry, and hopefully firm and comprehensive methodologies for a wide range of testing will be agreed on soon. The question remains as to whether the average user can be persuaded to understand and take an interest in more complex and detailed results, as opposed to the simple percentage scores which the popular press currently demands.'

Bloor's story is here and is also carried in The Register here. Details of Untangle's LinuxWorld test start here and continue over several pages. A McAfee blog entry detailing problems spotted with the Untangle test, and linking to several in-depth articles on the difficulties of testing, is here, with some less sober commentary from Randy Abrams of ESET here and further analysis from independent security researcher David Harley available (in PDF format) here.

The latest comparative review carried out by a magazine's own in-house lab, testing the suitability of products for use in governmental institutions and focussing mainly on usability, price and scanning speed rather than detection rates, is in Government Computer News here.



