Respecting the testing

2006-09-01

John Hawes

Virus Bulletin, UK
Editor: Helen Martin

Abstract

'Competition for good test results, and so for respect, trust and strong sales, feeds development and innovation.' John Hawes, Virus Bulletin, UK


Last month, ConsumerReports.org (CR), the online arm of the US consumer organization Consumers Union, proudly announced to the world its decision to create 5,500 virus 'variants' as part of an extensive test of anti-virus products. No details were provided as to what these were variants of, how they were created or verified, or how they were put to use in the tests. All that is known is that the malware was provided by ISE (Independent Security Evaluators), whose president, Avi Rubin, has disassociated himself from the tests.

The initial reaction within the AV industry was a slow, sad shaking of heads and raised eyebrows of disbelief. Sensible voices pointed out the flawed methodology of the tests, and the availability of similar results from specialized and respected test centres that run retrospective testing. These organizations focus on the same issues as those raised by CR, but they use existing, real-world viruses. Pessimistic members of the AV community feared the escape of the new variants into the wild, and pondered the legal implications for their creators should they cause any damage. Analysts complained about the volume of extra work that looms once the fat chunk of new malware finally reaches their desks for inspection and identity creation (one of the few statements issued by CR suggested that handing the malware over to industry experts would be 'a good idea'). Minds were cast back to similar scandals of the past: the tests carried out by CNET using the Rosenthal 'simulated' viruses, and the infamous University of Calgary 'virus-writing' course.

Since then, the issue has mushroomed into a fizzing cloud of counter-accusations. The backlash against the virus creators' critics has mainly taken the form of accusations that the AV industry hides its products' reliance on signature-based detection and hypes the efficacy of heuristic methods (of course, the old 'they-write-all-the-viruses-themselves' chestnut has cropped up here and there too). 'They're telling you they have all this heuristic capability, but the best they can do is 50 per cent. That's nothing; that's terrible.' So cried Peter Firstbrook, head of the cyber security division of marketing palmist Gartner, fresh from compiling the highly lucrative 'magic quadrant' report on the AV industry. User faith in virus protection is being battered by a hail of abuse.

Testing is important. Competition for good test results, and so for respect, trust and strong sales, feeds development and innovation. Flashy logos and catchy slogans may capture the eye and the ear, but without the credibility conferred by proven effectiveness, no product can hope to thrive. Dissemination of test results also helps users, allowing them to judge the performance of their product and to demand better where it is available. To achieve these goals, testing must be credible: it must be transparent, verifiable and accountable.

VB, along with several other specialized and recognized testing organizations, provides a vital service, both to those within the industry and to their customers, ensuring that security software performs as well as it can. To provide these services, the testing organizations rely on the community they serve, and abide by its ideologies and beliefs. One of the most strongly held convictions throughout the industry is that creating viruses is never justified. Those who do so are forever beyond the pale, barred by their fellows from employment, as from respect and trust. In conniving in the creation of viruses, CR has damaged its credibility as surely as if it had been involved in any other criminal activity.

Here at VB we hope, in the near future, to improve and expand our own testing to include an ever-growing variety of threats: with spyware on the horizon, the shadowy territory of rootkits and the legal minefield of adware lie ahead. We would like, at some point, to incorporate an element of retrospective testing into our service. In order to do all this successfully and properly, we rely on the trust and respect of our readers and of the industry we study and report on, and we will certainly not be hiring any virus writers.
