Signature updates vs BCP

2006-02-01

Aleksander Czarnowski

AVET Information and Network Security, Poland
Editor: Helen Martin

Abstract

Would you ever expect a well established anti-virus company to fail to provide you with signature updates? Would your company's security policy be ready to deal with such a situation?


Introduction

From time to time we witness events that seem so unlikely or unwanted that they almost defy belief. Occasionally, such an event occurs within the IT security business. This article has been inspired by events that have been described by many as 'unbelievable'.

The story

The story is short and simple: recently, a local AV vendor had some serious problems with producing signature updates for its product, and failed to update its scanning engine for as long as two weeks. (Unfortunately it seems that, at the time of writing, the problems have yet to be resolved and updates are still sporadic and infrequent.)

Would you ever expect a well established anti-virus company to fail to provide you with signature updates while the company was still operational? Many security officers probably did not. We have seen buyouts of anti-virus companies, and even bankruptcy in the past, but in these cases measures have usually been put in place to ensure continuity in malware protection for customers.

Broken or invalid signature updates are also something with which we are familiar, but this situation is something new and worrying – especially considering that the ie_xp_pfv_metafile [1] exploit was used widely and Microsoft security bulletins MS06-001 [2], MS06-002 [3] and MS06-003 [4] were all released during the period in which no updates were provided.

Learning from the past

This got me thinking about the past. Historically, security policies have been shaped by critical events. Consider, for example, corporate security policies dated prior to 2001. In how many would you find reference to scenarios involving terrorist attacks or BCP (business continuity planning)?

Despite the various mathematical models we use for risk analysis, we always seem to learn the hard way in the security area. So what can we learn from this story? I think the following are the most important points:

  • The failure of a safeguard may not always be the result of a direct, easily foreseen technical issue. Even risk management-driven security policy can be flawed simply due to incomplete threat and risk catalogues. This poses an even more important question: is risk management the right approach at all? After all, in evaluating risks and threats we rely partly on historical data, and if a particular event occurs very rarely, we may wrongly dismiss it.

  • The defence-in-depth strategy suggests that we should never rely on one safeguard to protect a particular asset. This may be tricky to implement in the case of malware protection as many organizations use a single product that operates at different levels of the network.

  • The use of a multiple-engine product won't necessarily provide continuity in malware protection if, for example, the vendor of that product encounters problems.

Some might say that the situation described above is unlikely to happen in the case of well-established vendors that operate worldwide (the 'big players'). Try telling that to Arthur Andersen or Enron shareholders.

We have to ask whether our security policies and BCPs are ready to deal with such a situation. It seems that deploying two products from different vendors, based on different scanning engines, could be a wise move.

The introduction of stack protection mechanisms and IDS/IPS systems might seem like a good solution too. But we could be very wrong – the MS06-001 [2] vulnerability, for example, is not based on a stack overflow – so we should remember that DEP and Microsoft's /GS compiler protection are not the final answer to system security. While it is easy to filter out well-known malicious web servers that host exploits, this is far from a complete solution – even in the case of this particular vulnerability. Not every vulnerability exploitation process is easy to detect using a signature-based approach, and even methods based on code emulation can run into serious problems. Along with Dave Aitel [5], I'm curious as to how IDS/IPS vendors will approach this problem.

So as you can see, we have entered the new year with new vulnerabilities and new challenges. I wonder what the maximum length of time is that an AV vendor can stay operational without providing updates. I hope that none of VB's readers will ever have to find out.
