There is a place for unauthenticated key exchange, but don't tell anyone

Posted by Virus Bulletin on Nov 21, 2013

Making dragnet surveillance harder justifies using a weaker form of encryption.

Discussions on how to make the Internet more secure have been going on ever since the first two computers were connected. Recently, however, Snowden's revelations about surveillance on a scale that was hitherto only imagined by the most paranoid have made some of these discussions more urgent.

One such discussion concerns HTTP and revolves around the question of whether in the next version of HTTP, traffic should be encrypted by default. As Trend Micro's Ben April points out, there are quite a few challenges that need to be considered.

Some of these concern authentication. In most cases, authentication of the key exchange is as important as encryption, perhaps even more important.

When I use my desktop PC to browse to my bank's website, the likelihood of someone intercepting that traffic isn't particularly great. Of course, the chance is still big enough that I don't want to use anything but secure end-to-end encryption - but someone listening in on the line isn't my biggest worry.

I am more concerned about being absolutely sure that I am actually connecting to my bank's website - and not to a site owned by a malicious third party that has been able to alter my traffic or my DNS requests.

That is why it is vital that I only connect using HTTPS. Of course, the system currently used to authenticate HTTPS connections, which relies on dozens of certificate authorities and root certificates hard-coded into browsers, has been much criticised in recent years, and a few high-profile cases (such as the hack at DigiNotar) suggest that this criticism is justified. But for anyone using HTTPS to pay a utility bill rather than to share top secret information, it probably works well enough.
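To make the distinction concrete, here is a minimal sketch using Python's standard ssl module (chosen for illustration; the blog post itself names no implementation). A default TLS context authenticates the server against the browser- or system-trusted root certificates; flipping two settings yields a connection that is still encrypted but will accept any certificate - including one presented by a man in the middle.

```python
import ssl

# A default context authenticates the key exchange: the server's
# certificate chain is checked against the trusted root CAs, and the
# hostname must match the certificate.
authenticated = ssl.create_default_context()
assert authenticated.verify_mode == ssl.CERT_REQUIRED
assert authenticated.check_hostname is True

# An encrypted-but-unauthenticated context still negotiates TLS and
# encrypts all traffic, but completes the handshake with any
# certificate whatsoever. (check_hostname must be disabled first,
# or setting CERT_NONE raises a ValueError.)
unauthenticated = ssl.create_default_context()
unauthenticated.check_hostname = False
unauthenticated.verify_mode = ssl.CERT_NONE
```

Both contexts defeat a passive eavesdropper; only the first defeats an active attacker sitting between client and server.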

But does that mean that encryption of network traffic without the connection being authenticated "adds little value", as Ben suggests in his blog post? Half a year ago I might have agreed. I don't anymore.

From the Snowden files, we know that intelligence agencies engage heavily in 'dragnet surveillance', where all the traffic they have access to is simply intercepted and stored. If the traffic isn't encrypted, as is the case with plain HTTP and the majority of SMTP, they can read it.

A lot of information sent over the Internet isn't particularly sensitive. Even in the most totalitarian of states I would want to be able to email my mother to say, yes, I am fine, and my, hasn't the weather been horrible lately? And I would still want to be able to visit Wikipedia's list of enclaves and exclaves.

But that doesn't mean we should make it easy for anyone to read and store that traffic - and that is why I am in favour of the next generation of HTTP using encryption by default, as I have also argued should be the case for SMTP.

However, there is one important caveat: it shouldn't in any way be presented to the end-user as more secure, and definitely not as content being sent encrypted. That should only be done for traffic that is properly authenticated (and, in the case of email, where end-to-end encryption rather than hop-by-hop encryption takes place).

Encryption using unauthenticated key exchange is the network equivalent of putting our letters in envelopes: it doesn't mean no-one can read them, but it does make it significantly harder for the postman to read the content of all the letters he carries.
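The envelope analogy can be made precise with a Diffie-Hellman key exchange, the classic way to agree on a secret over an open channel without any prior authentication. The sketch below uses deliberately toy parameters (the small prime and generator are assumptions for illustration only; real deployments use standardised groups of 2048 bits or more).

```python
import secrets

# Toy, insecure parameters - for illustration only.
p = 0xFFFFFFFB  # a small prime modulus (assumed toy value)
g = 5           # generator

# Each side picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)  # sent over the wire
B = pow(g, b, p)  # sent over the wire

# Both sides derive the same shared secret from the other's public value.
assert pow(B, a, p) == pow(A, b, p)
```

A passive eavesdropper who records A and B cannot feasibly compute the shared secret - that is the envelope. But because neither side authenticates the other, an active attacker can run this exchange separately with each party and decrypt everything in between, which is exactly why such a connection must never be presented to the user as secure.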

We should still strive to make all traffic both encrypted and authenticated. But since we don't want to break the Internet as it is, unauthenticated encryption is a sensible first step: it gives network traffic the very basic security Internet users ought to be able to expect - as long as we don't claim it is actually secure.

Update: This blog post was updated on 22 November to make it clear that the authentication concerns the exchange of public keys, not that of the actual data being transmitted. Thanks to Matthew Green for pointing this out.

Posted on 22 November 2013 by Martijn Grooten


