
Monday, February 8, 2016

Why do we need SSL VPNs today?

One question that has been bothering me for quite a while is: why do we need SSL VPNs? There is an IETF-standardized VPN type, IPSec, so why do SSL VPNs still get deployed? Why not just switch everything to IPSec? Moreover, since IPSec has been around since roughly 1998, why hasn't it taken over the whole VPN market? Note that I'll be using the term SSL even though it has today been replaced by Transport Layer Security (TLS), because the former is still widely used to describe this type of VPN.

These are valid questions, but depending on whom you ask you are very likely to get a different answer. I'll try to answer from an SSL VPN developer's standpoint.


In the VPN world there are two main types of deployment: 'site-to-site' and 'remote access'. To put it simply, the first is about securing links between two offices, and the latter is about securing the connection between your remote users and the office. The former may rely on a minimal PKI deployment or pre-shared keys, but the latter requires integration with a user database and credentials, as well as settings which may be applied individually for each user. In addition, the 'remote access' type is often associated with accounting, such as keeping track of how long a user has been connected, how much data has been transferred, and so on. That may remind you of the kind of accounting used in PPP and dial-up connections, and indeed the same RADIUS-based accounting methods are used for that purpose.

Both the 'site-to-site' and 'remote access' setups can be handled by either SSL or IPSec VPNs. However, there are some facts that make each type of VPN more suitable for one purpose than the other. In particular, SSL VPNs are generally considered more suitable for 'remote access', while IPSec is unquestionably the solution one would deploy for site-to-site connections. In the next paragraphs I focus on SSL VPNs and try to list their competitive advantages for 'remote access'.
  1. Application level. In SSL VPNs the software is at the application level, which means that it can be provided by the software distributor, or even by the administrator of the server. These VPN applications can be customized for the particular service the user connects to (e.g., include logos, adjust to the environment the user is used to, or even integrate VPN connectivity directly into an application). For example, the 12vpn.net VPN provider customizes the openconnect-gui application (which is free software) to ship with a pre-loaded list of the servers they offer to their customers. Several other proprietary solutions use a similar practice, where the server provides the software for the end users.
  2. Custom interfaces for authentication. Because (most) SSL VPNs run over HTTPS, they have the inherent ability to fully control the authentication interface shown to users. For example, in OpenConnect VPN we provide the client with XML forms that the user is presented with and must fill in, in order to authenticate. That typically covers password authentication, one-time passwords, group selection, and so on. Other SSL VPN solutions use entirely free-form HTML authentication and often only require a browser to log in to the network. Others integrate certificate issuing on the first user connection using SCEP, and so on.
  3. Enforcing a security policy. Another reason (which I don't quite like or endorse, but it happens quite often) is that the VPN client applications enforce a particular company-wide security policy; e.g., they ensure that anti-virus software is running and up to date prior to connecting to the company LAN. This is often implemented with server-provided executables being run by the clients, but that is a double-edged sword, as a VPN server compromise would allow a compromise of all the clients. In fact, the bypass of this "feature" was one of the driving reasons behind the openconnect client.
  4. Server-side user restrictions. On the server side the available freedom is comparable to the client side. Because SSL VPNs operate at the application layer, they are more flexible in how the connecting client can be restricted. For example, in OpenConnect VPN server individual users, or groups of them, can be placed into a specific kernel cgroup, i.e., limiting their available CPU time, or can be restricted to a fixed bandwidth, far more easily than in any IPSec server.
  5. Reliability, i.e., operation over any network. In my opinion, the major reason for the existence of SSL VPN applications and servers is that they can operate in any environment. You can be behind restrictive firewalls or on broken networks which block ESP or UDP packets and still be able to connect to your network. That is because the HTTPS protocol they rely on cannot be blocked without taking a major part of the Internet down with it. That's not something to overlook; a VPN service which works most of the time, but not always because the user is on some misconfigured network, is unreliable. Reliability is what you need when you want to communicate with colleagues while in the field, and that's the real problem SSL VPNs solve (and the main reason companies and IT administrators usually pay extra to have these features enabled). Furthermore, solutions like OpenConnect VPN utilize a combination of HTTPS (TCP) and UDP, when available, to provide the best possible user experience. The client uses Datagram TLS over UDP when it detects that network policy allows it (thus avoiding the TCP-over-TCP tunneling issues), and falls back to tunneling over HTTPS when the establishment of the DTLS channel is not possible; a rough sketch of that fallback logic is shown after this list.
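
To illustrate the last point, here is a minimal, hypothetical sketch of that probe-and-fallback decision in Python. It is not the OpenConnect implementation and does not speak real DTLS; the dtls_probe helper, the port numbers and the timeout value are assumptions, and a plain UDP echo probe merely stands in for the DTLS handshake. The point is only the control flow: try the UDP path first, and fall back to the already-established HTTPS (TCP) tunnel if no reply arrives.

```python
import socket
import ssl

UDP_PROBE_TIMEOUT = 3.0   # seconds; an assumed value, real clients keep retrying periodically

def dtls_probe(host: str, port: int) -> bool:
    """Stand-in for a DTLS handshake attempt: send a UDP datagram and wait for a reply.
    Returns True if the UDP path looks usable, False if it appears blocked or filtered."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(UDP_PROBE_TIMEOUT)
        try:
            s.sendto(b"probe", (host, port))
            s.recvfrom(2048)           # any answer means UDP is not filtered on this path
            return True
        except OSError:                # timeout, ICMP unreachable, ...
            return False

def open_tunnel(host: str, tcp_port: int = 443, udp_port: int = 443):
    """Always bring up the HTTPS (TCP) channel, then prefer UDP for data if the probe succeeds."""
    ctx = ssl.create_default_context()
    tcp_sock = socket.create_connection((host, tcp_port))
    https_tunnel = ctx.wrap_socket(tcp_sock, server_hostname=host)   # always available

    if dtls_probe(host, udp_port):
        transport = "udp"              # a real client would perform the DTLS handshake here
    else:
        transport = "https-fallback"   # tunnel packets over the existing TCP/TLS channel
    return https_tunnel, transport
```

In practice a client would keep re-trying the DTLS channel in the background rather than deciding once, but the one-shot decision above keeps the sketch short.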

That doesn't, of course, mean that IPSec VPNs are obsolete or not needed for remote access. We are far from that. IPSec VPNs are very well suited for site-to-site links --which are typically on networks under the full control of the deployer-- and they are cross-platform (if we ignore the IKEv1 vs IKEv2 issues), in the sense that you are very likely to find native servers and clients offered by the operating system. In addition, they possess a significant advantage: because they are integrated with the operating system's IP stack, they utilize the kernel for encryption, which removes the need for userspace-to-kernel-space switches. That allows them to serve high bandwidths while spending less CPU time. A kernel-side TLS stack would of course provide SSL VPNs a similar advantage, but that is currently work in progress.

As a bottom line, you should choose the best tool for the job at hand based on your requirements and network limitations. I made the case for SSL VPNs, and gave the reasons why I believe they are still widely deployed and will continue to be. If I have convinced you of the need for SSL VPNs, and you are an administrator working with VPN deployments, I'd like to refer you to my FOSDEM 2016 talk about the OpenConnect (SSL) VPN server, in which I describe the reasons I believe it provides a significant advantage over the existing solutions on Linux systems.


Wednesday, October 15, 2014

What about POODLE?

Yesterday POODLE was announced, a fancifully named new attack on the SSL 3.0 protocol, which relies on applications using a non-standard fallback mechanism, typically found in browsers. The attack takes advantage of:
  • a vulnerability in the CBC mode of SSL 3.0 which has been known for a decade
  • a non-standard fallback mechanism (often called the downgrade dance)
So the novel and crucial part of the attack is the exploitation of the non-standard fallback mechanism. What is that, you may ask? I'll try to explain it in the next paragraph. Note that in the following paragraphs I'll use the term SSL protocol to cover TLS as well, since TLS is simply a newer version of SSL.

The SSL protocol has a negotiation mechanism that wouldn't allow a fallback to SSL 3.0 between a client and a server that both support a newer variant (e.g., TLS 1.1). That mechanism detects modifications by man-in-the-middle attackers, and with it the POODLE attack would have been thwarted. However, a limited set of clients perform a custom protocol fallback, the downgrade dance, which is straightforward but insecure. That set of clients seems to be mostly the browsers; in order to negotiate an acceptable TLS version they follow something along these lines:
  1. Connect using the highest SSL version (e.g., TLS 1.2)
  2. If that fails set the version to be TLS 1.1 and reconnect
  3. ...
  4. ...until no newer options remain and SSL 3.0 is used.
That's a non-standard way to negotiate TLS and, as the POODLE attack demonstrates, it is insecure. An attacker can interrupt the first connections and make them look like failures, forcing a fallback to a weaker protocol. The good news is that it is mostly browsers that use this construct, and few other applications should be affected.
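
For the curious, here is a minimal Python sketch of what such a downgrade dance looks like. It is an illustration of the anti-pattern, not code from any browser; the timeout and the disabled certificate checks are shortcuts for brevity, and modern TLS stacks will typically refuse to negotiate SSL 3.0 at all.

```python
import socket
import ssl

# The insecure "downgrade dance": on *any* failure, retry with the next lower
# protocol version. An active attacker can reset the first attempts and thereby
# force the connection all the way down to SSL 3.0.
VERSIONS = [ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_1,
            ssl.TLSVersion.TLSv1, ssl.TLSVersion.SSLv3]

def downgrade_dance(host: str, port: int = 443) -> str:
    for ver in VERSIONS:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # illustration only
        ctx.verify_mode = ssl.CERT_NONE  # illustration only
        ctx.minimum_version = ver        # pin a single version per attempt
        ctx.maximum_version = ver
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.version()
        except (OSError, ssl.SSLError):
            continue                     # the fatal flaw: treat any failure as "try a lower version"
    raise ConnectionError("all protocol versions failed")
```

Contrast this with offering the full supported range in a single handshake, where the server picks the highest common version and tampering with that choice breaks the handshake.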

Why do browsers use this construct then? In their defence, there have been serious bugs in the standard SSL and TLS protocol negotiation implementations of widespread software. For example, when TLS 1.2 came out, we realized that our TLS 1.2-enabled client in GnuTLS couldn't connect to a large part of the Internet. A few large sites would refuse to talk to the GnuTLS client because it advertised TLS 1.2 as its highest supported protocol. The bug was on the server side: the server closed the connection when it encountered a protocol newer than its own, instead of negotiating its highest supported version (in accordance with the TLS protocol). It took a few years before TLS 1.2 was enabled by default in GnuTLS, and even then we had a hard time convincing users who encountered connection failures that it was a server bug. The truth is that users don't care whose bug it is; they will simply use software that just works.

A long time has passed since then (TLS 1.2 was published in 2008), and today almost all public servers follow the TLS protocol negotiation. So this may be the time for browsers to get rid of that relic of the past. Unfortunately, that isn't the case. The IETF TLS working group is now trying to standardize counter-measures for the browser negotiation trickery. Even though I have become more pragmatic since 2008, I believe that forcing counter-measures into every TLS implementation just because there used to be (or may still be) broken servers on the Internet not only prolongs the life of an insecure, out-of-protocol work-around, but also creates waste. That is, it turns TLS protocol implementations into a code dump filled with hacks and work-arounds, just because of a few broken implementations. As Florian Weimer puts it, all applications pay a tax of extra code, potentially introducing new bugs and, even scarier, potentially introducing more compatibility issues, just because some servers on the Internet have chosen not to follow the protocol.

Are there, however, any counter-measures that one can use to avoid the attack without introducing an additional fallback signalling mechanism? As previously mentioned, if you are using the SSL protocol the recommended way, no work-around is needed; you are safe. If for any reason you want to use the insecure non-standard protocol negotiation, make sure that no insecure protocols like SSL 3.0 are in the negotiated set, or, if disabling SSL 3.0 isn't an option, ensure that it is only allowed when negotiated as a fallback (e.g., offer TLS 1.0 + SSL 3.0, and only then accept SSL 3.0).
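
As a concrete illustration of the first recommendation, here is a minimal Python sketch of a client context that offers its full version range in one handshake and simply refuses to drop to SSL 3.0. It is a generic example of setting a protocol floor, not an excerpt from any particular application.

```python
import ssl

# One handshake, full version range offered, SSL 3.0 never acceptable.
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_SSLv3            # refuse SSL 3.0 outright
# On Python 3.7+ the same intent can be expressed as:
# ctx.minimum_version = ssl.TLSVersion.TLSv1
```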

In any case, this attack has provided the incentive to remove SSL 3.0 from public servers on the Internet. Given that, and its known vulnerabilities, SSL 3.0 will no longer be included by default in the upcoming GnuTLS 3.4.0.

[Last update 2014-10-20]

PS. There are some recommendations to work around the issue by using RC4 instead of a block cipher in SSL 3.0. That would defeat the current attack, but it closes one door by opening another; RC4 is a broken cipher and there are known attacks which recover plaintext from it.

Thursday, May 16, 2013

Salsa20 and UMAC in TLS

Lately, while implementing and deploying an SSL VPN server, I realized that even for a peer-to-peer connection the resources spent on encryption on the two ARM systems I used were quite excessive. These ARM processors do not have instructions to speed up AES and SHA1, and were spending most of their resources encrypting and authenticating the exchanged packets.

What can be done in such a case? The SSL VPN server uses DTLS, which runs over UDP and restricts the packet size to the path MTU (typically 1400 bytes if we want to avoid fragmentation and reassembly), and thus spends quite some resources on packetization of long data. Since the packet size cannot be changed, we could instead try to improve the encryption and authentication speed. Unfortunately, using a more lightweight cipher available in TLS, such as RC4, is not an option, as it is not available in DTLS (while TLS and DTLS mostly share the same set of ciphersuites, some ciphers like RC4 cannot be used in DTLS due to protocol constraints). Overall, we cannot do much with the algorithms currently defined for DTLS; we need to move outside the TLS protocol box.

Some time ago there was an EU-sponsored competition on stream ciphers (which are typically characterized by their performance), and Salsa20, one of the winners, was recently added to nettle (the library GnuTLS uses) by Simon Josefsson, who conceived the idea of adding such a fast stream cipher to TLS. While modifying GnuTLS to take advantage of Salsa20, I also considered moving away from HMAC (the slow message authentication mechanism TLS uses) and using the UMAC construction, which comes with a security proof and impressive performance. My initial attempt to port the UMAC reference code (which was not ideal code) motivated the author of nettle, Niels Moeller, to reimplement UMAC in a cleaner way. As a result, Salsa20 with UMAC is now included in nettle and is used by GnuTLS 3.2.0. The results are quite impressive.

The Salsa20 with UMAC96 ciphersuites were 2-3 times faster than any AES variant used in TLS, and outperformed even RC4-SHA1, the fastest ciphersuite defined in the TLS protocol. The results as seen on an Intel i3 are shown below (they are reproducible using gnutls-cli --benchmark-tls-ciphers). Note that SHA1 in the ciphersuite names means HMAC-SHA1, and Salsa20/12 is the variant of Salsa20 that was among the eSTREAM competition winners.

Performance on 1400-byte packets

  Ciphersuite          Mbyte/sec
  SALSA20-UMAC96          107.82
  SALSA20-SHA1             68.97
  SALSA20/12-UMAC96       130.13
  SALSA20/12-SHA1          77.01
  AES-128-CBC-SHA1         44.70
  AES-128-GCM              44.33
  RC4-SHA1                 61.14
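
For readers who want to reproduce a rough version of this kind of comparison outside of GnuTLS, the following Python sketch measures per-record encrypt-and-MAC throughput on ~1400-byte payloads. It is an assumption-laden stand-in: the cryptography package does not expose Salsa20 or UMAC, so ChaCha20 (a closely related stream cipher) and HMAC-SHA1 are used instead, and the numbers it prints will not match the table above.

```python
import os
import time

from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

PACKET = os.urandom(1408)     # roughly one 1400-byte record, padded to the AES block size
ROUNDS = 20000
AES_KEY = os.urandom(16)
CHACHA_KEY = os.urandom(32)
MAC_KEY = os.urandom(20)

def aes_cbc_hmac_sha1(data: bytes) -> None:
    # Encrypt-then-MAC with a fresh IV per record, approximating the per-record cost
    enc = Cipher(algorithms.AES(AES_KEY), modes.CBC(os.urandom(16))).encryptor()
    ct = enc.update(data) + enc.finalize()
    mac = hmac.HMAC(MAC_KEY, hashes.SHA1())
    mac.update(ct)
    mac.finalize()

def chacha20_hmac_sha1(data: bytes) -> None:
    # ChaCha20 stands in for Salsa20; a 16-byte per-record nonce replaces the CBC IV
    enc = Cipher(algorithms.ChaCha20(CHACHA_KEY, os.urandom(16)), mode=None).encryptor()
    ct = enc.update(data)
    mac = hmac.HMAC(MAC_KEY, hashes.SHA1())
    mac.update(ct)
    mac.finalize()

def bench(name, encrypt_one) -> None:
    start = time.perf_counter()
    for _ in range(ROUNDS):
        encrypt_one(PACKET)
    elapsed = time.perf_counter() - start
    print(f"{name:18s} {len(PACKET) * ROUNDS / elapsed / 1e6:8.2f} Mbyte/sec")

bench("AES-128-CBC-SHA1", aes_cbc_hmac_sha1)
bench("CHACHA20-SHA1", chacha20_hmac_sha1)
```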

The results as seen in an openconnect VPN transfer between two PCs, connected over 100-Mbit ethernet, are as follows.

Performance of a VPN transfer over ethernet

  Ciphersuite              Mbits/sec   CPU load (top)
  None (plain transfer)           94              8%
  SALSA20/12-UMAC96               89             57%
  AES-128-CBC-SHA1                86             76%

While the throughput difference between SALSA20 and AES-128-CBC isn't impressive (AES was already close to the line rate), the difference in the server's CPU load is significant.

Would such ciphersuites also be useful to a wider set of applications than VPNs? I believe the answer is positive, and not only for performance reasons. This year new attacks were devised against the AES-128-CBC-SHA1 and RC4-SHA1 ciphersuites in TLS that cannot be easily worked around. For AES-128-CBC-SHA1 there are some hacks that reduce the impact of the known attacks, but they are hacks, not a solution. As such, TLS would benefit from a new set of ciphersuites replacing the old ones with known issues. Moreover, even if we consider RC4 a viable solution today (which it is not), the DTLS protocol cannot take advantage of it, and datagram applications such as VPNs need to rely on the much slower AES-128-GCM.

So we see several advantages in this new list of ciphersuites, and for that reason, together with Simon Josefsson and Joachim Strombergson, we plan to propose to the IETF TLS Working Group the adoption of a set of Salsa20-based ciphersuites. We were asked by the WG chairs to present our work at the IETF 87 meeting in Berlin, and I plan to travel there to present our current Internet-Draft.

So, if you support defining these ciphersuites in TLS, we need your help. If you are an IETF participant, please join the TLS Working Group meeting and indicate your support. Also, if you have any feedback on the approach, or can suggest another area of work where this could be useful, please drop me a mail or leave a comment below mentioning your name and any affiliation.

Moreover, as things stand, a lightning trip to Berlin on these dates would cost at minimum 800 euros, including the IETF single-day registration. As this is not part of our day job, any contribution that would help partially cover those expenses is welcome.