Thursday, May 16, 2013

Salsa20 and UMAC in TLS

Lately, while implementing and deploying an SSL VPN server, I realized that even for a peer-to-peer connection the resources spent on encryption on the two ARM systems I used were quite excessive. These ARM processors have no instructions to speed up AES and SHA1, and were spending most of their cycles encrypting and authenticating the exchanged packets.

What can be done in such a case? The SSL VPN server used DTLS, which runs over UDP and restricts the packet size to the path MTU (typically around 1400 bytes if fragmentation and reassembly are to be avoided), and thus spends quite some resources on packetizing long data. Since the packet size cannot be changed, the remaining option is to improve the encryption and authentication speed. Unfortunately, switching to a more lightweight cipher available in TLS, such as RC4, is not an option, as RC4 is not available in DTLS (while TLS and DTLS mostly share the same set of ciphersuites, some ciphers such as RC4 cannot be used in DTLS due to protocol constraints). Overall, we cannot do much with the algorithms currently defined for DTLS; we need to move outside the TLS protocol box.

Some time ago there was an EU-sponsored competition on stream ciphers (which are typically characterized by their performance), and Salsa20, one of the winners, was recently added to nettle (the library GnuTLS uses) by Simon Josefsson, who conceived the idea of adding such a fast stream cipher to TLS. While modifying GnuTLS to take advantage of Salsa20, I also considered moving away from HMAC (the slow message authentication mechanism TLS uses) to the UMAC construction, which comes with a security proof and impressive performance. My initial attempt to port the UMAC reference code (which was not ideal code) motivated the author of nettle, Niels Moeller, to reimplement UMAC in a cleaner way. As a result, Salsa20 and UMAC are now included in nettle and are used by GnuTLS 3.2.0. The results are quite impressive.
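To give a sense of how this looks from the application side, here is a minimal sketch of preferring the new ciphersuites in a GnuTLS client. The priority-string tokens (ESTREAM-SALSA20-256 and UMAC-96) are my assumption of the names used around the 3.2.x releases; check the output of gnutls-cli -l on your build before relying on them.

#include <stdio.h>
#include <gnutls/gnutls.h>

/* Sketch: prefer the Salsa20/UMAC ciphersuites on an already-initialized
 * session. The token names below are assumptions; verify them against
 * `gnutls-cli -l` for the gnutls version in use. */
static int prefer_salsa20(gnutls_session_t session)
{
        const char *err = NULL;
        int ret = gnutls_priority_set_direct(session,
                        "NORMAL:+ESTREAM-SALSA20-256:+UMAC-96", &err);
        if (ret < 0)
                fprintf(stderr, "priority string error at: %s\n",
                        err ? err : "unknown");
        return ret;
}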

The Salsa20 with UMAC96 ciphersuites were 2-3 times faster than any AES variant used in TLS, and outperformed even RC4-SHA1, the fastest ciphersuite defined in the TLS protocol. The results, as measured on an Intel i3, are shown below (they are reproducible using gnutls-cli --benchmark-tls-ciphers). Note that SHA1 in the ciphersuite names means HMAC-SHA1, and Salsa20/12 is the reduced-round variant of Salsa20 that was among the eSTREAM competition winners.

Performance on 1400-byte packets
  Ciphersuite          Mbyte/sec
  SALSA20-UMAC96          107.82
  SALSA20-SHA1             68.97
  SALSA20/12-UMAC96       130.13
  SALSA20/12-SHA1          77.01
  AES-128-CBC-SHA1         44.70
  AES-128-GCM              44.33
  RC4-SHA1                 61.14

The results as seen on the openconnect VPN performance on two PCs, connected over a 100-Mbit ethernet, are as follows.

Performance of a VPN transfer over ethernet
  Ciphersuite             Mbits/sec   CPU load (top)
  None (plain transfer)          94            8%
  SALSA20/12-UMAC96              89           57%
  AES-128-CBC-SHA1               86           76%

While the throughput difference between SALSA20 and AES-128-CBC isn't impressive (AES already came close to filling the link), the difference in server CPU load is significant.

Would such ciphersuites also be useful to a wider set of applications than VPNs? I believe the answer is positive, and not only for performance reasons. This year new attacks were devised against the AES-128-CBC-SHA1 and RC4-SHA1 ciphersuites in TLS that cannot be easily worked around. For AES-128-CBC-SHA1 there are some hacks that reduce the impact of the known attacks, but they are hacks, not a solution. As such, TLS would benefit from a new set of ciphersuites to replace the old ones with known issues. Moreover, even if we considered RC4 a viable solution today (it is not), the DTLS protocol cannot take advantage of it, and datagram applications such as VPNs need to rely on the much slower AES-128-GCM.

So we see several advantages in this new set of ciphersuites, and for that reason, together with Simon Josefsson and Joachim Strombergson, we plan to propose the adoption of a set of Salsa20-based ciphersuites to the IETF TLS Working Group. We were asked by the WG chairs to present our work at the IETF 87 meeting in Berlin, so I plan to travel there to present our current Internet-Draft.

If you support defining these ciphersuites in TLS, we need your help. If you are an IETF participant, please join the TLS Working Group meeting and indicate your support. Also, if you have any feedback on the approach, or can suggest another area where this work would be useful, please drop me a mail or leave a comment below, mentioning your name and any affiliation.

Moreover, as things stand, a lightning trip to Berlin on those dates would cost at minimum 800 euros, including the IETF single-day registration. As this is not part of our day job, any contribution that would help partially cover those expenses is welcome.

Tuesday, March 26, 2013

The perils of LGPLv3

LGPLv3 is the latest version of the GNU Lesser General Public License. It follows the successful LGPLv2.1 license, and was released by the Free Software Foundation as a counterpart to its GNU General Public License version 3.

The goal of the GNU Lesser General Public Licenses is to provide software that can be used by both proprietary and free software. This goal has been handled successfully so far by LGPLv2.1, and there is a multitude of libraries using that license. Now we have LGPLv3 as the latest version, and the question is: how successful is LGPLv3 at this goal?

In my opinion, not very. If we assume that its primary goal is to be usable by free software, then it blatantly fails. LGPLv3 has serious issues when combined with free software licenses, especially with the GNU GPL version 2: projects under the GPLv2 license violate its terms if they use an LGPLv3 library (because LGPLv3 adds additional restrictions).

What does the FSF suggest in that case? Upgrading the GPLv2 projects to GPLv3. That is a viable solution, if you actually can and want to upgrade to GPLv3. At the GnuTLS project, after we switched to LGPLv3, we realized that in practice we had forced all of our GPLv2 (or later) users to distribute their binaries under GPLv3. Moreover, we also realized that several GPLv2-only projects (i.e., projects that did not have the "or later" clause in their license) could no longer use the library at all.

The same incompatibility exists for LGPLv2.1 projects that want to use an LGPLv3 library: they must be upgraded to LGPLv3.

In discussions within the FSF, Stallman had suggested dual-licensing libraries under "LGPLv3 or GPLv2" to overcome these issues. That suggestion, although it does not solve the issue with LGPLv2.1 libraries, is not a bad one, but it has a major drawback: a project under the dual LGPLv3-or-GPLv2 license cannot re-use code from other LGPLv3 projects, nor from GPLv2 projects, creating a rather awkward situation for the project.

So it seems we have a "lesser" kind of license for libraries that dictates to free software authors the license they should release their projects under. I find that unacceptable for such a license. It seems to me that with this license, the FSF is simply asking you not to use LGPLv3 for your libraries.

So, OK, LGPLv3 has issues with free software licenses... but how does it work with proprietary projects? Surprisingly, better than with free software licenses: it doesn't require them to change their license.

Nevertheless, there is a catch. My understanding of LGPLv3 (it is one of the hardest licenses to read, as it requires you to first read the GPLv3, remove some of its clauses and then add some additional ones) is that it contains the anti-tivoization clause of GPLv3. That is, in a home appliance, if the creator of the appliance has an upgrade mechanism, he has to provide a way to upgrade the library. That doesn't really make sense to me, neither as a consumer nor as someone who potentially creates software for appliances.

As a consumer, why would I consider the ability to upgrade only the (say) libz library on my router a major advantage? You may say that I have additional privileges (more freedom) on my router. Perhaps, but my concern is that this option will hardly ever materialize. Will an appliance maker choose a "lesser" GPL library when this clause is present? Will he spend the additional time to create a special upgrade mechanism for a single library to satisfy the LGPLv3 license, or will he choose a non-LGPLv3 library? (Remember that, according to the FSF, the goal of the LGPL licenses is to be used in software where proprietary or free-of-charge alternatives exist.)


So overall, it seems to me that LGPLv3 has so many practical issues that its advantages (e.g., the patent clauses) become irrelevant, and I don't plan to use it as the license of my libraries. I'm back to the good old LGPLv2.1.

Tuesday, February 5, 2013

Time is money (in CBC ciphersuites)

While protocols are not always nicely written, deviating from them has a big disadvantage: you cannot blame someone else if there is a problem. It has a small advantage, though; you avoid monoculture, and an attack that is applicable to the protocol may not be applicable to you. What happened in the following story is something in between.

But let's start from the beginning. A few months ago I received a paper from Nadhem Alfardan and Kenny Paterson about a new timing attack on TLS CBC ciphersuites. That paper went into my low-priority queue, since I'm not that impressed by timing attacks: they are nicely done in the lab, but fall short in the field. Nevertheless, at some point this attack got my attention, because it claimed it could recover a few bits of the plaintext when using GnuTLS over ethernet.

Let's see what the actual scenario is, though. For the attack to work, the client must operate as follows: it connects to a server and sends some data (which will be encrypted); the attacker intercepts the data and terminates the client's connection abnormally; the client then reconnects and resends the same data, again and again.

That is not the typical TLS usage scenario, but it is not so unrealistic either (think of a script that does exactly that). The main idea of the attack is that the attacker described above can distinguish between different padding values of TLS messages using the timing between receipt of a message and the server's reply (which may just be the TCP tear-down messages).

How is that done against GnuTLS? The attacker takes the input ciphertext and forms a message of 20 AES blocks. He then takes the part of the message he is interested in and copies the matching block and its predecessor as the last two encrypted blocks (the ones that contain the MAC and padding).

Then, on every new client connection, he XORs the last byte of the penultimate block with a byte of his choice (call it Delta, to be consistent with the paper). On every client reconnection he repeats this using a different Delta, and waits for the server response (a TLS server notifies the client of a decryption failure).
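In code form, the manipulation described above is roughly the following. This is a sketch based on the description, not code from the paper; the 16-byte block size corresponds to AES, the captured record is assumed to hold at least 20 blocks, and the filler blocks merely pad the forged record to 20 blocks.

#include <stdint.h>
#include <string.h>

/* Build a 20-block forged record: 18 filler blocks, then the predecessor of
 * the target block, then the target block itself. Delta is XORed into the
 * last byte of the penultimate block, so after CBC decryption the last
 * plaintext byte becomes P_i ^ Delta. */
static void forge_record(uint8_t forged[20 * 16], const uint8_t *captured,
                         size_t target_block, uint8_t delta)
{
        memcpy(forged, captured, 18 * 16);                      /* filler        */
        memcpy(forged + 18 * 16,
               captured + (target_block - 1) * 16, 16);         /* C_{i-1}       */
        memcpy(forged + 19 * 16,
               captured + target_block * 16, 16);               /* C_i (target)  */
        forged[19 * 16 - 1] ^= delta;   /* last byte of the penultimate block */
}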

In the paper they plot a diagram showing that certain values of Delta require different amounts of time to receive a reply. I re-implemented the attack for verification, and the measurements on the server can be seen in the following diagram. Note that the last plaintext byte is zero (also note that some attack results in the paper are due to a bug the authors discovered, which is now fixed; the discussion here refers to the fixed version of GnuTLS).
Figure 1. Median server timings for AES-CBC-SHA1, for a varying Delta, on GnuTLS 3.1.6 or earlier (view on the server)
It is clear that different values of Delta cause different response times. What we see in these timing results are the differences due to the amount of data being fed to the hash algorithm: in GnuTLS, the bigger the Delta, the larger the pad (remember that the last byte of plaintext was zero), and the less time is spent hashing data.

The small increase in processing time seen within each block is the correctness check of the pad (which takes more time as the pad increases). On the particular CPU I used, the differences per block seem to be around 200 nanoseconds, and the difference between the slowest and the fastest block is around 1 microsecond.

The question is, can an attacker notice such small differences over ethernet? In the paper the authors claim yes (and present some nice figures). I repeated the test, not over ethernet but over unix domain sockets, i.e., with only the delays added by kernel processing. Let's see the result.

Figure 2. Median server timings for AES-CBC-SHA1, for a varying Delta, on GnuTLS 3.1.6 or earlier (attacker view)

Although it is noisier, the pattern is still visible. Could we prevent that pattern from being visible? Let's first follow the RFC's padding removal precisely and check again.

Figure 3. Median server timings for AES-CBC-SHA1, for a varying Delta, on GnuTLS 3.1.6 with the TLS-conformant padding check applied (view on the server)

That's much better; the obvious patterns have disappeared. However, there is no happy end yet. In the same paper the authors present another attack, on OpenSSL, which uses the above padding removal method. Surprisingly, that attack can recover more data than before (though at a slower pace). That is the full plaintext recovery attack of the paper, and it relies on the fact that TLS 1.2 suggests assuming 0 bytes of padding when the padding check fails. Zero bytes of padding is an invalid value (padding in TLS is from 1 to 256 bytes) and thus cannot be set by a legitimate message.

This can be exploited: invalid padding can then be distinguished from valid padding by selecting the message size so that, if a zero pad is assumed, the additional byte that gets hashed results in one more full block being processed by the hash algorithm. That is, we would expect less processing to occur when a correct pad is guessed. (For HMAC-SHA1 the compression function operates on 64-byte blocks, and the final block must also hold at least 9 bytes of the hash's internal padding, so one byte of input more or less can decide whether an extra compression call is made.) Let's visualize it with an example.

Figure 4. Median server timings for AES-CBC-SHA1, for a varying Delta, on a TLS compliant implementation

Notice the lonely point on the bottom left: when Delta is zero the processing is about 200 nanoseconds faster. For the selected message, the zero Delta resulted in a correct pad. How to fix that? That is pretty tricky.

In order to remove this pattern, the code doing the CBC pad removal has to be aware of the internal block size of the hash, as well as of any internal padding used by the hash. In a typical layered implementation (or at least in GnuTLS) that isn't easy; the internal hash block size simply wasn't available, because one shouldn't need to know it. The paper suggests a fix that assumes a block size of 64, which is correct for all the HMAC algorithms in TLS, but that isn't future-proof (e.g., SHA-3 hasn't got the same block size).

So what to do there? The fix in GnuTLS is now similar to the hack described in the paper: the hash block size and knowledge of the internal padding are taken into account to achieve a fairly uniform processing time (see below). Any remaining differences are now at the level of tens of nanoseconds.
Figure 5. Median server timings for AES-CBC-SHA1, for a varying Delta, after the work-around is applied
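To make the first half of the problem concrete, the sketch below shows a branch-free padding check that always inspects the same number of bytes, whatever the pad value. This is an illustration of the general idea under my own simplifications, not the GnuTLS code: it leaves out the MAC check and deliberately ignores the harder part discussed above, equalizing the amount of data fed to the hash.

#include <stdint.h>
#include <stddef.h>

/* All-ones if a <= b, zero otherwise, without branching (a, b < 2^31). */
static uint32_t ct_le(uint32_t a, uint32_t b)
{
        return 0u - ((a - b - 1u) >> 31);
}

/* Branch-free TLS CBC padding check: the work done does not depend on the
 * pad value found in the record. Returns 1 for well-formed padding.
 * MAC verification and hash-length equalization are intentionally omitted. */
static unsigned ct_check_padding(const uint8_t *rec, size_t rec_len)
{
        uint32_t pad = rec[rec_len - 1];
        uint32_t ok = ct_le(pad + 1, (uint32_t)rec_len);        /* pad must fit in the record */
        uint32_t scan = rec_len < 256 ? (uint32_t)rec_len : 256;
        uint32_t i;

        for (i = 1; i < scan; i++) {
                uint32_t in_pad = ct_le(i, pad);                /* is byte i part of the padding? */
                uint32_t match = 0u - (uint32_t)(rec[rec_len - 1 - i] == pad);
                ok &= ~in_pad | match;                          /* padding bytes must equal pad */
        }
        return ok & 1;
}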


Now we're very close to a happy end. What is important, though, is to prevent a similar attack from occurring next year. This isn't a new attack; even the TLS 1.1 protocol acknowledges this timing channel, but does nothing to solve it. TLS implementations now need 2-3 pages of code just to remove the CBC padding, and that makes it clear that the TLS spec is broken in this respect and needs to be updated.

Together with Alfredo Pironti we plan to propose a new padding mechanism, one of whose goals is to eliminate this CBC padding issue (the other is to allow arbitrary padding with any ciphersuite). The timing channels in CBC padding are eliminated by treating the pad as part of the transferred data (i.e., it is properly authenticated). As such, the pad-checking code provides no side-channel information, because it runs only after the MAC is verified.

PS. Kudos go to Oscar Reparaz for long discussions on side channel attacks.

PS2. I should note that Nadhem and Kenny were very kind to notify me of the bugs and the attack. While this may seem like the obvious thing to do, it is not that common with other academic authors (I'm very tempted to place a link here).


Monday, October 8, 2012

Some thoughts on the DANE protocol

A while ago I wrote about why we need an alternative authentication method in TLS, and then described the SSH-style authentication and how it was implemented in GnuTLS. Another approach is the DANE protocol, which uses the hierarchical model of DNSSEC to provide an alternative authentication method in TLS.

DNSSEC uses a hierarchical model resembling a single root CA that delegates CA responsibilities for each domain to the individual domain holder. With DANE, a DNSSEC-enabled domain may sign entries for each TLS server in the domain. The entries contain information about the TLS server's certificate, such as its hash. That's a nice idea, and it can be used instead of, or in addition to, the existing commercial CA verification. Let's see two examples that demonstrate DANE. Suppose we have an HTTPS server called www.example.com and a CA.

A typical web client, after successfully verifying www.example.com's certificate using the CA's certificate, will also check the DANE entries. If they match, it is assured that a CA compromise does not affect the current connection. That is, in this scenario, an attacker needs to compromise not only example.com's CA, but also the server's DNS domain service.

Another significant example is when there is no CA at all. The server www.example.com is a low-budget one and wants to avoid paying any commercial CA. If it has DNSSEC set up, it just advertises its key in DNS, and clients can verify its certificate using the DNSSEC signature. That is, the trust is moved from the commercial CAs to the DNS administrators. Whether they can cope with that will be seen in time.

Even though the whole idea is attractive, the actual protocol is (IMO) quite bloated. In a recent post to the DANE mailing list I summarized several issues (note that I was wrong on the first). The most important, I believe, is that DANE separates certificates into CA-signed and self-signed ones: if you have a CA-signed certificate you mark it in the DNS entry as 1, while if you have a self-signed one you mark it as 3. The reasoning behind this is unclear, but its effect is that it becomes harder to move from the CA-signed to the non-CA-signed world, or vice versa. That is, if you have a CA-signed certificate and it expires, the DANE entry automatically becomes invalid as well. This may be considered a good property by some (especially the CAs), but I see little point in it. It is also impossible for the server administrator to know whether a client trusts its CA, so this certificate distinction doesn't always make sense. A work-around is to publish your certificate both as self-signed and as CA-signed, in two different DNS entries.
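For illustration, a TLSA record in RFC 6698 presentation format looks roughly like the following (hypothetical name and placeholder hash). The first numeric field is the certificate-usage value discussed above (1 for a CA-signed end-entity certificate, 3 for a domain-issued one), followed by the selector (0 = full certificate) and the matching type (1 = SHA-256):

_443._tcp.www.example.com. IN TLSA 3 0 1 <hex-encoded SHA-256 of the certificate>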

Despite this and a few other shortcomings that worry me, this is the best protocol we have so far for dividing the trust in TLS certificate verification among more entities than the commercial CAs. For this reason, a companion library implementing DANE will be included in the upcoming GnuTLS 3.1.3 release.

Thursday, August 16, 2012

Using the Trusted Platform Module to protect your keys

There was a big hype when the Trusted Platform Module (TPM) was introduced into computers. Briefly, it is a co-processor in your PC that can perform certain operations independently of the main processor. This has good and bad side-effects. In this post we focus on the good ones, namely that you can use it to perform cryptographic operations the same way as with a smart card. What does that mean? It simply means that you can have RSA keys in your TPM chip that you can use to sign and/or decrypt, but cannot extract. That way a compromised web server doesn't necessarily mean a compromised private key.

GnuTLS 3.1.0 (when compiled with libtrousers) adds support for keys stored in the TPM chip. This support is transparent, and such keys can be used similarly to keys stored in files. What is required is that TPM keys are specified using a URI of the following forms.

tpmkey:uuid=c0208efa-8fe3-431a-9e0b-b8923bb0cdc4;storage=system
tpmkey:file=/path/to/the/tpmkey


The first URI form contains a UUID, which is an identifier of the key, and the storage area of the chip (the TPM allows for system and user keys). The second form is used for TPM keys that are stored outside the TPM storage area, i.e., in a file (encrypted by the TPM).

Let's see how we can generate a TPM key, and use it for TLS authentication. We'll need to generate a key and the corresponding certificate. The following command generates a key which will be stored in the TPM's user section.

$ tpmtool --generate-rsa --bits 2048 --register --user


The output of the command is the key ID.

tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user

So now that we have the ID of the key, let's extract the public key from it.

$ tpmtool --pubkey "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user" --outfile=pubkey.pem

And given the public key we can easily generate a certificate using the following command.
$ certtool --generate-certificate --outfile cert.pem --load-privkey "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user" --load-pubkey pubkey.pem --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem

The generated certificate can now be used with any program using the gnutls library, such as gnutls-cli to connect to a server. For example:
$ gnutls-cli --x509keyfile "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user" --x509certfile cert.pem -p 443 my_host.name
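The same key can also be used programmatically. The following is a minimal sketch (not taken from the gnutls documentation) that assumes the tpmkey: URL can be passed to gnutls_certificate_set_x509_key_file in place of a key file name, something that may require a more recent 3.1.x release than the one described here:

#include <gnutls/gnutls.h>

/* Sketch: load the certificate and the TPM-held private key into a
 * credentials structure, assuming tpmkey: URLs are accepted here. */
static int load_tpm_credentials(gnutls_certificate_credentials_t cred)
{
        return gnutls_certificate_set_x509_key_file(cred, "cert.pem",
                "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user",
                GNUTLS_X509_FMT_PEM);
}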

An easily noticeable issue with TPM keys is that they are not mnemonic: there is only a UUID identifying the key, and no labels, which makes distinguishing between multiple keys a troublesome task. Nevertheless, TPM keys provide a cheap way to protect the keys used in your system.


Wednesday, April 18, 2012

A flaw in the smart card Kerberos (PKINIT) protocol

Reading security protocols is not always fun or easy. Protocols like public key Kerberos are hard to read because they just define the packet format and expect the reader to assume a correct message sequence. I read it, nevertheless, because I was interested in the protocol's interaction with smart cards. If you are not aware of the protocol, public key Kerberos, or PKINIT, is the protocol used in Microsoft Active Directory and is described in RFC 4556.

The idea of the protocol is to extend traditional Kerberos, which supports only symmetric ciphers, with digital signatures and public key encryption, in order to support stock smart cards. A use case is, for example, logging in to a Windows domain using a smart card. The protocol itself doesn't mention smart cards at all, probably because that was considered a deployment issue. Nevertheless, it was believed to be a secure protocol, and several published papers provided proofs of security for all operational modes of the protocol.

However, the protocol has an important flaw, one that makes it insecure if used with smart cards. I wrote a detailed report on the flaw, but the main idea is that if someone has access to your smart card for a few minutes, he can log in using your credentials at any time in the future. You may think that an attack like that can be prevented by never lending your smart card to anyone, but how can you prove that no one borrowed it for a while? And if you believe the smart card PIN protects against theft, how do you know that the reader you are inserting your card into isn't tampered with? In a protocol designed for smart cards you would expect a tampered reader not to be able to use the card after you retrieve it. This is not the case here, which makes the protocol unsuitable for smart cards, its primary use case.


The most interesting issue, however, is the security proofs. The protocol was proven secure, but because it never mentions smart cards, researchers proved its security in a setting different from the actual use case. So when reading a security proof, always check the assumptions; they are as important as the proof itself.

A few things went wrong with the design of this protocol, none of which is actually technical. The protocol is hard to read, and in order to get an overview of it you have to read the whole RFC. This is just bad. If you check figure 1 in the TLS RFC, you get an overview of the protocol immediately; you might not know the actual contents of the messages, but the sequence is apparent. This is not possible with the Kerberos protocol, and that discourages anyone who might want to understand it from a high-level description. Another flaw is that the protocol doesn't mention smart cards, its primary use case. Smart cards were treated as a deployment issue, and readers of the RFC would never know about them. The latter issue occurs in many IETF protocols, where readers are expected to already know where and how the protocol is used. As demonstrated by the security proofs being done in a different setting, this is not always the case.

So, what can be done to mitigate the flaw? Unfortunately, without modifying the protocol, the only advice that can be given is something along the lines of:
  • make sure you always possess the card;
  • make sure you never use a tampered smart card reader.
Which may seem pretty useless in a typical working environment. What makes the attack nasty is that if an adversary tampers with your reader and you use your card with it now, the adversary can perform a transaction a year later, when you'll have no clue about what happened and there may be no evidence of the tampered reader.

Sunday, April 1, 2012

TLS in embedded systems

In some embedded systems space may be a serious constraint. However, many such systems contain several megabytes of flash, either as an SD memory card or as raw NAND, and have no real space constraint. For those systems, using a TLS implementation such as GnuTLS or OpenSSL provides performance gains that are not possible with the smaller implementations that target small size. That is because both of the above implementations, unlike the constrained ones, support cryptodev-linux, to take advantage of the cryptographic accelerators widely present in several constrained CPUs, and support elliptic curves, to optimize performance when perfect forward secrecy is required.

I happened to have an old Geode (x86-compatible) CPU which contained an AES accelerator, so here are some benchmarks created using the nxweb/GnuTLS and nginx/OpenSSL web servers and the httpress utility. The figure on the right shows the data transferred per second using AES in CBC mode with the cryptographic accelerator, compared to GnuTLS' and OpenSSL's software implementations. We can clearly see that download speed almost doubles on a large file transfer when the accelerator is used.


The figure on the left shows a comparison of the various key exchange methods on this platform using GnuTLS and OpenSSL. The benchmark measures HTTPS transactions per second, and the keys and parameters used are the same for both implementations. The key sizes are selected to be of equivalent security levels (1776 bits in RSA and DH is equivalent to 192 bits in ECDH, according to the ECRYPT II recommendations). We can see that the elliptic curve version of Diffie-Hellman (ECDHE-RSA) allows 25% more transactions in both implementations compared to Diffie-Hellman over a prime field (DHE-RSA). The plain RSA key exchange remains the fastest, at the cost of sacrificing perfect forward secrecy.

As a side note, it is nice to see that at the security level of 192 bits GnuTLS outperforms OpenSSL on this processor. The trend continues at higher security levels for the RSA and DHE-RSA methods, but the ECDHE-RSA case is interesting: even though OpenSSL has a more efficient elliptic curve implementation, GnuTLS' use of nettle and GMP (which provide a faster RSA implementation) compensates, and their performance is almost identical.

Overall, in the few embedded systems I've worked on, space was not a crucial limiting factor, and it was mainly this work that drove me to update cryptodev for Linux. In those systems, the space cost incurred by the use of a larger library was compensated by (1) the off-loading to the crypto processor of operations that would otherwise load the CPU and (2) the reduction in processing time due to elliptic curves.
However, this balance is system-specific, and what was important for my needs may not apply to everyone else, so it is important to weigh the advantages and disadvantages of the cryptographic implementations on your own system.