Monday, August 21, 2017

An overview of GnuTLS 3.6.0

The new 3.6.0 GnuTLS release contains several new features, back-end changes and cleanups. This release re-spins the so-called 'stable-next' branch, meaning that once it is considered stable enough, this branch will replace the current stable branch. The main target of this release was to have a library ready to incorporate new protocol additions such as TLS 1.3, which is currently a draft specification and is expected to be finalized in the coming months. That "preparation" spans from introducing new functionality needed for the new protocol features, to improving the testing and fuzzing infrastructure of the library to reduce regressions and non-standards-compliant behavior, to the removal of features and components which are no longer relevant in today's Internet Public Key Infrastructure.

In short, this release introduces a new lock-free random generator and adds new TLS extensions shared by both TLS 1.2 and 1.3, such as Finite Field Diffie-Hellman negotiation, Ed25519 and RSA-PSS signatures. These additions modernize the current TLS 1.2 support and pave the way for TLS 1.3 support in the library. Furthermore, tlsfuzzer is introduced in our continuous integration test suite. Tlsfuzzer is a meticulous TLS test suite, which tests the behavior of the implementation on various corner (and not so corner) cases, and acts complementary to the internal GnuTLS test suite and its unit testing. This release also eliminates a lot of legacy code, in order to reduce complexity and improve the manageability of the library, preventing legacy code from being used as a potential attack vector.

Further changes to support TLS 1.3 will be included on this release branch.

The following paragraphs go through the most significant changes of the 3.6.0 release.

Testing and Fuzzing

Fuzzing infrastructure

Fuzzing, in the sense of feeding arbitrary input to the library and testing its behavior under invalid, and valid but rare, inputs is not something new. However, fuzzing of the various GnuTLS components was previously done in a non-systematic way, usually by third parties who then reported any issues found. That, as you can imagine, is an unreliable process. Without common fuzzing infrastructure, there is no fuzzing code or infrastructure re-use, forcing each and every person attempting to fuzz GnuTLS functionality to re-invent the wheel. Driven by the availability of Google's OSS-Fuzz project, and with the contributions of Alex Gaynor, Tim Ruehsen and yours truly, there is now a common fuzzing base testing several aspects of the library. That fuzzing test suite is run automatically under OSS-Fuzz, uncovering issues and filing bugs against the library (and that's the part where automation stops).

tlsfuzzer

tlsfuzzer is a TLS server implementation testing tool used at Red Hat for testing the quality of TLS implementations. It checks the behavior of an implementation on various corner (and not so corner) cases, providing a tool not only for testing the correctness of an implementation, but also for ensuring that no behavioral regressions are introduced undetected. With this release GnuTLS incorporates this testing tool into its continuous integration infrastructure, ensuring behavioral stability and thorough testing of existing and new functionality. Since the library is now being modified for TLS 1.3, tlsfuzzer's use is invaluable, as it allows detecting major or minor behavioral changes in the old protocol support early.

CII best practices

The Core Infrastructure Initiative (CII) of the Linux Foundation provides a best practices badge, as a way for free software projects to demonstrate that they follow best software engineering practices. The practices include change control, quality, static analysis, security, even bug reporting. Although several of these practices were already in use in the project, the manual inspection of our processes uncovered several weaknesses and omissions which are now resolved. Taking the time to go through the manual inspection was the most difficult part, as there is always something more important to address; however, I believe that the time spent on it was worthwhile. My impression is that there has been quality work behind the formulation of these practices, and I'd recommend any free software project to seriously consider following them. You can view the result of GnuTLS' inspection here.

 

Random Number Generation

A new lock-free random generator

Versions of GnuTLS 3.3 and later rely on two random generators. The default is based on a combination of the Salsa20/12 stream cipher for nonces, and Yarrow/AES for everything else. The other generator is AES-CTR-DRBG, an AES-based deterministic random bit generator, which is used optionally when the library is compiled with FIPS140-2 support and the system is in FIPS140-2 mode. Both of these generators operate under a global lock, making them a performance bottleneck for multi-threaded applications. Having such a bottleneck on single-CPU systems, or even the 2-4 CPU systems of the past, may have been an acceptable cost; today, however, as the number of CPUs in a single system increases past these numbers, such global locks severely harm performance. To address that, in GnuTLS 3.6.0 the random generator component was essentially re-written to address the bottleneck issue, simplify entropy gathering, and fix other issues found over the years. The end result is a separate random generator per thread; the default generator is now based on the CHACHA stream cipher, while the optional AES-CTR-DRBG generator remains the same. I'll not go further into the design of the new random generator, though you can find a detailed description of the changes in this post.

 

TLS Features 

Finite Field Diffie-Hellman parameter negotiation (RFC7919)

If you have set up any TLS server, or have developed an application which uses TLS, most likely you will have seen references to Diffie-Hellman parameter files and their generation. In the early versions of GnuTLS I recommended generating them daily or weekly, provided certtool options to generate them following specific security levels, and so on. The truth is that there were no available best practices that protocol designers or implementers could refer to on how these parameters should be used correctly. As always happens in these cases, the burden is pushed to the application writers, and I guess the application writers push it further to the application users. Fortunately, with the publication of RFC7919, DH parameter handling becomes the responsibility of the TLS protocol itself, and the parameters are now negotiated without any input from the application (except perhaps the desired security parameter/level). GnuTLS 3.6.0 implements that feature, removing the need for server applications to specify Diffie-Hellman parameters, but in a backwards-compatible way: applications which already specify the DH parameters explicitly will still function, by overriding that negotiation.

In practice this feature introduces the notion of groups, which replace the previous notion of curves. Applications which were enabling support for explicit curves via priority strings like "NORMAL:+CURVE-X25519" can now use "NORMAL:+GROUP-X25519" with identical functionality. The groups act as a superset of the curves, and also contain the finite field groups for Diffie-Hellman, such as GROUP-FFDHE2048, GROUP-FFDHE3072, etc. A minimal sketch of enabling such groups from application code is shown below.
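The following is a rough sketch (not taken from the release notes; session setup and error handling are omitted) of how an application could enable these groups through the standard priority-string API. The specific group selection is only an example.

#include <gnutls/gnutls.h>

/* Enable the X25519 and FFDHE3072 groups on top of the NORMAL set.
 * gnutls_priority_set_direct() applies a priority string to a session. */
static int enable_groups(gnutls_session_t session)
{
        return gnutls_priority_set_direct(session,
                        "NORMAL:+GROUP-X25519:+GROUP-FFDHE3072", NULL);
}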

Digital signatures with Ed25519

Although curve X25519 was already supported for TLS ephemeral key exchange, there was no way to utilize certificates and private keys with the Ed25519 signature algorithm. This is now supported both for certificate signing and verification, as well as for TLS key exchange (following draft-ietf-tls-rfc4492bis-17). In contrast with RSA or even ECDSA, these keys offer impressive performance and are remarkably small, even in their PKCS#8 container. That is easily demonstrated with the certtool output below. The key consists of three lines, two of which are the PEM boilerplate.
certtool --generate-privkey --key-type ed25519
...
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIKVpRaegFppd3pDQ3wpHd4+wBV3gSjhKadEy8S1J4gEd
-----END PRIVATE KEY-----


Given the expected switch to post-quantum resistant algorithms in the not-so-far away future, that may be the last chance to utilize algorithms with such a small key size.

Digital signatures with RSA-PSS

That was the change that required by far the largest amount of code changes in GnuTLS 3.6.0; it required changes both in GnuTLS and in nettle, so I'll dedicate a few more lines to it. The feature was contributed mainly by Daiki Ueno, and was further extended by me. If you are not aware of the spicy details of cryptographic protocols, RSA signing today is universally done with a construction called PKCS#1 v1.5, named after the document in which it was described. Even though no attacks are known against the PKCS#1 v1.5 signing algorithm, the very similar PKCS#1 RSA decryption construction was successfully attacked in 1996 by Bleichenbacher, generating doubt about its cryptographic properties.

In order to prevent similar issues in the future, another RSA signing construction was defined in a later revision (v2) of the PKCS#1 document, named RSASSA-PSS (referred to as RSA-PSS from now on). That method involves hash functions and a salt, in order to provide a primitive with a security proof. The proof guarantees that the longer the salt, the stronger the security properties, with "stronger" being, unfortunately, undefined in any tangible terms. The best current practice, followed by the PKCS#1 2.2 document, is to tie the salt size to the size of the hash function involved, and thus directly associate the key's security parameter with the hash function used in RSA-PSS signatures.

As mentioned above, RSA-PSS introduces optional parameters to a key or certificate. The parameters help the operator of the key (e.g., the software) sign using the desired security level. These parameters include the following information.

   RSASSA-PSS-params ::= SEQUENCE {
       hashAlgorithm      [0] HashAlgorithm,
       maskGenAlgorithm   [1] MaskGenAlgorithm,
       saltLength         [2] INTEGER,
       trailerField       [3] TrailerField
   }
That is, a key is associated with two different hashes (hashAlgorithm and maskGenAlgorithm), a variable salt size, and a trailerField which is unused in Internet PKI. To simplify generation and usage of such keys, GnuTLS 3.6.0 generates keys by default with no parameters, that is, keys that can be used with any hash or salt size (except SHA1, which is intentionally not supported). That form of keys is most suitable for servers, which typically sign using any algorithm supported by the connected client. For CA keys, and keys which require a consistent security level to be used, these parameters can be set, though GnuTLS will require the hash algorithms in hashAlgorithm and maskGenAlgorithm to match. Keys with non-matching algorithms, e.g., a key using SHA256 for hashAlgorithm and SHA512 for maskGenAlgorithm, are rejected as invalid.

To generate an RSA key for use only with RSA-PSS signatures, use the following command.
certtool --generate-privkey --key-type rsa-pss

To generate a key for RSA-PSS with a specific hash algorithm (the salt size will be obtained from it), use the following command:
certtool --generate-privkey --key-type rsa-pss --hash sha384

Note however, that very few applications accept keys intended only for RSA-PSS signatures. A more compatible approach is to generate an RSA key (which works for any purpose), and utilize that one to sign with the RSA-PSS signature algorithm. When that key is used in the context of TLS, it will be used for both RSA-PSS and plain PKCS#1 v1.5 signatures. As any cryptographer would tell you, that usage invalidates the RSA-PSS security proof, and underlines the need to utilize separate keys for the different algorithms.

As such, it is possible with GnuTLS to use separate keys for RSA PKCS#1 v1.5 and RSA-PSS, in order to reduce any risk due to the interaction between these two algorithms. When a GnuTLS server is provided with two keys, RSA and RSA-PSS, the latter will be used for RSA-PSS operations, and the former for the legacy PKCS#1 v1.5 operations, as sketched below.
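As an illustration, the sketch below (the file names are hypothetical, error handling is abbreviated) loads an RSA and an RSA-PSS key pair into the same credentials structure by calling gnutls_certificate_set_x509_key_file() once per pair.

#include <gnutls/gnutls.h>

/* Load two certificate/key pairs: a plain RSA one for PKCS#1 v1.5
 * signatures and an RSA-PSS one for RSA-PSS capable peers. */
static int load_server_keys(gnutls_certificate_credentials_t cred)
{
        int ret;

        ret = gnutls_certificate_set_x509_key_file(cred, "rsa-cert.pem",
                        "rsa-key.pem", GNUTLS_X509_FMT_PEM);
        if (ret < 0)
                return ret;

        return gnutls_certificate_set_x509_key_file(cred, "rsa-pss-cert.pem",
                        "rsa-pss-key.pem", GNUTLS_X509_FMT_PEM);
}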

Removed/Disabled functionality

3DES cipher is no longer enabled by default for TLS

Although the 3DES cipher is the mandatory option for TLS 1.0 and TLS 1.1, the cipher is unfortunately a relic of a different era. It is a 64-bit block cipher, which limits the amount of data it can safely operate on; it is based on a cipher with a 56-bit key size; and it operates in encryption-decryption-encryption (EDE) mode to overcome the latter limitation. As such, the cipher provides performance that is unacceptable today, and it is only being used to interoperate with legacy hardware and software. For these reasons this cipher will no longer be enabled by default, and applications requiring it should provide the end-user the necessary knobs to enable it (e.g., a priority string which includes "+3DES-CBC").

SHA1 is no longer acceptable for certificate signing

SHA1 used to be the de facto hash algorithm in X.509 certificates and most other digital signature standards. Given the collision attacks on SHA1, and the fact that it has been phased out from the public web, GnuTLS will no longer accept SHA1 signatures on certificates as trusted by default. SHA1 will remain acceptable for other types of signatures, as it is still widely used. Note, however, that the existing collision attacks do not translate directly to an attack on digital signatures with SHA1. The removal is a precaution and a preparation for its complete phasing out. The reason is that, even though direct attacks are not applicable to SHA1-based digital signatures, the experience with the attacks on MD5 in the previous decade shows that there can be clever ways to take advantage of collision attacks in order to forge certificates.

OpenPGP functionality was removed

When I originally started working on GnuTLS I was envisioning a future where OpenPGP certificates would be used instead of X.509. My driver was the perceived simplicity of the OpenPGP certificate format, and the fact that obtaining a certificate at the time required the intervention of a costly CA, in contrast with OpenPGP, where one had only to generate a key and manage its web of trust. That never took off, either as an idea or as a deployment, and today none of the original driving factors remain valid. OpenPGP certificates are not significantly simpler to parse than X.509, the web-of-trust proved to be a more difficult problem than Internet PKI, and the costly CA verification issue is no longer relevant after letsencrypt.org.

IDNA2003 is no longer supported

The IETF has long since switched to IDNA2008 for internationalized domain names, and as such GnuTLS will no longer provide compatibility code for the older standard. Internationalized domain names may not be widely known in the English-speaking world, but their use varies around the world. Hence, supporting them is necessary in order to properly handle PKIX (X.509) certificates and verification with internationalized domain names. See my previous post for a more detailed description of IDNA today.

TLS compression functionality was removed

Compression prior to encryption was always considered a good thing, not only because it eliminates correlations in the plaintext due to the language or file format in use, but also because it reduces the size of the transmitted data, and the latter is a significant performance benefit on bandwidth-restricted links. Why did we remove it then? The reason is that after compression the ciphertext length, which in TLS 1.2 is sent in the clear, may reveal more information about the data, and that becomes a confidentiality issue when the data are partially under the control of the attacker. This property has been exploited in attacks like the fancy-named CRIME attack.

Given the above, the currently held belief in protocol design is to delegate compression to the application protocols; e.g., TLS 1.3 will not include support for compression. For that reason we believe that there are more benefits in removing the feature completely, reducing the attack surface of the library, than in keeping it as a legacy feature.

Concluding remarks


I'd like to sincerely thank everyone who contributed to making the GnuTLS 3.6.0 release possible. The git shortlog follows; happy hacking!


Alex Gaynor (12):
      Migrated fuzzers from the oss-repo to here.
      Added a server fuzzer
      Move to the devel dir
      Describe the integration
      Added a parser for PKCS7 importing and printing
      Added a fuzzer for OpenPGP cert parsing
      Do not infinite loop if an EOF occurs while skipping a PGP packet
      Attempt to fix a leak in OpenPGP cert parsing.
      Corrected a leak in OpenPGP sub-packet parsing.
      Enforce the max packet length for OpenPGP subpackets as well
      Do not attempt to parse a 32-bit integer if a packet is not 4 bytes.
      Do not attempt to parse a 32-bit integer if a packet is not 4 bytes.

Alexander Kanavin (1):
      Do not add cli-args.h to cli-args.stamp Makefile target

Alon Bar-Lev (19):
      tests: suite: pkcs11: skip if no softhsm
      tests: cert-tests: pkcs12 drop builddir usage
      tests: skip tests that requires tools if tools are disabled
      gitignore: sort()
      gitignore: update [ci skip]
      tests: skip tests that requires tools if tools are disabled
      tests: suite: chain: support separate builddir
      tests: remove bash usage
      tests: skip tests that requires tools if tools are disabled
      configure: remove void statement
      valgrind: support separate builddir for suppressions.valgrind
      .gitlab-ci.yml: add Fedora/x86_64/no-tools
      build: doc: install images also into htmldir
      tests: scripts: suppress which errors
      tests: remove unused suppressions.valgrind
      tests: suppressions.valgrind: supress fillin_rpath
      tests: cert-tests: openpgp-certs: align test redirection
      build: tests: resolve as-needed issue with seccomp
      build: disable valgrind tests by default

Andreas Metzler (3):
      Use NORMAL priority for SSLv23_*_method.
      gnutls-cli: Use CRLF with --starttls-proto=smtp.
      Fix autoconf progress message concerning heartbeat [ci skip]

Daiki Ueno (3):
      build: import files from Nettle for RSA-PSS
      x509: implement RSA-PSS signature scheme
      nettle: ported fix for assertion failure in pss_verify_mgf1

Daniel Kahn Gillmor (1):
      clarify documentation and arguments for psktool

David Caldwell (2):
      Rename uint64 to gnutls_uint64 to avoid conflict with macOS
      gnutls_x509_trust_list_add_system_trust: Add macOS keychain support

Dmitry Eremin-Solenikov (13):
      configure.ac: remove autogen'erated files only if necessary
      Add special MD5+SHA1 digest to simplify TLS signature code
      Rewrite SSL/TLS signing code to use combined MD5+SHA1 digest
      Rewrite SSL/TLS signature verification to use combined MD5+SHA1 digest
      Use MAC_MD5_SHA1 instead of MAC_UNKNOWN to specify TLS 1.0 PRF
      Cache MAC algorithm used for PRF function
      Rework setting next cipher suite
      Rework setting next compression method
      Drop _gnutls_epoch_get_compression
      Don't let GnuTLS headers in NETTLE_CFLAGS override local headers
      Fix two memory leaks in debug output of gnutls tools
      gnutls-serv: allow user to specify multiple x509certile/x509keyfile
      Rework KX -> PK mappings

Karl Tarbe (2):
      certtool: allow multiple certificates in --p7-sign
      tests: add test for signing with certificate list

Marcin Cieślak (1):
       only if HAVE_ALLOCA_H

Martin Storsjo (2):
      Fix a typo in a variable name in an m4 script
      Avoid deprecation warnings when including gnutls/abstract.h

Matt Turner (1):
      tests: Copy template out of ${srcdir}

Nicolas Dufresne (1):
      rsa-psk: Use the correct username datum

Nikos Mavrogiannopoulos (1148):
      ...

Rical Jasan (1):
      tests: Improve port-checking infrastructure.

Robert Scheck (1):
      Add LMTP, POP3, NNTP, Sieve and PostgreSQL support to gnutls-cli

Tim Rühsen (11):
      Add support for libidn2 (IDNA 2008 + TR46)
      lib/system/fastopen: Add TCP Fast Open for OSX
      Fix memleak in gnutls_x509_crl_list_import()
      Fix memleaks in gnutls_x509_trust_list_add_crls()
      fuzzer: Initial check in for improved fuzzing
      fuzzer: Suppress unsigned integer overflow in rnd-fuzzer.c
      fuzzer: Suppress leak in libgmp <= 6.1.2
      fuzzer: Move regression corpora from tests/ to fuzz/
      fuzzer: Add 'make -C fuzz coverage' [ci skip]
      fuzzer: Fix include path in run-clang.sh [skip ci]
      fuzzer: Update base64 fuzzers + corpora

Sunday, November 13, 2016

Using the Nitrokey HSM with GnuTLS applications

The Nitrokey HSM is an open hardware security module, in the form of a smart card token, which is used to isolate a server's private key from the application. That is, if you have an HTTPS server, such a hardware security module will prevent an attacker who temporarily obtained privileged access on the server (e.g., via an exploit like heartbleed) from copying the server's private key and impersonating it. See my previous post for a more elaborate discussion of that defense mechanism.

The rest of this post will explain how you can initialize this token and utilize it from GnuTLS applications, and in the process explain more about smart card and HSM usage in applications. For the official (and more advanced) Nitrokey setup instructions and tips you can see this OpenSC page, another interesting guide is here.

HSMs and smart cards

 

The Nitrokey HSM is something between a smart card and an HSM. However, from a software perspective there is no real distinction between smart cards and hardware security modules. Hardware-wise, one expects better tamper-resistance (in terms of cost to defeat) on HSMs, and at the same time sufficient performance for server loads. An HSM is typically installed on a PCI slot or attached via USB, while smart cards are mainly accessed via USB or a card reader.

On the software side, both smart cards and HSMs are accessed the same way, over the PKCS#11 API. That is an API which abstracts keys from operations, i.e., the API doesn't require direct access to the private key data to complete an operation. Most crypto libraries today support this API directly, as GnuTLS and NSS do, or via an external module, as OpenSSL does (i.e., via engine_pkcs11).

Each HSM or smart card comes with a "driver", i.e., a PKCS#11 module, which one had to specify explicitly in legacy applications. On modern systems which have p11-kit, the available drivers are registered with p11-kit and applications can obtain and utilize them at run-time (see below for more information). Nitrokey uses the OpenSC driver, the driver used by almost every other smart card supported on Linux.

If you are familiar with old applications, you may have noticed that objects were referred to as "slot1_1", which meant the first object on the first slot of the driver, or "1:1", and several other obscure methods depending on the application. The notion of "slots" is internal to PKCS#11 and is inherently unstable (re-inserting a token may change the slot number assignment), thus these methods of referring to objects cannot easily accommodate multiple cards, refer to an object within a specific card when multiple are present, or utilize cards handled by different drivers. More recent applications support PKCS#11 URIs, a method of identifying tokens, and objects within a token, which is unique system-wide; such a URI looks like:

pkcs11:token=SmartCard-HSM;object=my-ecc-key;type=private
For GnuTLS applications, only PKCS#11 URIs can be used to refer to objects.


Driver setup and token discovery

 

On a typical Linux system which runs the pcscd server, and has opensc and p11-kit properly installed, the following command should list the Nitrokey token once it is inserted.
    $ p11tool --list-tokens

One of the entries printed should be something like the following.
Token 5:
    URL: pkcs11:model=PKCS%2315%20emulated;manufacturer=www.CardContact.de;serial=DENK0100424;token=SmartCard-HSM20%28UserPIN%29
    Type: Hardware token
    Manufacturer: www.CardContact.de
    Model: PKCS#15 emulated
    Serial: DENK0100424
    Module: /usr/lib64/pkcs11/opensc-pkcs11.so

The above information contains the identifying PKCS#11 URI of the token, as well as information about the manufacturer and the driver library used. The PKCS#11 URI is a standardized unique identifier of tokens and of objects stored within a token. If you do not see that information, verify that you have all of pcsc-lite, pcsc-lite-ccid, opensc, gnutls and p11-kit installed. If that's the case, you will need to manually register the opensc module to make it known to p11-kit (modern distributions take care of this step). This can be done with the following commands as administrator.
    # mkdir -p /etc/pkcs11/modules
    # echo "module: /usr/lib64/pkcs11/opensc-pkcs11.so" >/etc/pkcs11/modules/opensc.conf

It is implied that your system's libdir for PKCS#11 drivers should be used instead of the "/usr/lib64/pkcs11" path used above. Alternatively, one could append the --provider parameter to the p11tool command to explicitly specify the driver, as in the following example. For the rest of this text we assume a properly configured p11-kit and omit the --provider parameter.
    $ p11tool --provider /usr/lib64/pkcs11/opensc-pkcs11.so --list-tokens

Token initialization

 

An HSM token needs to be initialized prior to use, and provided with two PINs. One PIN is for operations requiring administrative (security officer in PKCS#11 jargon) access, and the second (the user PIN) is for normal token usage. To initialize, use the following command with the PKCS#11 URL listed by the 'p11tool --list-tokens' command; in the following text we will use $URL to refer to that.
    $ p11tool --initialize "$URL"

Alternatively, when the driver supplied supports a single card, the URL can be specified as "pkcs11:" as shown below.
    $ p11tool --provider  /usr/lib64/pkcs11/opensc-pkcs11.so --initialize "pkcs11:"

The initialization commands above will ask you to set the security officer's PIN, which for the Nitrokey HSM is by default "3537363231383830". During the initialization process, the user PIN will also be asked for. The user PIN is the PIN which must be provided by applications and users in order to use the card. Note that the command above (prior to GnuTLS 3.5.6) will ask for the administrator's PIN twice, once for initialization and once for setting the user PIN.

Key and certificate generation


It is possible to either copy an existing key onto the card, or generate a key inside it, a key which cannot be extracted. To generate an elliptic curve (ECDSA) key use the following command.
    $ p11tool --label "my-key" --login --generate-ecc "pkcs11:token=SmartCard-HSM20%28UserPIN%29"

The above command will generate an ECDSA key which will be identified by the name set by the label. That key can then be fully identified by the PKCS#11 URL "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-key;type=private". If the generation was successful, the following command will list two objects, the private key and the public key.
    $ p11tool --login --list-all "pkcs11:token=SmartCard-HSM20%28UserPIN%29"

Note that both objects share the same ID but have different type. As this key cannot be extracted from the token, we need to utilize the following commands to generate a Certificate Signing Request (CSR).

    $ certtool --generate-request --load-privkey "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-key;type=private" --outfile cert.csr
After providing the required information to certtool, it will generate a certificate request in the cert.csr file. Alternatively, to generate a self-signed certificate, one can replace the '--generate-request' parameter with '--generate-self-signed'.

The certificate signing request generated above will allow you to obtain a real certificate to use for the key stored in the token. That certificate can be generated either with letsencrypt or a local PKI. As the details vary, I'm skipping this step and assuming a certificate has been generated somehow.

After the certificate is made available, one can write it in the token. That step is not strictly required, but in several scenarios it simplifies key/cert management by storing them at the same token. One can store the certificate, using the following command.
    $ p11tool --login --write --load-certificate cert.pem --label my-cert --id "PUBKEY-ID" "pkcs11:token=SmartCard-HSM20%28UserPIN%29"
Note that specifying the PUBKEY-ID is not required, but it is generally recommended for the ID of certificate objects to match the ID of the public key object listed previously with the --list-all command. If the IDs do not match, some (non-GnuTLS) applications may fail to utilize the key. The certificate stored in the token will have the PKCS#11 URL "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-cert;type=cert".


Testing the generated keys


Now that both the key and the certificate are present in the token, one can utilize their PKCS#11 URLs in any GnuTLS application in place of filenames. That is, if the application asks for a certificate file, enter "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-cert;type=cert", and for the private key "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-key;type=private".

The following example will run a test HTTPS server using the keys above.

    $ gnutls-serv --port 4443 --http --x509certfile "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-cert;type=cert" --x509keyfile "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-key;type=private;pin-value=1234"
That will set up a server which answers on port 4443 and utilizes the certificate and key on the token to perform TLS authentication. Note that the command above demonstrates the use of the "pin-value" URI element. That element specifies the object PIN on the command line, allowing for non-interactive token access.

Applicability and performance


While the performance of this HSM will most likely not allow you to utilize it on busy servers, it may be a sufficient solution for a private server, a VPN, a testing environment or a demo. On the client side, it can certainly provide a sufficient solution to protect the client's assigned private keys. The advantage a smart card provides over OTP is the fact that it is simpler to provision remotely, with the certificate request method shown above. That can be automated, at least in theory, when an implementation of the SCEP protocol is around. In practice, SCEP is well established in the proprietary world, but it is hard to find free software applications taking advantage of it.

Converting your application to use PKCS#11


A typical application written to use GnuTLS as its TLS back-end library should be able to use smart cards and HSM tokens out of the box. The main requirement is for the application to use the high-level file loading functions, which can load files or PKCS#11 URIs when provided. The only new requirement is for the application to obtain the PIN required for accessing the token; that can be done interactively using the PIN callbacks, or via the PKCS#11 URI "pin-value" element. For source examples, I'll refer you to the GnuTLS documentation.
Some indicative applications which, as far as I'm aware, can use tokens via PKCS#11 URIs transparently, and can be used for testing, are mod_gnutls, lighttpd2, and openconnect.
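As a rough illustration of the above, the sketch below (using the token and object names from this post; the hard-coded PIN is purely for demonstration) loads the token-resident certificate and key into a credentials structure and registers a PIN callback.

    #include <stdio.h>
    #include <gnutls/gnutls.h>
    #include <gnutls/pkcs11.h>

    /* Called by GnuTLS whenever the token's user PIN is needed. */
    static int pin_cb(void *userdata, int attempt, const char *token_url,
                      const char *token_label, unsigned int flags,
                      char *pin, size_t pin_max)
    {
            snprintf(pin, pin_max, "%s", "1234"); /* demo PIN only */
            return 0;
    }

    static int load_from_token(gnutls_certificate_credentials_t cred)
    {
            gnutls_pkcs11_set_pin_function(pin_cb, NULL);

            return gnutls_certificate_set_x509_key_file(cred,
                    "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-cert;type=cert",
                    "pkcs11:token=SmartCard-HSM20%28UserPIN%29;object=my-key;type=private",
                    GNUTLS_X509_FMT_PEM);
    }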



Tuesday, October 25, 2016

A brief look at the Linux-kernel random generator interfaces

Most modern operating systems provide a cryptographic pseudo-random number generator (CPRNG), as part of their OS kernel, intended to be used by applications involving cryptographic operations. Linux is no exception in that, and in fact it was the first operating system that actually introduced a CPRNG into the kernel. However, there is much mystery around these interfaces. The manual page is quite unclear on its suggestions, while there is a web-site dedicated to debunking myths about these interfaces, which on a first read contradicts the manual page.

In this post, triggered by my recent attempt to understand the situation and update the Linux manual page, I'll give a brief overview of these interfaces. Note that this post will not get into the internals of a cryptographic pseudo-random generator (CPRNG); for that, consider reading this article. I will go through these interfaces, intentionally staying at a high level without considering internal details, and discuss their usefulness for an application or library that requires access to such a CPRNG.

  • /dev/random: a file which if read from, will output data from the kernel CPRNG. Reading from this file blocks once the kernel (using a somewhat arbitrary metric) believes not enough random events have been accumulated since the last use (I know that this is not entirely accurate, but the description is sufficient for this post).
  • /dev/urandom: a file which if read from, will provide data from the kernel CPRNG. Reading from /dev/urandom will never block.
  • getrandom(): A system call which provides random data from the kernel CPRNG. It will block only when the CPRNG is not yet initialized.

A software engineer who would like to seed a PRNG or generate random encryption keys, and who reads the random(4) manual page carefully, will most likely be tempted to use /dev/random, as it is described as "suitable for uses that need very high quality randomness such as ... key generation". In practice /dev/random cannot be relied on, because it requires large amounts of random events to be accumulated in order to provide a few bytes of random data to running processes. Using it for key generation (e.g., for ssh keys during first boot) is most likely going to turn the first boot process into a coin flip: heads and the system is up, tails and the system is left hanging waiting for random events. This (old) issue with a mail service process hanging for more than 20 minutes prior to doing any action illustrates the impact of this device on real-world applications which need to generate fresh keys on startup.

On the other hand, the device /dev/urandom provides access to the same random generator, but will never block, nor apply any restrictions on the amount of new random events that must be gathered in order to provide any output. That is quite natural, given that modern random generators, once initially seeded, can provide enormous amounts of output before being considered broken (in an information-theoretic sense). So should we use only /dev/urandom today?

There is a catch. Unfortunately /dev/urandom has one quite serious flaw: if used early in the boot process, when the random number generator of the kernel is not yet fully initialized, it will still output data. How random that output is, is system-specific, and on modern platforms, which provide specialized CPU instructions for random data, that is less of an issue. However, the situation where ssh keys are generated prior to the kernel pool being initialized can be observed in virtual machines which have not been given access to the host's random generator.

Another, though not as significant, issue is the fact that both of these interfaces require a file descriptor to operate. That, at first glance, may not seem like a flaw; in that case, consider the following scenarios:
  • The application calls chroot() prior to initializing the crypto library; the chroot environment doesn't contain any of /dev/*random.
  • To avoid the issue above, the crypto library opens /dev/urandom in a library constructor and stores the descriptor for later use. The application closes all open file descriptors on startup.
Both are real-world scenarios observed over the years of developing the GnuTLS library. The latter scenario is of particular concern since, if the application opens few files, the crypto library may never realize that the /dev/urandom file descriptor has been closed and replaced by another file. That may result in reading from an arbitrary file to obtain randomness. Even though one can introduce checks to detect such a case, it is a particularly hard issue to spot, and requires inefficient and complex code to address; a rough sketch of one such check follows.
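The sketch below is my own illustration (not code from any particular library), and is still incomplete: it merely verifies that a cached descriptor still refers to a character device with the device numbers of /dev/urandom on Linux (major 1, minor 9).

#include <sys/stat.h>
#include <sys/sysmacros.h>

/* Returns non-zero if fd still looks like /dev/urandom. */
static int urandom_fd_looks_sane(int fd)
{
        struct stat st;

        if (fstat(fd, &st) != 0)
                return 0;
        return S_ISCHR(st.st_mode) &&
               major(st.st_rdev) == 1 && minor(st.st_rdev) == 9;
}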

That's where the getrandom() system call fits in. Its operation is very similar to /dev/urandom, that is, it provides non-blocking access to the kernel CPRNG. In addition, it requires no file descriptor, and it will also block prior to the kernel random generator being initialized. Given that it addresses the issues of /dev/urandom identified above, it seems indeed like the interface that should be used by modern libraries and applications. In fact, if you use new versions of libgcrypt and GnuTLS today, they take advantage of this API (though that change wasn't exactly a walk in the park).

On the other hand, getrandom() is still a low-level interface, and may not be suitable for direct use by applications expecting a safe high-level interface. If one carefully reads its manual page, one will notice that the API may return less data than requested (if interrupted by a signal), and today this system call is not even wrapped by glibc. That means it can be used only via the syscall() interface. An illustration of (safe) usage of this system call is given below.

#include <sys/syscall.h>
#include <unistd.h>
#include <errno.h>
#include <stdint.h>
#define getrandom(dst,s,flags) syscall(SYS_getrandom, (void*)dst, (size_t)s, (unsigned int)flags)

static int safe_getrandom(void *buf, size_t buflen, unsigned int flags)
{
  ssize_t left = buflen;
  ssize_t ret;
  uint8_t *p = buf;

  while (left > 0) {
    ret = getrandom(p, left, flags);
    if (ret == -1) {
      if (errno != EINTR)
        return ret;
    }
    if (ret > 0) {
      left -= ret;
      p += ret;
    }
  }
  return buflen;
}

The previous example code assumes that the Linux kernel supports this system call. For portable code which may run on kernels without it, a fallback to /dev/urandom should also be included.
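Such a fallback could look roughly like the following sketch, which mirrors the retry logic above; it is illustrative only and omits the descriptor caching and extra sanity checks a real library would include.

#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <stdint.h>

/* Fill buf with buflen bytes read from /dev/urandom; returns -1 on error. */
static int urandom_fallback(void *buf, size_t buflen)
{
        uint8_t *p = buf;
        size_t left = buflen;
        ssize_t ret;
        int fd;

        fd = open("/dev/urandom", O_RDONLY);
        if (fd == -1)
                return -1;

        while (left > 0) {
                ret = read(fd, p, left);
                if (ret == -1 && errno == EINTR)
                        continue;
                if (ret <= 0) {
                        close(fd);
                        return -1;
                }
                p += ret;
                left -= ret;
        }
        close(fd);
        return (int)buflen;
}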

From the above, it is apparent that using the Linux-kernel provided interfaces to access the kernel CPRNG is not easy. The old (/dev/*random) interfaces are difficult to use correctly, and while the getrandom() call eliminates several of their issues, it is not straightforward to use, and is not available in Linux kernels prior to 3.17. Hence, if applications require access to a CPRNG, my recommendation would be to avoid using the kernel interfaces directly, and use any APIs provided by their crypto library of choice. That way the complexity of system discovery and any other peculiarities of these interfaces will be hidden. Some hints and tips are shown in the Fedora defensive coding guide (which may be a bit out-of-date but is still a good source of information).
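For instance, with GnuTLS the application-facing API is gnutls_rnd(); the one-liner below (a sketch) obtains key material and leaves the interface discovery and fallbacks to the library.

#include <gnutls/crypto.h>

/* Fill 'key' with 'len' bytes suitable for use as key material. */
static int get_key_material(unsigned char *key, size_t len)
{
        return gnutls_rnd(GNUTLS_RND_KEY, key, len);
}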

Thursday, June 2, 2016

Restricting the scope of CA certificates

The granting of an intermediate CA certificate to a surveillance firm generated quite some fuss. Setting theories aside, the main reason behind that outcry, is the fact that any intermediate CA certificate trusted by the browsers has unlimited powers to certify any web site on the Internet. Servers can protect themselves against an arbitrary CA generating a valid certificate for their web site, using certificate pinning, but there is very little end-users can do. In practice, end-users either trust the whole bundled CA list in their browser/system or not.

An option for end-users is to utilize trust on first use, but that is not a widespread practice, and little software, besides SSH, supports it. A way for me as a user to defend against a CA I believe to be rogue is to disable or remove that CA from my trusted bundle. But what if I trust that CA for a particular web site or domain, but not for the whole Internet?

In this post I'll try to provide more information on some lesser documented aspects of p11-kit, which provide additional control over the CA certificate bundle in a system. That is, I'll explain how we can do better than disabling CAs, and how we can restrict CAs to particular domains. The following instructions are limited to Fedora 22+, which has deployed a shared trust database for certificates based on p11-kit. This database is not only an archive of trusted certificates, but also provides the option to attach additional attributes to CA certificates in the form of PKIX extensions. These extensions are called stapled extensions in p11-kit jargon, and they override any extensions available in the trusted certificates. That allows enforcing additional restrictions on the purpose and scope of a certificate.

I'll attempt to demonstrate this feature using an example. Let's consider the case where your employer's IT department provided you with a CA certificate to trust for communications within the company. Let's also assume that the company's internal domain is called "example.com". In that scenario, as a user, I'd like to restrict the provided CA certificate to the example.com domain, to prevent anyone with access to the corporate private key from being able to hijack any connection outside the company scope. This is not only out of paranoia against a potential corporate big-brother, but also to keep good security practice and avoid having master keys. A stolen corporate CA key which is trusted for everything under the sun provides a potential attacker not only with access to the company's internal communication, but also with access to the Internet communication of any corporate user.

How would we install such a certificate in a way that it is restricted only to example.com? Assuming that the CA certificate is provided in the example.com-root.pem file, the following command will add the company's certificate to the trusted list.
$ sudo trust anchor example.com-root.pem

That will create a file in /etc/pki/ca-trust/source containing the CA certificate (for more information on adding and removing CA certificates in Fedora see the update-ca-trust manpage).

If we edit this file we will see something like the following.
[p11-kit-object-v1]
trusted: true
x-distrusted: false
private: false
certificate-category: authority
-----BEGIN CERTIFICATE-----
MIIDsDCCAxmgAwIBAgIBATANBgkqhkiG9w0BAQUFADCBnTELMAkGA1UEBhMCVVMx
...
-----END CERTIFICATE-----

This contains the certificate of the CA as well as various basic flags set to it.
How can we now attach a stapled extension to it?

We need to add another object to that database containing the extension. But let's go through the process step by step. First we need to extract the certificate's public key, because that's how p11-kit identifies existing objects. A command to achieve that is the following:
$ certtool --pubkey-info --infile example.com-root.pem --outfile example.com-pubkey.pem

The output file will contain a public key in PEM format (identifiable by the "-----BEGIN PUBLIC KEY-----" header). We now edit the p11-kit file in /etc/pki/ca-trust/source containing our certificate and append the following.
[p11-kit-object-v1]
class: x-certificate-extension
label: "Example.com CA restriction"
object-id: 2.5.29.30
value: "%30%1a%06%03%55%1d%1e%04%13%30%11%a0%0f%30%0d%82%0b%65%78%61%6d%70%6c%65%2e%63%6f%6d"
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----

Where the public key part is copied from the example.com-pubkey.pem file.

This added object is a stapled extension containing a PKIX name constraints extension which allows this CA to be used only for certificates under the "example.com" domain. If you attempt to connect to a host outside that domain with a certificate of this CA you will get the following error:
$ gnutls-cli www.no-example.com
...
Status: The certificate is NOT trusted. The certificate chain violates the signer's constraints.
*** PKI verification of server certificate failed...

Note that, although NSS and OpenSSL applications check some extensions (such as key purpose) from this trust database, they do not consider the name constraints extension. This may change in the future, but currently only GnuTLS applications under Fedora will honor this extension. The reason it works under the Fedora distribution is that GnuTLS is compiled using the --with-default-trust-store-pkcs11="pkcs11:" configure option, which makes it use the p11-kit trust DB directly.

A question at this point, after seeing the p11-kit object format, is how we can generate the "value" listed above containing the desired constraints. The value contains a DER-encoded certificate extension corresponding to the object identifier in the "object-id" field. In this case the object-id field contains the object identifier of the NameConstraints extension (2.5.29.30).

Unfortunately there are no available tools to generate this value, that I'm aware of. I created a sample application which will generate a valid name constraints value to be set above. The tool can be found at this github repository.

After you compile, run:
$ ./nconstraints mydomain.com myotherdomain.com
%30%30%06%03%55%1d%1e%04%29%30%27%a0%25%30%0e%82%0c%6d%79%64%6f%6d%61%69%6e%2e%63%6f%6d%30%13%82%11%6d%79%6f%74%68%65%72%64%6f%6d%61%69%6e%2e%63%6f%6d

and as you see, this command will provide the required string.
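For the curious, the core of such a tool can be sketched with GnuTLS's name-constraints API (available since GnuTLS 3.3): the sketch below builds a permitted-DNS-name constraint and exports its DER encoding, which then only needs to be printed in the %xx form shown above. Error handling is abbreviated, and the exact functions should be checked against your GnuTLS version.

#include <string.h>
#include <gnutls/x509.h>
#include <gnutls/x509-ext.h>

/* Export a NameConstraints extension permitting a single DNS domain. */
static int export_name_constraints(const char *domain, gnutls_datum_t *der)
{
        gnutls_x509_name_constraints_t nc;
        gnutls_datum_t name = { (unsigned char *)domain, strlen(domain) };
        int ret;

        ret = gnutls_x509_name_constraints_init(&nc);
        if (ret < 0)
                return ret;

        ret = gnutls_x509_name_constraints_add_permitted(nc,
                        GNUTLS_SAN_DNSNAME, &name);
        if (ret >= 0)
                ret = gnutls_x509_ext_export_name_constraints(nc, der);

        gnutls_x509_name_constraints_deinit(nc);
        return ret;
}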

Happy hacking!

Monday, June 15, 2015

Software Isolation in Linux

Starting from the assumption that software will always have bugs, we need a way to isolate and neutralize the effects of those bugs. One approach is to isolate components of the software such that a breach in one component doesn't compromise another. That became popular in the software engineering field with the principle of privilege separation used by the openssh server, and is the application of an old idea to a new field. Isolation for the protection of assets is widespread in several other aspects of human activity; the most prominent example throughout history is the process of quarantine, which isolates suspected carriers of epidemic diseases to protect the rest of the population. In this post, we briefly overview the tools available in the Linux kernel to offer isolation between software components. It is based on my FOSDEM 2015 presentation on software isolation in the security devroom, slightly enhanced.

Note that this text is intended for developers and software engineers looking for methods to isolate components in their software.

An important aspect of such an analysis is to clarify the threats it targets. In this text I describe methods which protect against code injection attacks. Code injection provides the attacker with a very strong tool to execute code, read arbitrary memory, etc.

In a typical Linux kernel based system, the available tools we can use for such protection, are the following.
  1. fork() + setuid() + exec()
  2. chroot()
  3. seccomp()
  4. prctl()
  5. SELinux
  6. Namespaces

The first allows for memory isolation by using different processes, ensuring that a forked process has no access to the parent's memory address space. The second allows a process to restrict itself to a part of the file system available in the operating system. These two are available in almost all POSIX systems, and are the oldest tools available, dating back to the first UNIX releases. The focus of this post is on methods 3-6, which are fairly recent and Linux kernel specific.

Before proceeding, let's make a brief overview of what a process is in a UNIX-like operating system. It is some code in memory which is scheduled to have a time slot on the CPU. It can access the virtual memory assigned to it, make calculations using the CPU's user-mode instructions, and that's pretty much all. To access anything else, e.g., to get additional memory, access files, or read/write the memory of other processes, system calls provided by the operating system have to be used.

Let's now proceed and describe the available isolation methods.

Seccomp

After that introduction to processes, seccomp comes as the natural method to list first, since it is essentially a filter for the system calls available to a process. For example, a process can install a filter to allow read() and write() but nothing else. After such a filter is applied, any code which is potentially injected into that process will not be able to execute any other system calls, reducing the attack impact to the allowed calls. In our particular example with read() and write(), only the data written and read by that process will be affected.

The simplest way to access seccomp is via the libseccomp library which has a quite intuitive API. An example using that library, which creates a whitelist of three system calls is shown below.

    
    #include <assert.h>
    #include <errno.h>
    #include <sys/ioctl.h>
    #include <seccomp.h>

    scmp_filter_ctx ctx;
    ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
    assert(ctx != NULL);
    assert(seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0) == 0);
    assert(seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0) == 0);
    assert(seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(ioctl), 1,
                            SCMP_A1(SCMP_CMP_EQ, (int)SIOCGIFMTU)) == 0);
    assert(seccomp_load(ctx) == 0);

The example above installs a filter which allows the read(), write() and ioctl() system calls. The latter is only allowed if the second argument of ioctl() is SIOCGIFMTU. The first line of the filter setup instructs seccomp to return -1 and set errno to EPERM when a system call outside this filter is made.

The drawback of seccomp is the tedious process a developer has to go through in order to figure out the system calls used in his application. Manual inspection of the code is required to discover the system calls, as well as inspection of run traces using the 'strace' tool. Manual inspection will allow making a rough list which provides a starting point, but it will not be entirely accurate. The issue is that calls to libc functions may not correspond to the expected system call. For example, a call to exit() often results in an exit_group() system call.

The trace obtained using 'strace' will help clarify, restrict or extend the initial list. Note, however, that using traces alone may miss the system calls used in error-condition handling, and different versions of libc may use different system calls for the same function. For example, the libc call select() uses the system call select() on the x86-64 architecture, but _newselect() on x86.


The performance cost of seccomp is the cost of executing the filter, and for most cases it is a fixed cost per system call. In the openconnect VPN server I estimated the impact of seccomp on a worker process to 2% slowdown in transfer speed (from 624 to 607Mbps). The measured server worker process is executing read/send and recv/write in a tight loop.

Prctl

The PR_SET_DUMPABLE flag of the prctl() system call protects a process from other processes accessing its memory. That is, it will prevent processes with the same privileges as the protected one from reading its memory via ptrace(). A usage example is shown below.

    #include <sys/prctl.h>

    prctl(PR_SET_DUMPABLE, 0);

While this approach doesn't protect against code injection in general, it may prove a useful tool with low-performance cost in several setups.

SELinux

SELinux is an operating system mechanism to prevent processes from accessing various "objects" of the OS. The objects can be files, other processes, pipes, network interfaces, etc. It may be used to enhance seccomp with more fine-grained access control. For example, one may set up a rule with seccomp to only allow read(), and enhance that rule with SELinux so that read() only accepts certain file descriptors.

On the other hand, a software engineer may not be able to rely too much on SELinux to provide isolation, because it is often not within the developer's control. It is typically used as an administrative tool, and the administrator may decide to turn it off, set it to non-enforcing mode, etc.

The way for a process to transition to a different SELinux ruleset is via exec() or via the setcon() call, and its cost is perceived to be high. However, I have no performance tests with software relying on it.
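A minimal sketch of the setcon() route is shown below; the target context string is hypothetical and would have to exist in the loaded policy.

    #include <selinux/selinux.h>

    /* Request a transition to a more confined SELinux domain (libselinux). */
    static int drop_to_isolated_domain(void)
    {
            return setcon("system_u:system_r:myapp_isolated_t:s0");
    }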

The drawbacks of this approach are the centralized nature of the system policy, meaning that individual applications can only apply the existing system policy, not update it; the obscure language (m4) in which a system policy needs to be written; and the fact that any policy written will not be portable across different Linux-based systems.

Linux Namespaces

One of the most recent additions to the Linux kernel is the namespaces feature. Namespaces allow "virtualizing" certain Linux kernel subsystems per process, and the result is often referred to as containers. The feature is available via the clone() and unshare() system calls. It is documented in the unshare(2) and clone(2) man pages, but let's see some examples of the subsystems that can be isolated.

  • NEWPID: Prevents processes from "seeing" and accessing process IDs (PIDs) outside their namespace. That is, the first isolated process will believe it has PID 1 and will see only the processes it has forked.
  • NEWIPC: Prevents processes from accessing the main IPC subsystem (shared memory segments, message queues etc.). The processes will have access to their own IPC subsystem.
  • NEWNS: Provides filesystem isolation, acting as a feature-rich chroot(). It allows, for example, creating isolated mount points which exist only within a process.
  • NEWNET: Isolates processes from the main network subsystem. That is, it provides them with a separate networking stack, device interfaces, routing tables etc.
Let's see an example of an isolated process being created in its own PID namespace. The following code operates as fork() would.

    /* Body of a fork()-like function; requires <sys/syscall.h>, <unistd.h>,
     * <signal.h> (SIGCHLD) and <sched.h> (CLONE_NEWPID). */
    #if defined(__i386__) || defined(__arm__) || defined(__x86_64__) || defined(__mips__)
        long ret;
        int flags = SIGCHLD|CLONE_NEWPID;
        ret = syscall(SYS_clone, flags, 0, 0, 0);
        /* in the child of the new namespace, getpid() must return 1 */
        if (ret == 0 && syscall(SYS_getpid) != 1)
                return -1;
        return ret;
    #endif

This approach, of course, has a performance penalty in the time needed to create a new process. In my experiments with the openconnect VPN server, creating a process with the NEWPID, NEWNET and NEWIPC flags took about 10 times longer than a call to fork().

Note, however, that the isolation provided by the subsystems available in clone() is by default reversible using the setns() system call. To ensure that these subsystems remain isolated even after code injection, seccomp must be used to eliminate calls to setns() (many thanks to the FOSDEM participant who brought that to my attention).
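With a whitelist filter such as the one in the seccomp section, setns() is simply left out of the allowed set; a blacklist-style sketch that only blocks setns() while allowing everything else could look like this.

    #include <errno.h>
    #include <seccomp.h>

    /* Allow all system calls except setns(), which returns EPERM. */
    static int deny_setns(void)
    {
            scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
            int ret = -1;

            if (ctx == NULL)
                    return -1;
            if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM),
                                 SCMP_SYS(setns), 0) == 0 &&
                seccomp_load(ctx) == 0)
                    ret = 0;
            seccomp_release(ctx);
            return ret;
    }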

Furthermore, the approach of namespaces follows the principle of blacklisting, allowing a developer to isolate from certain subsystems, but not from every one available in the system. That is, one may enable all the isolation subsystems available in the clone() call, and then realize that the kernel keyring is still available to the isolated process. That is because no isolation mechanism has been implemented for the kernel keyring so far.

Conclusions


In the following table I attempt to summarize the protection offerings of each of the described methods.


                          Prevent killing   Prevent access to    Prevent access    Prevent exploitation of an
                          other processes   memory of other      to shared         unused system call bug
                                            processes            memory
    Seccomp               True              True                 True              True
    prctl(SET_DUMPABLE)   False             True                 False             False
    SELinux               True              True                 True              False
    Namespaces            True              True                 True              False

In my opinion, seccomp seems to be the best option to consider as an isolation mechanism when designing new software. Together with Linux namespaces it can prevent access to other processes, shared memory and the filesystem, but the main distinguisher I see is the fact that it can restrict access to unused system calls.

That is an important point, given the number of available system calls in a modern kernel. Not only does limiting them reduce the overall attack surface, it also denies access to functionality that was never intended to be exposed -- see the setns() and kernel keyring issues above.