Guiding Rules

BearSSL tries to find a reasonable trade-off between several partly conflicting goals:

  • security: defaults should be robust, and using patently insecure algorithms or protocols should be made difficult in the API, or simply not possible;

  • interoperability with existing SSL/TLS servers;

  • allowing lightweight algorithms for CPU-challenged platforms;

  • extensibility: strong and efficient implementations can be added on big systems where code footprint is less important.

One interesting feature of the SSL/TLS handshake is that client and server may support several cipher suites and protocol versions, and the handshake mechanics should ensure that a preference order will be honoured (client’s or server’s preferences, this is configurable). Thus, sub-optimal (i.e. “less safe”) alternatives can still be supported and will not be used unless there is no other choice. For instance, a client or a server may support TLS_RSA_WITH_3DES_EDE_CBC_SHA at the very bottom of its list, thus making sure that it will be used only as a last resort, if nothing better is possible; and, arguably, for all its shortcomings, 3DES is still better than unencrypted communications.

This property holds against attacks: attackers cannot force a client and a server to use a cipher suite that they both support if they also support a “better” suite that they both prefer; this is achieved through the Finished messages at the end of the handshake (the hash of all handshake messages is used to compute the verification messages, so any malicious alteration will be detected at that point). This resistance to downgrade attacks is ensured as long as the two following conditions are met:

  • Clients do not voluntarily downgrade themselves. Some widespread clients (Web browsers…) tend to react to handshake failures by trying again, with a trimmed down list of cipher suites and a lower protocol version. They do that in order to support broken servers that fail if presented with options that, according to the protocol specification, they should ignore.

  • None of the cipher suites supported by the client and server allows the handshake to be thoroughly broken right away. To a large extent, this is what the “FREAK” and “Logjam” attacks are about: a client and server support algorithms (and key sizes) which are so weak that the attacker can see through them before the end of the handshake. Thus, the attacker can alter the ClientHello message to force use of a very weak cipher suite, break the thing, and fix the Finished messages to make the alteration unseen.

Protocol Versions

Supported Versions

BearSSL implements TLS 1.0, 1.1 and 1.2.

Support for 1.0 is included for interoperability with existing, deployed implementations; a TLS-1.2-only library would be slightly smaller, but less applicable in the real world, which would not fulfil the goal of “being useful”. Supporting TLS-1.0 securely requires some care in the implementation (see below) but it can be done.

Note that while the BearSSL code supports all three protocol versions, these can be deactivated easily, from the calling application. Indeed, the br_ssl_engine_set_versions() function allows specifying the minimum and maximum versions that a given client or server will support. The default behaviour (“full” profile) is to support all three versions, but any application can thus change that with a single, simple function call.

TLS-1.3 is not supported (yet) because it does not exist (yet). There are drafts. I do not wish to implement draft protocols, because drafts may change, and this may create compatibility dilemmas: if I implement a draft version of the protocol and something is later changed in an incompatible way, then the BearSSL implementation cannot be changed without possibly breaking existing applications [1]. RFCs are immutable, so they can be implemented without fear of such breakage; if TLS-1.3 gets finalised, and a problem is found in it that requires a protocol change, the fixed protocol will be TLS-1.4, declared as such. From the current state of the draft, formal publication should come soon, especially since some big actors in the Web industry (including major browsers) have begun deploying TLS-1.3 (so, because of the breakage issues explained above, further changes to the draft are improbable).

SSL-3.0 is not supported because it has an unfixable issue with padding in CBC mode (the “POODLE” attack). Adding SSL-3.0 support would not be hard (it is very similar to TLS-1.0) but it would be dangerous, and it should not be necessary in most practical situations: since TLS-1.0 dates back to 1999, implementations that do not support it are badly in need of updates anyway. There are places where servers that know only SSL-3.0 are still deployed, but I made the decision that BearSSL won’t support them; this is my contribution to the noble goal of killing them off.


Downgrade Attacks

There are some partial mechanisms that may help in dealing with voluntary downgrades. When using RSA key exchange, the client sends a 48-byte pre-master secret, encrypted with the server’s public key; the first two bytes of that pre-master secret should match the highest protocol version that the client supports. The server verifies that this value matches what was received in the ClientHello. This is done in BearSSL in ssl_hs_server.t0 using the method recommended by RFC 5246: the first two bytes are overwritten, after decryption, with the expected value; on mismatch, this results in the computation of a wrong master secret, which makes the handshake fail at the Finished step.

Another, more general mechanism, is the use of a special pseudo cipher suite (TLS_FALLBACK_SCSV), as described in RFC 7507. This has been implemented in version 0.2, on the server side. This cipher suite allows the client to document that it runs with a protocol version that is lower than what it could do; the server should then reject the connection if it could also use a higher version, because that indicates foul play (a previous connection attempt with a higher version failed, possibly because of an intervention from an attacker).

BearSSL being a purely computational library, it does not perform any connection itself, so it won’t downgrade. Arguably, voluntary downgrades are best avoided, and clients should not do them. If you really want to downgrade your client, then make sure that downgraded connection attempts include the TLS_FALLBACK_SCSV pseudo-cipher suite in their list of cipher suites, so that the server may reject such connections in case of undue downgrades.

Key Exchange

Forward Secrecy

BearSSL supports RSA, ECDH and ECDHE key exchanges (in the latter case, both ECDHE_RSA and ECDHE_ECDSA are supported). The ECDHE key exchange provides the desirable property called forward secrecy, but at a cost.

Forward secrecy really makes sense in a context where the server’s “permanent” secret key (the one corresponding to its certificate) might be compromised, but a “transient” secret key (the private multiplier for ECDHE) is immune to such theft. In older times, such a distinction was easily made by assuming that private key compromise was about reading a file (through some Web site vulnerability) while the RAM contents of the server were out of reach of attackers. Such things are less clear now, especially since many servers are virtual machines backed by hardware that the guest OS cannot control; what the guest sees as “pure RAM” (forced out of swap with mlock()) may still hit disks somewhere, as part of an overbooking or snapshotting policy.

Possible leakage of transient secrets is compounded by parameter reuse. Some SSL/TLS servers will reuse their DHE or ECDHE private keys over some period of time (some renew them every few minutes; others keep them alive for weeks). This saves a bit of computing power, but goes against forward secrecy. Parameter reuse also implies that misbehaviour on invalid input (e.g. a client sends an invalid curve point in its ClientKeyExchange message and the server fails to validate it) may leak information that can be reused to attack other connections.

For these reasons, when using ECDHE, BearSSL generates a new secret for every connection (both on client and server side); on the server, it keeps it only as long as necessary.

Still, an ECDHE cipher suite requires both the ECDH code and the signature system (RSA or ECDSA), and the extra cost of generating or verifying that signature. Thus, ECDHE is necessarily more expensive, both in CPU and code footprint, than plain RSA or static ECDH. This is especially true on the client side, where RSA key exchange is very light (since RSA public exponents are typically very small).

This is why BearSSL still offers support for the non-forward secure cipher suites: while forward secrecy is desirable, there are contexts where the extra latency implied by ECDHE (because of the computation cost on a low-power system) is not tolerable. Still, in its default setup, the ECDHE cipher suites are on top of the list of supported cipher suites, so you get all the forward secrecy goodness unless you explicitly refuse it.


RSA

RSA key exchange uses PKCS#1 encryption (the “old-style” padding from PKCS#1 v1.5). This has a few known problems, which BearSSL works around:

  • RSA decryption is subject to potential timing attacks. In fact, timing attacks on RSA were the first timing attacks ever published, by Paul Kocher in 1996. Various counter-measures have been described, most based on masking with random values, but randomness is a hard requirement, and it was quite inconvenient, in the API, to propagate access to an RNG into the RSA code. Instead, BearSSL uses a constant-time RSA implementation (e.g. see the modular exponentiation code), which is, by construction, immune to timing attacks.

    (The code may still leak information on the size of the private RSA factor, but with a 2048-bit modulus, everybody knows that the factors will be about 1024-bit each.)

  • In 1998, Bleichenbacher described an attack by which a single decryption could be performed by using a server as an oracle, based on whether the pre-master secret decryption yielded a proper PKCS#1 “type 2” padding or not. To avoid that issue, BearSSL also generates a random phony pre-master secret and substitutes it for the actual one with a constant-time conditional copy, in case the padding is not correct. The padding verification is also constant-time. See the br_ssl_rsa_decrypt() function.

  • RSA keys which are too short can be broken through a mixture of complicated mathematics and big computers. The RSA key of the server is normally obtained by the client through validation of the server’s certificate (although it may also be known by some “out of band” mechanism, e.g. hardcoded in the firmware of some embedded system). The minimal X.509 validation engine of BearSSL will by default reject RSA keys shorter than 1017 bits (i.e. 128 bytes). This is configurable. Also, BearSSL does not support the “export” cipher suites.

BearSSL’s current RSA implementations are less than optimal with regards to performance; they are in pure C, with only 32-bit multiplications. Better implementations shall be added in subsequent versions.

Elliptic Curves

BearSSL currently includes eight elliptic curve implementations, plus an extra two “virtual” implementations that aggregate the other ones. These support NIST curves P-256, P-384 and P-521, and Curve25519.

The ec_prime_i31 implementation uses the generic “i31” big integer code, also used for other algorithms (e.g. RSA), to implement the NIST curves. ec_c25519_i31 uses the “i31” code for Curve25519. Using the generic “i31” code saves code space but yields suboptimal performance.

The ec_p256_m31 and ec_c25519_m31 implementations support P-256 and Curve25519, respectively, with specialised code, including modular reduction routines that leverage the special format of the field modulus, and an internal split of data into sequences of 30-bit words, which helps with carry propagation. ec_p256_m31 also includes fixed-point optimisations, for the common case of multiplying the conventional generator point. These implementations are faster than the generic “i31” code, but with a larger code footprint.

The ec_all_m31 implementation is simply a wrapper that uses ec_p256_m31, ec_c25519_m31 and ec_prime_i31 to efficiently support the NIST curves and Curve25519.

The ec_prime_i15, ec_c25519_i15, ec_p256_m15, ec_c25519_m15 and ec_all_m15 implementations are similar to the “i31/m31” family, but with an internal representation using smaller words. The “i31/m31” functions use 32→64 multiplications, but the “i15/m15” code uses only 32→32 multiplications. On the small ARM Cortex-M, the “i15/m15” implementations are not only faster, but also constant-time, whereas the “i31/m31” functions might not be constant-time because of specificities of the hardware (especially the M0, M0+, M1 and M3).

For Curve25519, the standard Montgomery ladder is used.

For the NIST curves, Jacobian coordinates and windows are used; window lookups are done with care in order to remain constant-time, thereby not leaking information on the multiplier bits (see the code). Points are validated upon decoding (by verifying the curve equation). Since multipliers are non-zero and less than the curve order, and the curve order is prime, this ensures that “tricky situations” (e.g. addition of two points that are actually equal or opposite) are kept under control.

NIST originally specified 15 curves. Curves smaller than 256 bits are being deprecated because of their perceived weakness against powerful attackers [2]. Binary curves, despite their nice characteristics, have also become unfashionable because of some theoretical results on the asymptotic complexity of discrete logarithm on binary curves at large degrees. These curves are mostly unused, and will probably become even rarer in practice; therefore, they are not implemented by BearSSL.

ECDSA is famous for having trouble with the random generation of the per-signature secret value (“k”). BearSSL fixes that by using the deterministic signature scheme described in RFC 6979.

Symmetric Encryption

BearSSL currently implements AES/CBC, AES/GCM and 3DES/CBC cipher suites. It does not implement RC4, because RC4 has serious biases and its use in TLS is explicitly forbidden (RFC 7465). BearSSL also implements (as of version 0.2) two ChaCha20+Poly1305 AEAD cipher suites (specified in RFC 7905).

CBC vs GCM vs ChaCha20+Poly1305

GCM and ChaCha20+Poly1305 are nice AEAD modes (authenticated encryption with associated data) that solve many of the issues related to CBC mode in SSL/TLS. In current implementations, ChaCha20+Poly1305 is both smaller and faster than the AES/GCM modes; however, it has only recently been specified, so not many servers support it yet. AES/GCM offers wider interoperability. In its default configuration, BearSSL offers both, with ChaCha20+Poly1305 being preferred.

However, even AES/GCM support is not yet universal (for one thing, it requires TLS-1.2), so I included CBC cipher suites. These cipher suites can be implemented securely, but doing so requires great care. From about 2010 onward, a number of attacks have been widely publicised that relate to weaknesses in how CBC encryption is performed in SSL/TLS; the theory behind these attacks was already known (most of them were conceptually described around 2003), but the year 2010 marked the moment people realised that chosen-plaintext and chosen-ciphertext attacks were actually workable in practical situations, thanks to Web browsers, which will happily run hostile code (in Javascript).

BEAST is a chosen-plaintext attack which works thanks to the attacker being able to predict the IV for CBC encryption of the next record, and choosing the plaintext data for that record accordingly. In TLS-1.0, the IV for a record is the last encrypted block of the previous record, which is easily observable; in TLS-1.1 and later, each record has its own IV.

  • When using TLS-1.0 and a CBC cipher suite, BearSSL does the “1/n-1 split”. Namely, if the record to send contains at least two bytes of plaintext, it is split into two records, the first one containing only one byte of plaintext. This basically leverages the HMAC on that first record as an unpredictable source (from the point of view of the attacker) for the IV used in the next record. This appears to be an effective defence and, to the best of our knowledge, fixes the issue.

    BearSSL applies the split only for application data records, not handshake records such as the one containing the Finished message, because splitting that record breaks old versions of OpenSSL (in the 0.9.8 era). See the source code to observe the splitting.

  • In TLS-1.1 and later, the per-record “random IV” is obtained by computing HMAC over the sequence number (see the code). This is safe assuming that HMAC with an unknown key behaves like a random oracle (all the “normal” per-record HMAC computations use an input of at least 13 bytes, so this extra HMAC necessarily operates on an input which does not collide with these other HMAC instances); there are some reasonably good reasons to trust that “random oracle” assumption in practice (see this classical article for details).

The “Lucky Thirteen” attack exploits the fact that SSL/TLS famously does things in the wrong order for records: it uses MAC-then-encrypt instead of encrypt-then-MAC. In practice, this means that SSL implementations must handle the padding before verifying the MAC, and the padding check can be leveraged into padding oracle attacks. A basic countermeasure is to verify the MAC even if the padding is incorrect, but this implies using a specific input length, which may result in a detectable timing difference; that is what the “Lucky Thirteen” attack is about.

BearSSL implements thoroughly constant-time processing of incoming CBC records. This implies delving into low-level details of HMAC; the method was nicely described by Adam Langley, and BearSSL draws on it, though adding an optimisation (the “MAC rotation” is done in n·log n steps instead of n² for an n-byte HMAC output, and there is no integer division opcode).

The implementation can be seen in the source code, along with its HMAC-specific parts.

CRIME is maybe the most fundamental of the SSL/TLS attacks of the 2010–2015 years. It expresses the basic incompatibility of encryption and compression: encryption hides data contents but not data length; compression reduces length based on data contents; thus, compression makes length depend on contents, and thereby secrets leak. There is little option but to avoid compression altogether, and that is what BearSSL does: the only compression algorithm it implements is the trivial “no compression” algorithm. Note that while BearSSL not doing any compression makes it non-guilty of that specific information-theoretic sin, it does not, and cannot, prevent an application from using compression by itself and thus leaking secret information (this has been called “BREACH” for HTTP-level compression).

SWEET32 is an incarnation of a problem which has been known for decades: the security properties of CBC encryption begin to break down when too much data is encrypted with a single key. That notion of “too much” depends on the block size: with blocks of n bits, it takes about 2^(n/2) blocks for problems to become, indeed, a problem. Back in 1997, when the AES competition began, candidates had to use 128-bit blocks precisely so that the quantity of data would be so big that the issue would never arise. However, while AES has 128-bit blocks, 3DES uses 64-bit blocks, so there may be data leaks beyond a few dozen or hundred gigabytes of data exchanged in a single SSL/TLS connection, a huge but no longer impractical amount. SWEET32 is a demonstration of exactly that. The only cure is not to use a 64-bit block cipher in CBC mode, or to take care to renew connections (possibly with a handshake renegotiation) after a few gigabytes have been exchanged.

BearSSL supports 3DES but (by default) puts it at the very bottom of its list of supported cipher suites, so that it may be used only in cases where the alternative is not using SSL/TLS at all, which is still worse than SWEET32.

Hash Functions

BearSSL implements MD5, SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512. Some or all of these functions are used in various places of the protocol. Moreover, they can all be individually activated or deactivated; “deactivation” also allows not pulling the code into the linked binary, hence saving on code footprint.

SSL Protocol

In SSL/TLS, hash functions are used in the following places:

  • To compute the hash of all handshake messages up to some point; this is used as input to the computation of the Finished messages, and for the CertificateVerify message (for client certificates). For TLS 1.0 and 1.1, MD5 and SHA-1 are used for such a job; in TLS 1.2, the chosen hash function depends on the cipher suite (normally SHA-256, or SHA-384 for some specific suites).

  • As part of the HMAC invocations hidden in the “PRF” which is used to do some “secret extension” operations:

    • To turn the pre-master secret (from the key exchange algorithm) into the master secret.

    • To compute the keys and IV for symmetric encryption and MAC.

    • To compute the contents of the Finished messages.

    There again, with TLS 1.0 and 1.1, the PRF will rely on MD5 and SHA-1, while SHA-256 or SHA-384 is used in TLS-1.2.

  • As part of the signatures computed over the ServerKeyExchange (for ECDHE cipher suites) and CertificateVerify (for client certificates) messages. In TLS 1.0 and 1.1, SHA-1 (completed with MD5 for RSA signatures) is used, while TLS-1.2 leaves the choice to the client and server, depending on what they support and what they claim to support.

  • For the per-record HMAC computations. Hash function depends on the cipher suite (with TLS 1.0 and 1.1, this can be only MD5 or SHA-1; with TLS 1.2, SHA-256 and SHA-384 are also possible).

It so happens that none of these usages has any known practical weakness, even when used with MD5, which is otherwise known to be thoroughly broken with regards to collisions. In any case, BearSSL implements no cipher suite that uses HMAC/MD5 for records (no such cipher suite is defined with AES or 3DES encryption).

Certificate Validation

X.509 certificate validation implies verifying signatures on certificates. Each signature uses an underlying hash function. Pairs of colliding messages for signatures have been used for attacks in the case of MD5, thereby demonstrating that the issue is real.

Normally, modern certification authorities employ defensive measures against such attacks, namely by using random serial numbers for the certificates they issue; since the serial number occurs very early in the certificate structure, this puts the attacker back into the “second preimage” situation, where collisions do not matter per se, and even MD5 appears to be robust. If the CAs do their job properly, then there should be no further concern about collision weaknesses in hash functions (and if the CAs do not do their job properly, then this raises the question of why you would trust them at all).

Nevertheless, BearSSL does not support MD5 in signatures, since it has been deprecated everywhere. However, BearSSL supports SHA-1 by default, and this proves controversial. There is an ongoing push by Web browser and OS vendors to reject certificates signed with SHA-1 as the supporting hash function; the exact reasons for such a push are not all technical. In fact, to a large extent, there is a marketing issue, in which no major vendor can afford to appear complacent with regard to security; also, the deprecation of SHA-1 in certificates is a good excuse to evict the implementations that do not support SHA-256 (both on the CA side, to get rid of a number of institutional but poorly maintained CAs, and on the application side, to force an upgrade of software that has gone out of support for far too long).

In the X.509 validation engine provided with BearSSL, the supported hash functions are configured separately from those used in the client or server context (use br_x509_minimal_set_hash(ctx, br_sha1_ID, 0) to disable SHA-1). Right now, for better interoperability, SHA-1 is enabled by default (when using the “full” client profile); this may change in the future, but, in my view, it would be too early, considering that the involved security risks in practice are very exaggerated.

  1. Zeev Tarantov (@ZTarantov) pointed out to me that TLS-1.3 drafts use a draft-specific version, e.g. draft 17 is really “SSL 127.17”. While this allows incompatibilities to be reliably detected, it still means that once a library begins to implement a draft, it must keep on implementing it, lest deployed applications break if the library is updated on the client side while the server is not.

  2. A case could be made that P-192, and even more so P-224, are still way beyond the reach of Earth-based attackers, but they still get deprecated in future RFCs currently in draft status.