
Common cryptography mistakes for software engineers

Aljoscha Lautenbach

Most implementations of security mechanisms depend on cryptography, and yet many vulnerabilities exist because cryptography is used incorrectly. This is partly due to the lacking user-friendliness of cryptographic library API designs [1][2], and partly due to a lack of education in the developer community about the underlying mechanisms. As for API design, we can only lobby for more user-focused design during library development and advocate for user-friendly libraries. We can, however, try to improve the communal understanding of how to use cryptography securely. By way of examples, this talk explores questions such as: What is an IV and why does it matter? Why does entropy matter? Which cipher mode is appropriate for my application? In essence, we highlight points to watch out for when implementing security mechanisms with cryptography.
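To illustrate why the IV question from the abstract matters: with a stream cipher or a block cipher in CTR mode, reusing an IV under the same key reuses the keystream, so an attacker who captures two ciphertexts can XOR them to recover the XOR of the plaintexts without ever learning the key. A minimal pure-stdlib sketch (random bytes stand in for AES-CTR keystream output; no real cipher is involved):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stands in for the keystream AES-CTR would produce for one fixed key+IV pair.
keystream = secrets.token_bytes(32)

p1 = b"attack at dawn".ljust(32, b" ")
p2 = b"retreat at six".ljust(32, b" ")

# Both messages encrypted with the SAME key and IV -> same keystream.
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The attacker never sees the keystream, yet the keystream cancels out:
assert xor(c1, c2) == xor(p1, p2)
```

This is exactly the classic "two-time pad" failure; unique IVs per message are what prevent it.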

[1] https://www.cl.cam.ac.uk/~rja14/shb17/fahl.pdf, Comparing the Usability of Cryptographic APIs, IEEE S&P 2017

[2] http://mattsmith.de/pdfs/DevelopersAreNotTheEnemy.pdf, Developers are not the enemy! The need for usable security APIs, IEEE S&P 2016

Score: 0 | 6 months ago | no reply

Good one, loved it

Score: 0 | 9 months ago | no reply

Thank you for the presentation, it is very interesting!

Score: 0 | 11 months ago | no reply

I am implementing AES-GCM using the mbedTLS library. I read that NIST recommends a 96-bit IV; however, mbedTLS accepts any IV length. Looking at the mbedTLS source code, I can see that it hashes any IV whose length is not 96 bits down to a 96-bit value. What are the drawbacks of using an IV shorter than 96 bits for AES-GCM?
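For context on the 96-bit recommendation: GCM uses a 96-bit IV directly as the counter-block prefix, while any other length is first run through GHASH, which costs extra computation and, more importantly, means two distinct IVs can in principle collide after hashing. Since an IV collision in GCM is catastrophic (keystream reuse and loss of authentication security), NIST SP 800-38D recommends 96-bit IVs whose uniqueness you can guarantee by construction. A pure-stdlib sketch of the two common ways to build such nonces (no real AES-GCM call; the `device_id` field layout is an illustrative choice, not a standard):

```python
import secrets
import struct

def random_nonce() -> bytes:
    """Random 96-bit (12-byte) nonce.

    Simple, but after roughly 2**32 messages under one key the
    birthday-bound collision probability becomes non-negligible.
    """
    return secrets.token_bytes(12)

def counter_nonce(device_id: int, counter: int) -> bytes:
    """Deterministic 96-bit nonce: 4-byte fixed field + 8-byte counter.

    Unique by construction, as long as the counter never repeats or
    wraps for a given key.
    """
    return struct.pack(">IQ", device_id, counter)

assert len(random_nonce()) == 12
assert counter_nonce(1, 0) != counter_nonce(1, 1)
```

Either construction feeds mbedTLS's GCM API directly with a 96-bit IV, bypassing the internal hashing path entirely.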

Score: 0 | 11 months ago | no reply

Nice presentation! Thank you!

Score: 0 | 11 months ago | no reply

Thanks! Good talk! :)

Score: 3 | 11 months ago | 1 reply

During development of embedded products, it is common to use SSH to log in and execute commands for the purposes of debugging and validation. Once the product is out in the field, it is useful to keep SSH active for remote analysis, for example when the customer reports a problem. However that exposes the product to being attacked and controlled by unauthorized people. Do you have any recommendations on how commands (via SSH) and file transfers (via SCP) can continue to be possible, and yet allow us to have a high degree of confidence that only authorized people will be able to do that?

Aljoscha Lautenbach (Speaker)
Score: 1 | 11 months ago | no reply


That is a good question and hard to answer generically. One super simple thing I would always do if I could is move the port off 22, just to get off the radar of all the automated scanning that is going on. Of course, that is no protection in itself, and it is still easy to discover the SSH port, but an attacker first has to run a port scan to find it, and it is a bit easier to track connection attempts to the new port. Then, of course, password authentication and root login for SSH should be turned off.
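Those first hardening steps might look like this in `/etc/ssh/sshd_config` (the port number and user name below are illustrative placeholders, not recommendations from the talk):

```
# Move off the default port to reduce automated scanning noise
Port 2222

# Key-based authentication only; no interactive passwords
PasswordAuthentication no
PermitRootLogin no

# Optionally restrict which accounts may log in at all
AllowUsers svctech
```

All four directives are standard sshd_config options; restart sshd after editing for them to take effect.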

And then things become more dependent on your particular use case and setup. You could for instance have a pool of 10 keypairs: the 10 public keys are preinstalled on your devices for a specific service technician login (use sudo if needed), and each private key is protected with an automatically generated password that you store safely somewhere (e.g., a company safe or a password manager with backups). You will also have to keep the private keys themselves secure, but how and where depends on your situation: maybe each key is stored on a separate USB stick that is put in a safe, or on a YubiKey, or on separate servers that require extra authentication, or on the work laptops of the service technicians. There are many different options depending on the desired level of security.

And then you could also consider enabling two-factor authentication (2FA) for SSH: https://ubuntu.com/tutorials/configure-ssh-2fa#1-overview
Note that 2FA typically uses TOTP (https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm), so in this case the secret must be shared among all the devices and all the service technicians. That is only useful if the secret can be stored securely on the devices, for instance in secure storage using TrustZone or something similar.
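For reference, TOTP as mentioned above is small enough to sketch with only the Python standard library. This follows RFC 4226 (HOTP) and RFC 6238 (TOTP) with SHA-1, and the final assertion checks the implementation against a published RFC 6238 test vector:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian 64-bit counter,
    # then "dynamic truncation" to a 31-bit integer.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP with the counter derived from Unix time.
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits): T = 59 -> "94287082"
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

The shared secret here is exactly what must be stored securely on every device, which is why the TrustZone-style secure storage mentioned above matters.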

Best regards,

Score: 2 | 11 months ago | 1 reply

Do you have any opinion on Monocypher? I like that it is easy to use, safe by default, readable, and easy to integrate into small embedded systems.
Disclaimer: I created Python bindings to assist with host-side development.

I am not sure how well it would hold up to attacks such as differential power analysis. Colin O'Flynn will likely cover this in his talk. XChaCha20 was designed to hold up reasonably well as a constant-time shift-XOR-add algorithm; Blake2b and Poly1305 are similar. I am not sure about the X25519 key exchange. Any thoughts, comments, or guidance?

Aljoscha Lautenbach (Speaker)
Score: 3 | 11 months ago | 1 reply

Hi Matt,

I had not heard of it before! The description on the website gives me pause, though. The fact that there seems to be a single developer who seems to value speed above all else does not give me a lot of confidence in the code base. Don't get me wrong, it could still be awesomely secure code, but I would personally not recommend it for production. As you point out yourself, in cryptography speed shortcuts can lead to side-channels or other vulnerabilities that are not necessarily obvious.
Then of course there are also simple maintenance questions that apply to every library: is it sufficiently well supported to be used in production, i.e., can you trust that bugs will be fixed in a timely manner for many years to come? Is there a community around it, or professional support to help with problems? At first glance, the answer for Monocypher does not look like a clear yes to me.

Best regards,

Score: 0 | 11 months ago | no reply

Hi Aljoscha,

Thanks for your reply! I have heard that Loup (the initial developer of Monocypher) got similar pushback from the crypto community. The project now has a small group of developers and maintainers, so at least that's good. The state of crypto on microcontrollers is pretty poor. Microcontroller security accelerators and secure elements help if you have them, but the software implementations are lacking. On the open-source side, the main libraries are TweetNaCl (done to prove a point) and mbedTLS (now used by Amazon FreeRTOS). As far as the "state of the art" goes, here is LoRaWAN's crypto implementation. Either way, crypto on micros is still an open, fragmented problem without a clear solution, in my opinion.

Thanks for the great talk!

  • Matt
Score: 1 | 11 months ago | 2 replies

Daniel Bernstein has written NaCl [1]. He has also written a smaller version called TweetNaCl. I'm attracted by the small size and ease of integration of TweetNaCl [2]. Should I stay away from it and prefer NaCl instead?
[1] http://nacl.cr.yp.to/
[2] https://tweetnacl.cr.yp.to/

Aljoscha Lautenbach (Speaker)
Score: 1 | 11 months ago | no reply

Hi again,
choosing a good library is tricky. The authors of NaCl are highly respected cryptographers, so I would expect secure code. If TweetNaCl fulfills your needs, I don't see why you could not use it. But sometimes it is difficult to know all your needs in advance; we recently found out that mbedTLS does not support PKCS#7 (CMS), which I found rather surprising. Sometimes features you expect to be there simply are not. ;)
Best regards,

Score: 0 | 11 months ago | no reply

Another option is Monocypher, which is certainly better than TweetNaCl.

Score: 4 | 11 months ago | 1 reply

Thanks Aljoscha! You mentioned that you had removed common mistakes from your presentation. Can you list out some of the other common mistakes so that we can at least be aware of them?

Aljoscha Lautenbach (Speaker)
Score: 2 | 11 months ago | no reply


Sure! I had to cut about half of the content because I ran horribly over time with my first version. Here are the points I cut:

  1. Inventing your own crypto algorithms or security protocols
  2. Implementing crypto algorithms from scratch
    ... (The ones in the presentation)
  3. No or poor key management strategy
  4. Excessive flexibility
  5. Insecure default configurations

The first two are not that common anymore, so even though they are very important points, I felt it would be OK to drop them. The last three are more practical implementation and design issues and only marginally crypto-related. The last point especially is still very common in embedded systems, though.

Best regards,

Score: 0 | 11 months ago | 1 reply

A lot of talk about vulnerabilities, but how do we map this to risk? I.e., can a protocol with a technical vulnerability still be used if the risk is assessed as low?

Aljoscha Lautenbach (Speaker)
Score: 0 | 11 months ago | no reply


Well, that is a completely different topic. What you want to do in that case is a threat and risk analysis for the system in question, which you then use to figure out which risks are acceptable and which risks need mitigation in the form of security controls. For example, if you read the NIST recommendations on key management, for most key types the recommendation is to switch to new keys after about two years, if memory serves. But in an embedded system that operates offline, that is typically not possible. So by carefully choosing a reasonably future-proof algorithm and a corresponding key length for the period of operation you are looking at, this risk can be mitigated but not eliminated.

More to your point, if the vulnerability in question is deemed to have acceptable risk, of course you can use it. But it is up to you and the stakeholders to define what acceptable risk is; in some contexts a breach of confidentiality is not a big deal because it is integrity that matters, which is quite common in control systems. In other contexts a breach of confidentiality can be unacceptable.

Best regards,

Score: 3 | 11 months ago | 1 reply

Thank you for your talk.
You say that we should prefer /dev/random to /dev/urandom, but AFAIK OpenSSL reads from /dev/urandom.
Thomas Pornin says here [1] that /dev/urandom should always be preferred, and he also refers to [2]:

The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred; the only instant where /dev/urandom might imply a security issue due to low entropy is during the first moments of a fresh, automated OS install; if the machine booted up to a point where it has begun having some network activity then it has gathered enough physical randomness to provide randomness of high enough quality for all practical usages (I am talking about Linux here; on FreeBSD, that momentary instant of slight weakness does not occur at all). On the other hand, /dev/random has a tendency of blocking at inopportune times, leading to very real and irksome usability issues. Or, to say it in less words: use /dev/urandom and be happy; use /dev/random and be sorry.

[1] https://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
[2] https://www.2uo.de/myths-about-urandom/
What is your take on that?

Aljoscha Lautenbach (Speaker)
Score: 1 | 11 months ago | no reply

Hi dannas,

that is a very fair point. I was debating with myself before the talk whether I should remove the /dev/[u]random example or not, obviously I made the wrong choice. The point is, it depends on when you need your random data, and what you are using it for. I think the second link you provided explains one of the key points very well towards the end: "Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer. FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again."

In practice, after you boot you need to make sure that you seed your PRNG appropriately, but once the PRNG has been properly seeded, it is ok to use it. So I agree that /dev/urandom is not inherently insecure and often it is what you want to use. My apologies for choosing a bad example there.
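As a practical note on the "properly seeded OS CSPRNG" point: in application code you rarely read /dev/urandom directly. For instance, in Python the stdlib `secrets` module draws from the OS CSPRNG (os.urandom) and is the right tool for keys and tokens, unlike the non-cryptographic `random` module:

```python
import secrets

# Both draw from the kernel CSPRNG (/dev/urandom on Linux):
session_token = secrets.token_urlsafe(32)  # URL-safe string, e.g. for session IDs
key_material = secrets.token_bytes(32)     # 256 bits of raw key material

assert len(key_material) == 32
# Do NOT use the `random` module for this; it is a seeded,
# reproducible Mersenne Twister, not a CSPRNG.
```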

Best regards,

Score: 0 | 11 months ago | 1 reply

Is it safe to stick to Thomas Ptacek's Cryptography Right Answers [1], or should I be looking elsewhere for recommendations on cryptographic algorithm choices?
Encrypting data: KMS or XSalsa20+Poly1305
Symmetric Key Length: 256-bit keys
Symmetric signatures: HMAC
Hashing Algorithms: SHA-2
Random IDs: 256-bit random numbers from /dev/urandom
Password Handling: scrypt, argon2, bcrypt
Asymmetric encryption: NaCl/libsodium (box / crypto_box)
Asymmetric signatures: NaCl or Ed25519
Website security: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt
Online backups: Tarsnap
[1] https://latacora.micro.blog/2018/04/03/cryptographic-right-answers.html
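The "Password Handling: scrypt, argon2, bcrypt" line from the list above can be sketched with the Python standard library alone, since hashlib ships scrypt when CPython is built against OpenSSL 1.1+. A minimal salt-and-verify sketch (the cost parameters n=2**14, r=8, p=1 are a commonly cited interactive-login baseline, not a recommendation from this thread):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Derive a 32-byte scrypt hash; returns (salt, digest) for storage."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong password", salt, stored)
```

The per-user random salt and the constant-time comparison are the two details most often forgotten when doing this by hand.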

Score: 1 | 11 months ago | no reply

Dannas, I'm a fellow attendee (I don't work for ARM or wolfSSL), but I think it depends on your application. In my experience the Latacora recommendations are great, but in embedded systems many of these options and related software packages are either too big or have dependencies that are difficult to integrate into an embedded binary. That said, for something like SSL/TLS, wolfSSL implements many of the functions/algorithms listed by Latacora and is integrated into the Keil (MDK 5) development environment as a software pack [0]. The size can be customized, but estimates [1] are between 20 kB and 100 kB. Ultimately the size and functionality will depend on the device constraints (RAM/ROM) and the application needs.
[0] wolfSSL and Keil
[1] wolfSSL Overview