December 2019

A recent study by KeyFactor has confirmed what many in the IoT industry have been pointing out for a while: a major source of insecurity in low-cost connected devices is the lack of access to entropy (i.e. random numbers) when generating supposedly unique security keys.

Of course there will also be poor code, and hence exploitable bugs, on many of these devices, especially if the code was written in a bug-prone language like C rather than something more suitable (but equally performant) like Rust, or if it was thrown together using free or cheap standard libraries that were never meant for commercial use in a hostile cryptographic environment.

But with weak security keys, the vulnerability is designed in rather than being there because of a coding error.

When supposedly randomly seeded keys turn out not to be, and an attacker can identify and challenge millions of connected devices at negligible cost, the probability of a successful exploit rises to an unacceptable level.
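To illustrate the failure mode, here is a minimal sketch (the device model and tick counter are hypothetical): two devices that seed a non-cryptographic PRNG from a predictable value, such as a boot-time tick count that is the same on every unit, derive identical "unique" keys.

```python
import random

def weak_keygen(boot_ticks: int) -> str:
    # Anti-pattern: seeding a deterministic PRNG (here, Python's
    # Mersenne Twister) with a predictable value. Never do this
    # for real key material.
    rng = random.Random(boot_ticks)
    return bytes(rng.getrandbits(8) for _ in range(16)).hex()

# Both devices power up with the same tick count, so both derive
# the same key -- exactly the collision problem described above.
device_a = weak_keygen(boot_ticks=0)
device_b = weak_keygen(boot_ticks=0)
assert device_a == device_b
```

An attacker who can guess or enumerate the seed space can regenerate every key in the fleet offline.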

And an exploited device can damage more than just its own functionality – it can compromise network security or be co-opted into a criminal botnet.

There are a couple of ways to address this issue:

One is to spend a bit more on the device so that it either has a unique key embedded in a dedicated secure component or else has a true source of entropy at hand (perhaps a component that reads data from a source of thermal noise) which it can use when it wakes up and generates its first keys.
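Where the platform does expose a real entropy source through its operating system, the fix on the software side is straightforward: draw key material from the OS cryptographic RNG rather than a hand-rolled PRNG. A minimal sketch, assuming a Python-capable device (the function name is illustrative):

```python
import secrets

def generate_key(length: int = 32) -> bytes:
    # The secrets module draws from the operating system's CSPRNG,
    # which is (re)seeded from hardware entropy where available.
    return secrets.token_bytes(length)

key = generate_key()
```

The crucial point is that the entropy comes from outside the program: a device whose OS pool is never properly seeded gains nothing from this, which is why the hardware entropy source matters.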

Another would be to pre-populate the device with highly secure unique keys before it leaves the manufacturing environment.  This has a small implication for manufacturing cost (though not for component cost), but needn't be onerous.  Many devices are powered up en masse for code loading and/or testing during manufacture, and the overhead of implanting a unique key into the new device at this point should be minimal.  Provided of course that a suitable source of unique keys exists in the first place.  There'd be no point in going to the trouble if the keys implanted at manufacture were not considerably more secure than those the device could generate locally.
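A factory provisioning step along these lines might look like the following sketch. All names here are illustrative assumptions: the key is generated off-device on a well-seeded factory host, then written to the unit over whatever interface the existing code-load/test pass already uses.

```python
import secrets

def provision_device(write_key_to_device) -> bytes:
    # Key material comes from the factory host's CSPRNG,
    # not from the (entropy-starved) device itself.
    key = secrets.token_bytes(32)
    # write_key_to_device is a stand-in for the programming
    # interface, e.g. the same JTAG/serial link used for code load.
    write_key_to_device(key)
    return key

# Usage with a stand-in for the device-programming interface:
stored = {}
provision_device(lambda k: stored.update(key=k))
```

In practice the factory host also needs to log which key went into which unit (for later onboarding) and to protect that record, which is part of the life-cycle concern discussed below.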

The first step, however, is for product developers to understand and recognise the problem, and factor it into their requirements from the start.

Then, get proper advice.  You should never attempt to create your own crypto (any more than you should attempt your own cardiac surgery).  The experts are better at this than the rest of us ever will be, and so unfortunately are the attackers.

Finally, ensure that security considerations span not just the product design but the whole device life cycle: manufacture, onboarding of new connected devices or users, in-field upgrades, possible ownership changes, right through to eventual device de-activation and disposal.

An IoT device has security liabilities that extend far beyond the device’s own functionality. Make sure your next implementation is fit for purpose.

Nobody wants to be a security case study.