Security as a feature

Apple iOS: Why it’s the most secure OS, period

Some interesting analysis on why the iOS platform can be considered secure – largely as a result of the level of control Apple maintains over the hardware, the OS and the available applications. That control exists for commercial reasons, not because of any deliberate choice made for the sake of security.

To me, this opens up some interesting questions about the security design of the variety of programmable machines we now use, ranging from true “general purpose computers” to specific-function devices, and about where a “phone” sits on that spectrum. We’ve moved a long way in the mobile device world in a very short amount of time – modern phones have more in common with our desktop computers than with first-generation mobiles.

One potential cause of security issues (particularly in embedded or specialist systems) is allowing the device to do too much (in a basic betrayal of the principle of least privilege). If I’m making, for example, a domestic refrigerator, I probably don’t need to include an HTTP server, unless I want to start adding “features” such as inventory checking over a network (because, y’know, it’s easier than opening the door). The issue then becomes that the HTTP server in question is configured by people who manufacture ‘fridges and not by experts in Apache (or IIS!).
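
As a rough sketch of what least privilege might look like here – entirely hypothetical, with the /inventory endpoint, the sample data and the address all made up for illustration – the fridge could expose a single read-only resource on one local address and nothing else, rather than shipping a general-purpose web server for the manufacturer to configure:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

INVENTORY = {"milk": 1, "eggs": 6}  # placeholder data for illustration


class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve exactly one read-only resource; everything else is refused.
        if self.path == "/inventory":
            body = json.dumps(INVENTORY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    # No do_POST/do_PUT/do_DELETE handlers: the device never accepts writes
    # over the network.


if __name__ == "__main__":
    # Bind to one private LAN address (an assumed example), never to 0.0.0.0.
    HTTPServer(("192.168.1.50", 8080), InventoryHandler).serve_forever()

The point isn’t the code, it’s the surface area: the fewer capabilities the device ships with, the less there is for a non-expert to misconfigure.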

Phones (or indeed tablets) are hybrid devices – more than a ‘fridge, but not as flexible as a laptop. That’s mostly a choice of the OS provider, though, and we see easy-to-use hacks (such as jailbreaking) that extend that flexibility. The problem is that, in almost all security systems, the weak link is the human – by granting that greater flexibility we will see security issues, such as the default “root” password on iPhones, or the ability to run applications that have not been vetted. For those of us who are advocates of open systems this can be a dilemma – how can we give freedom, but ensure that the stupid edge of the user-base is properly blunted?

This gets worse when we consider what “security” means to the vendors rather than to the owners of the device – preventing people from playing unauthorised media (DRM), or from using functions that might “inhibit” revenue (smartphone data tethering).

This brings us back to a point about who controls the update process for a device and when those updates are released. The great success of Apple has been to remove control from the carriers – they deliver the update to your computer and the device is updated when you sync media. It’s elegant, and it means that more people have the latest versions. Other devices do not fare so well – over-the-air delivery is convenient, but it is potentially less reliable, uses precious mobile bandwidth and pushes your phone back to being at the mercy of whoever controls that channel.

One other point about patching: whilst it’s almost always better to patch, there have been plenty of examples where a patch has caused more problems – from new security flaws and unexpected changes in functionality to rendering a device unusable. It’s taken Microsoft many years to establish a process that works, where Windows users can be kept up to date without too much worry – and even so, it’s always possible to roll back. Would that be possible with an over-the-air update that somehow renders the device unable to re-connect? Not a likely scenario, but something to consider.

As we get more and more connected devices, understanding the software they run and its potential vulnerabilities will become more important. How quickly and easily we can update that software to correct errors is a vital part of the system, but it will never be the most important part – the ability to work around the security issues of the human element will be.

Ultimately, Apple may have the most “secure” OS, but that’s because it’s one of the most locked down. Security is easy to achieve on any system – switch it off, lock it away somewhere impenetrable and don’t allow any inputs or outputs – making it usable and secure is slightly tougher.