Needles and the weakest link

My Haystack: Is finding that one needle really all that important? (Hint: Yes it is.)

Ed Adams raises some good points in his article, specifically around the increase in coverage of breaches (I’m still not 100% sure there is a true increase, rather than just more reporting) and the passive, reactionary response of “spend more on ‘x’ technology”. The reality, as he points out, is that there’s no way to guarantee the security of any system, and the analogy of a “needle in a haystack” is an interesting one. Although the focus is on application security, the principles are useful to us all.

Extending that somewhat, we can look at security as a race – the attacker is looking for the needles, and you’re trying to find and remove them before he does. Getting rid of the obvious needles is our first task, but no matter what we do we can never be truly certain there are none left. All we can do is reduce the probability of someone else finding one to the point where they give up. This is one reason why regular testing is so important – how else does one get to the needles first?

Unfortunately, attackers tend to come in one of two types. Some are opportunistic and will move on to easier targets (think of someone checking parked cars for visible money or valuables); others are focused on a specific goal. Depending on which type of attacker we face, our level of “good enough” may change. Remember, you don’t need to outrun the lion, only the slowest of your friends…

Determined attackers pose another challenge. We often talk about weak links (another analogy) in a security process. What’s missed here is that there is always a weakest link – the television programme of the same name teaches us that even if everyone appears to be perfect, someone must still go. After removing (or resolving) that link, another becomes the weakest link, and so on. The lesson is that we can never stop improving things – as Ed wisely says, new attack surfaces will arise, situations will change and our focus will have to adapt.

If we always see security as a cost of doing business we can let it get out of control; building it into processes, training, applications and infrastructure will dramatically reduce that cost, but ultimately there’s a limit – it’s irrational to spend more on securing something than it is worth (whether that spend goes on infrastructure, training or testing). This is why compliance and regulatory control has been such a boon for the security industry – it’s not perfect (by any means) but it focuses minds by putting a monetary value (in fines or reputation) on otherwise intangible assets.
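To make that trade-off concrete, here’s a rough back-of-the-envelope sketch in Python. Every figure is invented purely for illustration – the point is simply that the expected loss avoided has to exceed the cost of the control, and that fines or reputational damage shift the sums.

```python
# Rough back-of-the-envelope sum: is a security control worth its cost?
# Every figure here is invented purely for illustration.

asset_value = 50_000             # what the asset is worth to the business
breach_probability = 0.20        # estimated chance of a breach per year, without the control
expected_annual_loss = asset_value * breach_probability          # 10,000

control_cost = 12_000            # annual cost of the proposed control
residual_probability = 0.05      # estimated breach probability with the control in place
loss_avoided = expected_annual_loss - asset_value * residual_probability   # 7,500

# Spending 12,000 to avoid an expected 7,500 of loss is irrational --
# unless fines or reputational damage raise the effective asset value,
# which is exactly the effect regulation has.
print(f"Loss avoided: {loss_avoided:,.0f} vs control cost: {control_cost:,.0f}")
```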

Of course, the attacker has a similar constraint – there’s no point in spending more to acquire data than its value, but this is more difficult to quantify and shouldn’t be relied on from a defensive point of view; motives can be murky, especially if they’re in it for the lulz.

Data in the cloud

Who cares where your data is? – Roger’s Security Blog – TechNet Blogs

There are many issues with data security as soon as we start discussing the “cloud”. Handing control of your data to third parties should quite obviously take more thought than it usually does. One thing people forget to think about is the data itself – who owns and controls the email addresses of your customers? The moment they’re in Salesforce (to pick an example), Salesforce has that data – very few people encrypt the data they give to their service providers; the data and the service somehow become conflated.
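As a minimal sketch of what “encrypt before you hand it over” looks like, the snippet below uses the Python cryptography library’s Fernet recipe; the upload call is a hypothetical placeholder, not any provider’s real API.

```python
# Minimal sketch: encrypt customer records client-side so a third-party
# service only ever stores ciphertext. Requires the 'cryptography' package;
# upload_to_provider() is a hypothetical placeholder, not a real API.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key on infrastructure you control
fernet = Fernet(key)

record = b"alice@example.com,+44 1234 567890"
ciphertext = fernet.encrypt(record)

# upload_to_provider(ciphertext)             # the provider never sees plaintext
assert fernet.decrypt(ciphertext) == record  # only possible with the key you hold
```

The obvious trade-off is that a provider holding only ciphertext can’t search, index or otherwise process the data for you – which is often exactly why people don’t do this, and why the data and the service end up conflated.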

Roger picks out a great point which brought back to me my favourite argument against cloud services. At a basic level, the cloud does not exist – what does exist are servers and drives containing data. At any time you, as a “cloud” customer, have no idea where your data resides – is it in the USA (the country that searches laptops coming across the border), is it in China, is it in Libya? Only the “cloud” provider may know. This may seem like a superficial point, but something very serious lies beneath it: different countries retain their own controls over what is acceptable. Whilst we in the UK and Europe think that online gambling is fine, it’s not in the USA – what if a “cloud” provider puts data relating to such activities into the USA?

Just to drive home the point – “cloud” customers also have no idea whose data resides on the same hardware as their own. If a criminal or terrorist organisation (in a particular country; obviously definitions vary wildly) happens to share the same services as you, what are the chances of your data being seized and analysed along with theirs?

All these points serve to remind us that the cloud does not exist. What does exist are a series of buildings, housing servers, that happen to have Internet connections. There’s a huge difference.

Security as a feature

Apple iOS: Why it’s the most secure OS, period

Some interesting analysis on why the iOS platform can be considered secure – largely as a result of the level of control Apple maintains over the hardware, OS and available applications. That control exists for commercial purposes, not because of any inherent choice made for the sake of security.

To me, this opens up some interesting questions about the security design of the variety of programmable machines we now use, ranging from true “general purpose computers” to specific-function devices, and where a “phone” sits on that spectrum. We’ve moved a long way in the mobile device world in a very short time – modern phones have more in common with our desktop computers than with first-generation mobiles.

One potential cause of security issues (particularly in embedded or specialist systems) is allowing the device to do too much (a basic betrayal of the principle of least privilege). If I’m making, for example, a domestic refrigerator, I probably don’t need to include an HTTP server – unless I want to start adding “features” such as inventory checking over a network (because, y’know, it’s easier than opening the door). The issue then becomes that the HTTP server in question is configured by people who manufacture ‘fridges and not by experts in Apache (or IIS!).
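For illustration only, here’s what a least-privilege version of that “feature” might look like: a read-only, single-endpoint server that answers one GET request and nothing else. The address, port and inventory contents are invented; the point is how little the device actually needs to expose.

```python
# Illustrative least-privilege take on the 'fridge inventory "feature":
# one read-only endpoint, no other methods, bound to a single LAN address.
# The address, port and inventory contents are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"milk": 1, "eggs": 6}    # stand-in for whatever the appliance tracks

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/inventory":
            self.send_error(404)
            return
        body = json.dumps(INVENTORY).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    # No do_POST / do_PUT / do_DELETE defined: write requests are rejected outright.

if __name__ == "__main__":
    # Bind to one LAN interface only, never to every address on the machine.
    HTTPServer(("192.168.0.10", 8080), InventoryHandler).serve_forever()
```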

Phones (or indeed tablets) are hybrid devices – more than a ‘fridge, but not as flexible as a laptop – though that’s mostly a choice made by the OS provider, and we see easy-to-use hacks (such as jailbreaking) to extend that flexibility. The problem is that, in almost all security systems, the weakest link is the human element – grant that greater flexibility and security issues follow, such as the default “root” password on iPhones, or the ability to run applications that have not been vetted. For those of us who are advocates of open systems this can be a dilemma – how can we give freedom, but ensure that the stupid edge of the user-base is properly blunted?

This is worse when we consider what “security” means to the vendors rather than the owners of the device – preventing people from playing unauthorised media (DRM) or using functions that would “inhibit” revenue (smartphone data tethering).

This brings us back to the question of who controls the update process for a device and when those updates are released. The great success of Apple has been to take control away from the carriers – they deliver the update to your computer and the device is updated when you sync media – it’s elegant and means that more people have the latest versions. Other devices do not fare so well – over-the-air delivery is one answer, but it’s potentially less reliable, uses precious mobile bandwidth and puts your phone back at the mercy of whoever controls that channel.

One other point about patching: whilst it’s almost always better to patch, there have been plenty of examples where a patch has caused more problems – from new security flaws to unexpected changes in functionality to rendering a device unusable. It’s taken Microsoft many years to establish a process that works, where Windows users can be kept up to date without too much worry – and even so, it’s always possible to roll back. Would that be possible with an over-the-air update that somehow renders the device unable to reconnect…? Not a likely scenario, but something to consider.

As we get more and more connected devices, understanding the software they run and its potential vulnerabilities will become more important. Being able to update them quickly and easily, correcting the errors, is a vital part of the system, but it will never be the most important part – the ability to work around the security issues of the human element will be.

Ultimately, Apple may have the most “secure” OS, but that’s because it’s one of the most locked down. Security is easy to achieve on any system – switch it off, lock it away somewhere impenetrable and don’t allow any inputs or outputs – making it usable and secure is slightly tougher.