Life at the edge

The edge is where things are most interesting. You get the best view, but there’s a risk of falling off.

In reality, though, there are few hard edges; most things are sloped to some degree, but there’s a cut-off point in our perception where we interpret something as a cliff rather than as something we’d roll cheese down to chase. Most scenarios are covered (or can at least be partially understood) by normal distribution curves (bell curves, if you will). Another way to think of this, and possibly a more common one in the IT world, is the Pareto principle. Even without knowing the details, pretty much everyone seems to know the underlying 80:20 rule and the associated distributions (land ownership, wealth, effort versus results), along with a basic understanding of marginal utility (an extra hundred pounds is more valuable to someone who has one hundred pounds than to someone who has a thousand).

These are incredibly useful when designing any system – whether security, economic or political – because they tell us we can achieve 80% of the results with 20% of the effort and cost. So long as we do that, it may well be good enough. Useful as this is, it can be an insidious process – the problem comes at the back end. Once we’ve achieved 80% of the results, we need to put in four times as much effort to achieve the rest (and of course the rule applies recursively – the last one percent of anything is the most difficult).
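To make the arithmetic concrete, here’s a minimal sketch in Python – the round numbers are illustrative, not measured:

```python
# Minimal sketch of the 80:20 arithmetic above; the figures are
# illustrative round numbers, not measurements.
total_effort = 100.0

effort_first_80pc = 0.2 * total_effort  # 20 units of effort buys 80% of results
effort_last_20pc = 0.8 * total_effort   # the remaining 20% costs the other 80

print(effort_last_20pc / effort_first_80pc)  # -> 4.0: four times the effort
```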

Two areas that often overlap – security and legislation – are great examples of where this can, and does, go wrong. Take a simple example: a company wants to block employees from accessing Facebook during working hours. A block can be implemented in many technical ways, requiring almost zero effort in most cases; but add an exception for someone who actually needs to work with the site and the effort increases. Even worse, we then realise that Facebook isn’t a problem any more because everyone has moved to Diaspora, or people are using Twitter. Perhaps a more stringent policy on social media is needed – so let’s start by defining what that is. Regardless, people can use anonymous proxies, SSH tunnels or simply their phones to reach the services anyway. We quickly escalate from a simple policy with an accompanying solution to something almost impossible to maintain, costly and ultimately ineffective.
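As a hedged illustration of how that creep starts, here’s a toy version of such a filter – every domain, group and structure here is invented, not any real product’s API:

```python
# A toy web filter showing how exceptions accumulate; all names are
# invented for illustration.
BLOCKED_DOMAINS = {"facebook.com"}

# First marketing needs access, then recruitment, then...
EXCEPTIONS = {
    ("facebook.com", "marketing"),
    ("facebook.com", "recruitment"),
}

def is_allowed(domain: str, user_group: str) -> bool:
    """Allow anything not blocked, plus any (domain, group) exception."""
    if (domain, user_group) in EXCEPTIONS:
        return True
    return domain not in BLOCKED_DOMAINS

print(is_allowed("facebook.com", "marketing"))    # True
print(is_allowed("facebook.com", "engineering"))  # False
print(is_allowed("twitter.com", "engineering"))   # True -- not covered at all
```

None of this touches proxies, tunnels or phones; each new service or exception means another rule, and the list only ever grows.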

We see these issues all the time in our legislation (here’s a good example from the USA), tax codes and security implementations. From a pure security point of view, this explains why things are so reactive and why we end up with the “whack-a-mole” security implementations we have. This is where the edge is important.

Security decisions taken for the majority will need exceptions – even something relatively simple, such as configuring an AV solution, has many options; one size doesn’t fit all, and many security professionals choose not to run AV at all. To follow our analogy, some people want to lean over and look down; others want to be held back. Appetite for risk, availability of resources and many other factors will affect our security decisions, but building flexibility in is vital.

Stepping too close isn’t big, clever or usually recommended; but there are a number of situations where it’s required, and our security processes and solutions should bend to accommodate those cases where they can.

McKinnon

Gary Mckinnon Extradition To US Blocked By UK Home Secretary | Techdirt

I was planning to make only a quick comment on the Gary McKinnon case, but the linked Techdirt article is one of the few I’ve seen that actually makes most of the valid points about this whole mess, so it deserves a special mention.

The question here is not whether McKinnon is criminally culpable for his actions (that can be established later), but whether it’s right to send him to stand trial in a country he’s never set foot in when he already falls within the jurisdiction of UK courts and is obviously suffering health issues. I made a more flippant point previously on questions of jurisdiction in these types of cases: assuming the act is a crime in the place where a person is physically located, it’s best handled in that country. To do otherwise means we can end up in a situation where a crime is committed simultaneously in multiple territories (think of accessing data from cloud-based services), or where someone doesn’t know where they have committed a crime, or even whether their actions are illegal in that location. That’s no way for the law to work. Extradition is a perfectly valid process where a person has left the prosecuting jurisdiction, and it certainly has a place in the world; just not here.

I hope that, whatever the truth here, justice prevails, and I believe this is a good step towards making that happen. It also seems we have a chance of doing something about the one-sided extradition process we have with the USA.

As an aside, I was forced to draw parallels with recent extradition cases (Abu Hamza et al.) and to reconcile differing views. I think there are some valid points in that discussion, but ultimately those cases turn on questions of citizenship and the nation’s duty of care to its people; more importantly, in those cases it is a fact that no trial could ever have happened here in the UK – extradition was required for any real criminal proceedings.

Security as an advantage

This week has seen a lot of activity in the security world around one of the largest companies in Britain – Tesco. What’s unusual about this, certainly compared to most “security” news, is that there has been no notified data breach. Efforts by Troy Hunt in particular (well documented at his web site – Lessons in website security anti-patterns by Tesco) have identified a number of potential security issues with Tesco’s online presence.

Tesco have made some responses (additional coverage at SC Magazine) and I’m sure we’ll see more news on this.

Tesco aside, what this highlights is that most people aren’t aware of what security is in place, or should be in place, for their online transactions. Not everyone has the time, ability or stubbornness of someone like Troy to investigate, and to follow through with enough knowledge to get past the anodyne responses. This is why a knowledgeable, semi-independent security assessment is something every organisation should commission. That’s not to denigrate the fine people who work at Tesco – all of us sometimes need an extra set of eyes and ears, if only to challenge assumptions. Luckily, here, the problems have been identified before there’s a serious issue.

One of the basic issues here is that security is hard – even if everything has been done “right”, it may still lead to a problem. This is one of the reasons it’s good advice for users to use different passwords – even if you trust the people you give a password to, you can never be sure it won’t leak. If you use the same username and password combination on multiple sites (or worse, for your e-mail account itself), then any password leak at one of them compromises a large amount of your online presence. Even a low-value breach (a blog, for instance) escalates if those same credentials are used at a shopping site that has your credit card number stored and allows quick purchasing.

Security is about layers of defence – not assuming that each layer will hold, but mitigating and minimising the risk if one doesn’t. This, incidentally, is one of the issues with the “padlock” icon in browsers – it gives a false sense of security. Users are one of those layers and should assume that whatever the provider has in place may not be enough…

One of the difficulties with any form of security is when it meets issues such as finance, usability, compliance or legislation head-on. The latter two in particular are insidious, often being used as a replacement for security (“we’ve complied with XYZ policy”) or even being antithetical to it. Especially in large organisations, the challenge of establishing a culture of good practice against those pressures is immense. There may even be good and acceptable reasons for what at first appears to be bad practice.

That said, I’m wondering if these types of events may be the trigger for security as a competitive advantage. Would a (non-security) person actually choose to shop online at one store over another due to security deficiencies? If not, at what point would that happen?

Needles and the weakest link

My Haystack: Is finding that one needle really all that important? (Hint: Yes it is.)

Ed Adams raises some good points in his article, specifically around the increase in coverage of breaches (I’m still not 100% sure there’s a true increase, rather than just more reporting) and the passive, reactionary response of “spend more on ‘x’ technology”. The reality, as he points out, is that there’s no way to guarantee the security of any system, and the analogy of a “needle in a haystack” is quite an interesting one. Although the focus is on application security, the principles are useful to us all.

Extending that somewhat, we can look at security as a race – the attacker is looking for the needles; you’re trying to find and remove them before he can get to them. Getting rid of the obvious needles is our first task, but no matter what we do, we can never be truly certain there are none left. All we can do is reduce the probability of someone else finding one to such a degree that they give up. This is a reason why regular testing is so important – how else does one get to the needles first?

Unfortunately, attackers tend to come in one of two types. Some are opportunistic and will move on to easier targets (compare someone checking cars for visible money or valuables); others are focused on a certain goal. Depending on which type of attacker we face, our level of “good enough” may change. Remember: you don’t need to outrun the lion, only the slowest of your friends…

Determined attackers also present another challenge. We often talk about weak links (another analogy) in a security process. What’s missed here is that there is always a weakest link – the television programme of the same name teaches us that even if everyone appears to be perfect, someone must still go. After removing (or resolving) that link, another becomes the weakest, and so on. The lesson is that we can never stop improving things – as Ed wisely says, new attack surfaces will arise, situations will change and our focus will have to adapt.
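That loop is simple enough to sketch in code – the control names, scores and threshold below are entirely invented for illustration:

```python
# Fix the weakest control and something else becomes the weakest;
# scores are invented (higher = stronger).
controls = {"passwords": 2, "patching": 5, "user training": 3}
GOOD_ENOUGH = 7  # an assumed risk-appetite threshold

while min(controls.values()) < GOOD_ENOUGH:
    weakest = min(controls, key=controls.get)
    print(f"improving weakest link: {weakest}")
    controls[weakest] += 2  # an assumed improvement per cycle
```

The loop only terminates because we fixed a threshold; in reality, new attack surfaces keep knocking scores back down, so the cycle never truly ends.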

If we always see security as a cost of doing business, we can let it get out of control; building it into processes, training, applications and infrastructure will dramatically reduce that cost, but ultimately there’s a limit – it’s irrational to spend more on securing something than its value (whether that spend is on infrastructure, training or testing). This is why compliance and regulatory control has been such a boon for the security industry – it’s not perfect (by any means), but it focuses minds by putting a monetary value (in fines or reputation) on otherwise intangible assets.
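The classic way to put numbers on that limit is annualised loss expectancy (ALE = single loss expectancy × annual rate of occurrence). A minimal sketch, with every figure invented for illustration:

```python
# Classic risk arithmetic; all figures are invented for illustration.
asset_value = 50_000.0      # value of the asset being protected
exposure_factor = 0.4       # fraction of value lost per incident
incidents_per_year = 0.5    # expected frequency of the incident

sle = asset_value * exposure_factor  # single loss expectancy: 20,000
ale = sle * incidents_per_year       # annualised loss expectancy: 10,000

control_cost = 15_000.0     # annual cost of a proposed countermeasure
if control_cost > ale:
    print("Irrational: the control costs more than the expected annual loss")
```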

Of course, the attacker has a similar constraint – there’s no point in spending more to acquire data than its value, but this is more difficult to quantify and shouldn’t be relied on from a defensive point of view; motives can be murky, especially if they’re in it for the lulz.

Hacking Tools and Intent

EU ministers seek to ban creation of ‘hacking tools’

As I read this story on various sites this morning, I was reminded of the old quote – “If cryptography is outlawed, only outlaws will have cryptography”. Attempting to ban tools that may be used for “hacking” is quite extraordinary – as with many of these things, the devil is in the details.

Generally, tools have multiple uses – the tool itself does not determine intent. Outside the IT world, people may own guns for hunting, sport or even self-defence. The argument that every gun is bad is quite specious (no matter what an individual’s thoughts on the matter are).

The same is true of a security tool – things that may be used to secure may also be used to break in, whether in the physical world or in IT. The comment in the article regarding password cracking/recovery tools is a good one, but the situation is exacerbated when we look at testing.

The whole point of security testing is to check whether the “bad guys” can perform certain activities, but under a controlled and known scenario – the risk can be understood without the impact of real malicious activity. There’s a simple question of how a valid test can be done without using tools designed for “hacking”.
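To see how thin the line is, consider this hypothetical few lines of Python (not any particular tool): pointed at your own host it’s an availability check; pointed at someone else’s it’s reconnaissance. The code is identical – only the intent differs.

```python
# The same few lines are a health check or a reconnaissance probe,
# depending entirely on whose host you point them at.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 22))  # checking your own server is maintenance
```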

It’s already a criminal offence in the UK to supply or make something that is likely to be used in offences – including “hacking”, DoS (denial of service) attacks or data modification – under the Computer Misuse Act 1990 (“CMA”), as amended. Unfortunately, this leaves a lot open to interpretation and confusion. There have been successful prosecutions under the act, but they include such crimes as sending too many emails, thereby creating a DoS attack (in the case in question, R v Lennon, the defendant had deliberately set out to achieve this, but the tool involved was merely one designed to send emails).

Although not directly in the CMA, the prosecutor’s guidance notes do point out that a legitimate security industry exists that may create and publish such applications (“articles”, in the wording of the act) and that tools may have dual uses. This creates a situation where a tool may be created and possessed by someone for legitimate reasons, distributed to a second person for apparently similar reasons, but then used by that second person for unlawful purposes (for which they may then be prosecuted).

Based on this guidance, things may not be all bad, but there’s still a lot of work to do in legitimising the concept of testing in law. If correctly written and applied, this may actually help, and an EU-wide standard may reduce some of the problems caused by discrepancies and difficulties of interpretation across member states.

Data in the cloud

Who cares where your data is? – Roger’s Security Blog – Site Home – TechNet Blogs

There are many issues with data security as soon as we start discussing the “cloud”. Handing control of your data to third parties should obviously take more thought than it usually does. One thing people forget is to think about the data itself – who owns and controls the email addresses of your customers? The moment they’re on Salesforce (to pick an example), Salesforce has that data – very few people encrypt the data they hand to their service providers; the data and the service become conflated.
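One way to keep that control is to encrypt before the data ever reaches the provider. A minimal sketch using the third-party cryptography package – the record is illustrative, and key management (the genuinely hard part) is glossed over here:

```python
# Encrypt client-side so the provider only ever holds ciphertext.
# Requires the third-party `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # must live somewhere the provider can't reach
f = Fernet(key)

record = b"customer@example.com"   # an illustrative customer record
token = f.encrypt(record)          # this is all the provider ever sees

assert f.decrypt(token) == record  # only the key holder recovers the data
```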

Roger picks out a great point that brought back my favourite argument against cloud services. At a basic level, the cloud does not exist – what does exist are servers and drives containing data. At any time you, as a “cloud” customer, have no idea where your data resides – is it in the USA (a country that searches laptops crossing its border), is it in China, is it in Libya? Only the “cloud” provider may know. This may seem like a superficial point, but something very serious lies beneath it: different countries retain their own controls over what is acceptable. Whilst we in the UK and Europe think that online gambling is fine, it’s not in the USA – what happens if a “cloud” provider puts data relating to such activities into the USA?

Just to drive home the point – “cloud” customers also have no idea whose data resides on the same hardware as their own. If a criminal or terrorist organisation (in a particular country; obviously definitions vary wildly) happens to share the same services as you, what are the chances your data could be raided and analysed along with theirs?

All these points serve to remind us that the cloud does not exist. What does exist is a series of buildings, housing servers, that happen to have Internet connections. There’s a huge difference.

Password Security

Sony hack reveals password security is even worse than feared • The Register

I was going to comment on something like this after my previous posts highlighting the generally poor user security awareness across both the enterprise and consumer spaces. The article is useful as an indicator of where the problem lies, and it gives me a chance to make a couple of additional comments.

The common advice regarding passwords is to:

  • keep them complex;
  • change them regularly;
  • use a unique one for each application/system;
  • don’t write them down.

The obvious problem is that the more we follow the first three of those points, the more likely people are to need some easy way of remembering their passwords – writing them down, or otherwise documenting them can be a good way of doing that.

There are better solutions – SSO (single sign-on) or password lockers (typically protected with a master password) can help with this – and even the option to remember a password in a browser can help (note that, conceptually, this is no different from writing it down, but it is likely to be less obvious or otherwise protected).

Attacks against password stores, as mentioned, provide some very interesting points of analysis – in particular, breaches of stores at different sites/hosts can be compared to measure how common password reuse is, which provides a good case against the practice. It’s an example anyone can see of why reuse is a bad idea.

On the other hand, it’s perfectly reasonable to argue that it shouldn’t matter – if user credentials were stored securely, we wouldn’t have the information to even begin this analysis. Attempting to educate users of a system in security is pointless if the admins and owners of that system can’t do the basics. Add to that the sometimes conflicting messages and the lack of sense shown by some security wonks, and it’s no wonder users are the weak link in the process.
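For reference, “stored securely” can be as simple as a salted, deliberately slow hash rather than the plain text. A minimal sketch using the Python standard library – the iteration count is an assumption; tune it to current guidance:

```python
# Salted, slow password hashing with the standard library; the iteration
# count is an assumption -- adjust it to current guidance.
import hashlib
import hmac
import os

ITERATIONS = 200_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user, so identical passwords differ
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Because each salt is unique, the same password stored at two different sites produces two unrelated digests – exactly the property that would have made the cross-site reuse analysis above impossible.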

Security teams would do well to get the basics right in systems as well as demanding more from people. Humans are the problem, but technical restrictions on passwords are not the place to start. No matter how simple or oft-used a password is, the simplest attacks are against passwords that are given away – either electronically (such as via phishing) or through bribery, sometimes for as little as a bar of chocolate.

Of course, even aside from bribery there are other ways of getting a password, no matter what security is put in place.

[Image: the xkcd “Security” comic]

(from the always excellent xkcd comic). This concept is traditionally known as a rubber hose attack and is the best indication of the weakness of the flesh in security.

Recent breaches

Stolen RSA data used to hack defense contractor • The Register

There’s a lot more analysis out there today on the Lockheed Martin hack that has led to a recall of RSA SecurID tokens. Anyone using them should demand replacements or, better still, alternatives. As the article notes, it’s difficult to trust RSA now…

It’s interesting how the use of a single security product has contributed so severely to a breach. Defence in depth seems to have completely failed here. Perhaps this is a case of putting too much faith in a single product – almost along the lines of “we’re safe; we have a firewall”.

A significant point here is how organisations are entwined, so that a breach at one company can have serious implications for others. We tend to see this more with business partners (extranet services, VPNs etc.), where choices are made to allow third-party access to data, but this case blurs the distinction: security providers should be treated as business partners too.

Many large companies have clauses in contracts providing the right to audit and test partner facilities – this can include running pen tests, or insisting that a validated third party does so – in essence, the security domain is extended to include the wider community. With the trends we’re seeing as the industry reacts to changing business practices, I believe the auditing of external organisations will become more prevalent.

This could be a watershed for how companies treat their security providers as well as their business partners. For those on the other side I can also see a competitive advantage in security – something that I hope will become relevant, especially in “cloud” based services.

Security as a feature

Apple iOS: Why it’s the most secure OS, period

Some interesting analysis on why the iOS platform can be considered secure – largely as a result of the level of control Apple maintains over the hardware, OS and available applications for commercial purposes, not because of any inherent choice made for the sake of security.

To me, this opens up some interesting questions about the security design of the variety of programmable machines we now use, ranging from true “general purpose computers” to specific-function devices, and where a “phone” sits on that spectrum. We’ve come a long way in the mobile device world in a very short time – modern phones have more in common with our desktop computers than with first-generation mobiles.

One potential cause of security issues (particularly in embedded or specialist systems) is allowing the device to do too much (a basic betrayal of the principle of least privilege). If I’m making, for example, a domestic refrigerator, I probably don’t need to include an HTTP server – unless I want to start adding “features” such as inventory checking over a network (because, y’know, it’s easier than opening the door). The issue then becomes that the HTTP server in question is configured by people who manufacture ’fridges, not by experts in Apache (or IIS!).
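To underline how little it takes, here’s roughly what such an embedded “feature” amounts to – a stock Python server with no authentication and no TLS, listening on every interface; exactly the kind of default a non-expert might ship:

```python
# How little it takes to bolt a web server onto a device: no auth, no TLS,
# listening on all interfaces -- the kind of default a non-expert ships.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
server.serve_forever()  # serves the current directory to anyone who connects
```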

Phones (and indeed tablets) are hybrid devices – more than a ’fridge, but not as flexible as a laptop. That’s mostly a choice of the OS provider, though, and we see easy-to-use hacks (such as jailbreaking) to extend that flexibility. The problem is that, in almost all security systems, the weak link is the human element – by granting that greater flexibility we will see security issues: witness the default “root” password on iPhones, or the ability to run applications that have not been vetted. For those of us who are advocates of open systems this can be a dilemma – how can we give freedom, but ensure that the stupid edge of the user base is properly blunted?

This is worse when we consider what “security” means to the vendors rather than the owners of the device – preventing people from playing unauthorised media (DRM) or using functions that would “inhibit” revenue (smartphone data tethering).

This brings us back to the question of who controls the update process for a device and when those updates are released. The great success of Apple has been to remove control from the carriers – they deliver the update to your computer and the device is updated when you sync media. It’s elegant, and it means more people run the latest versions. Other devices do not fare so well – over-the-air delivery is one option, but it’s potentially less reliable, uses precious mobile bandwidth and pushes your phone back to being at the mercy of whoever controls that channel.

One other point about patching: whilst it’s almost always better to patch, there have been plenty of examples where patches have caused more problems – new security flaws, unexpected changes in functionality, or a device rendered unusable. It has taken Microsoft many years to establish a process that works, where Windows users can be kept up to date without too much worry – and even so, it’s always possible to roll back. Would that be possible with an over-the-air update that somehow left the device unable to reconnect? Not a likely scenario, but something to consider.

As we get more and more connected devices, understanding the software used and its potential vulnerabilities will become more important. How quickly and easily we can update those devices to correct errors is a vital part of the system, but it will never be the most important part – the ability to work around the security issues of the human element will be.

Ultimately, Apple may have the most “secure” OS, but that’s because it’s one of the most locked down. Security is easy to achieve on any system – switch it off, lock it away somewhere impenetrable and don’t allow any inputs or outputs; making a system usable and secure is rather tougher.

Linking

Feds Really Do Seem To Think That Linking To Infringing Content Can Be A Jailable Offense | Techdirt

The story reminded me of a point I made a while ago – regardless of anything else, you (my reader) and I (as the author) have absolutely no idea what will be displayed if you click on that link. At the time of writing, using the particular DNS servers provided on the wireless network I’m on, it’s an interesting story about how linking to infringing content shouldn’t really be an offence. Of course, given the way the Internet works, that may not be true for you (your own hosts file may resolve the name to a completely different address), and I suppose the people at Techdirt could change the story at any time, which would make this post somewhat nonsensical.

There’s a current trend of using URL shorteners, which seems to be driven by the stupid and arbitrary 140-character limit on Twitter (itself derived from the limit on SMS message length, despite the fact that every modern phone can concatenate messages into one, making the whole thing even more absurd – but I digress…). These introduce another level of abstraction and make it utterly impossible to know what will happen when a link is clicked. Here’s an example, just to drive the point home…

http://bit.ly/f1dzwo

For a start, notice the ccTLD is .ly. That means this service is controlled by Libya – so obviously nothing to worry about there. Secondly, you don’t know what site it links to. Thirdly, you don’t know how the people who control your DNS servers will resolve that name to an address. Fourthly, you don’t know what the HTTP server at that address will actually serve as content (malware, porn, movies, live sport). Yet people click these things all the time.
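For the cautious, it’s possible to peek at where a short link points without visiting it by asking only for the redirect. A minimal sketch using the third-party requests package – and note that even this trusts your own DNS to resolve bit.ly honestly:

```python
# Ask the shortener where it redirects, without following the redirect.
# Uses the third-party `requests` package.
import requests

resp = requests.head("http://bit.ly/f1dzwo", allow_redirects=False, timeout=5)
print(resp.status_code, resp.headers.get("Location"))
```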

There’s a major disconnect between the way the law wants to work and the way that things actually do work.