McKinnon

Gary McKinnon Extradition To US Blocked By UK Home Secretary | Techdirt

I was planning on making a quick comment on the Gary McKinnon case, but the Techdirt article linked above is one of the few I’ve seen that actually makes most of the valid points about this whole mess, so it deserved a special mention.

The question here is not whether McKinnon is criminally culpable for his actions (that can be established later), but whether it’s right to send him to stand trial in a country he has never been to when he already falls within the jurisdiction of the UK courts and is obviously suffering health issues. I made a more flippant point previously on the questions of jurisdiction in these types of cases: assuming the act is a crime in the place where a person is physically located, it’s best handled in that country. To do otherwise means we can end up in a situation where a crime can be simultaneously committed in multiple territories (think of accessing data from cloud-based services), or where someone does not know where they have committed a crime, or even whether their actions are illegal in that location. That’s no way for the law to work. Extradition is a perfectly valid process in cases where a person has left the prosecuting jurisdiction, and it certainly has a place in the world; just not here.

I hope that, whatever the truth here, justice prevails, and I believe that this is a good step towards making that happen. It also seems that we have a chance of doing something about the one-sided extradition process we have with the USA.

As an aside, I was forced to draw parallels with recent extradition cases (Abu Hamza et al.) and reconcile differing views. I think there are some valid points in that discussion, but ultimately we are looking at questions of citizenship and the nation’s duty of care to those people. More importantly, in those cases it is a fact that no trial could ever happen here in the UK; extradition was required for any real criminal proceedings.

Needles and the weakest link

My Haystack: Is finding that one needle really all that important? (Hint: Yes it is.)

Ed Adams raises some good points in his article, specifically around the increase in coverage of breaches (I’m still not 100% sure whether there is a true increase, or just more reporting) and the passive, reactionary response of “spend more on ‘x’ technology”. The reality, as he points out, is that there’s no way to guarantee the security of any system, and the analogy of a “needle in a haystack” is quite an interesting one. Although the focus is on application security, the principles are useful to us all.

Extending that somewhat, we look at security as being a race – the attacker is looking for the needles, you’re trying to find them and remove them before he can get them. Getting rid of the obvious needles is our first task, but no matter what we do we can never be truly certain that there are none left. All we can do is reduce the probability of someone else finding one to such a degree that they give up. This is a reason why regular testing is so important – how else does one get to the needles first?

Unfortunately, attackers tend to come in one of two types. Some are opportunistic and will move on (compare someone checking out cars for visible money or valuables) to easier targets, others are focused on a certain goal. Depending on what type of attacker we face our level of “good enough” may change. Remember, you don’t need to outrun the lion, only the slowest of your friends…

Determined attackers also pose another challenge. We often talk about weak links (another analogy) in a security process. What’s missed here is that there is always a weakest link – the television programme of the same name teaches us that even if everyone appears to be perfect, someone must still go. After removing (or resolving) that link, another becomes the weakest, and so on. The lesson is that we can never stop improving things – as Ed wisely says, new attack surfaces will arise, situations will change and our focus will have to adapt.

If we always see security as a cost of doing business we can let it get out of control; building it into processes, training, applications and infrastructure will dramatically reduce that cost, but ultimately there’s a limit – it’s irrational to spend more on securing something than its value (whether that spend be on infrastructure, training or testing). This is why compliance and regulatory control have been such a boon for the security industry – it’s not perfect (by any means) but it focuses minds by putting a monetary value (in fines or reputation) on otherwise intangible assets.
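That limit can be made concrete with a back-of-the-envelope sketch of the standard annualised-loss arithmetic. All figures here are invented for illustration:

```python
# Illustrative risk arithmetic (all figures invented).
# Annualised loss expectancy = cost of one incident x expected incidents per year.
single_loss_expectancy = 50_000      # cost of one breach of this asset, in pounds
annual_rate_of_occurrence = 0.2      # expected once every five years
annualised_loss = single_loss_expectancy * annual_rate_of_occurrence

control_cost = 30_000                # proposed yearly spend on controls for this asset

print(annualised_loss)               # 10000.0
print(control_cost > annualised_loss)  # True: the spend exceeds the exposure
```

On these (made-up) numbers, £30k a year of controls against £10k a year of exposure is exactly the irrational overspend described above; a fine or reputational cost added to the loss figure is what shifts the balance.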

Of course, the attacker has a similar constraint – there’s no point in spending more to acquire data than its value, but this is more difficult to quantify and shouldn’t be relied on from a defensive point of view; motives can be murky, especially if they’re in it for the lulz.

Hacking Tools and Intent

EU ministers seek to ban creation of ‘hacking tools’

As I read this story on various sites this morning I was reminded of the old quote – “If cryptography is outlawed, only outlaws will have cryptography”. Attempting to ban tools that may be used for “hacking” is quite extraordinary – as with many of these things, the devil is in the details.

Generally, many tools have multiple uses – the tool itself does not determine intent. Outside of the IT world, people may own guns for hunting, sport, or even self-defence. The argument that every gun is bad is quite specious (no matter what an individual’s thoughts on the matter are).

The same is true of a security tool – things that may be used to secure, may also be used to break in, whether in the physical world, or in IT. The comment in the article regarding password cracking/recovery tools is a good one, but the situation is exacerbated when we look at testing.

The whole point of security testing is to check whether the “bad guys” can perform certain activities, but under a controlled and known scenario – the risk can be understood without the impact of real malicious activity. There’s a simple question of how a valid test can be done without using tools designed for “hacking”.

It’s already a criminal offence in the UK to supply or make something that is likely to be used in offences – including “hacking”, DoS (‘denial of service’) or data modification – under the Computer Misuse Act 1990 (‘CMA’) (as amended). Unfortunately this leaves a lot open to interpretation and confusion. There have been successful prosecutions under the act, but they include such crimes as sending too many emails, thereby creating a DoS attack (in the case in question, R v Lennon, the defendant had deliberately set out to achieve this, but the tool involved was merely one designed to send emails).

Although not directly in the CMA, the prosecutor’s guidance notes do point out that a legitimate security industry exists that may create and publish such applications (“articles” in the wording of the act) and that tools may have dual uses. This allows for a situation where a tool may be created and possessed by someone for legitimate reasons, distributed to a second person for apparently similar reasons, but then used by that second person for unlawful purposes (and they may then be prosecuted).

Based on this guidance, things may not be all bad, but there’s still a lot of work to do in legitimising the concepts of testing in the law. If correctly written and applied then this may actually help and an EU-wide standard may reduce some of the problems seen with discrepancies and difficulties in interpretation across member states.

Password Security

Sony hack reveals password security is even worse than feared • The Register

I was going to comment on something similar to this after my previous posts highlighting the generally poor user security awareness across the enterprise AND consumer spaces. The article is useful as an indicator of where the problem lies, and gives me the chance to make a couple of additional comments.

The common advice regarding passwords is to:

  • keep them complex;
  • change them regularly;
  • use a unique one for each application/system;
  • don’t write them down.

The obvious problem is that the more we follow the first three of those points, the more likely people are to need some easy way of remembering their passwords – writing them down, or otherwise documenting them can be a good way of doing that.

There are better solutions – SSO (‘single sign-on’), or password lockers (typically protected with a master password), can help with this – even the option to remember a password in a browser can help (note that, conceptually, this is no different from writing it down, but it is likely to be less obvious and may be otherwise protected).

Attacks against password stores, as mentioned, provide some very interesting points of analysis – in particular, breaches of stores at different sites/hosts can be compared to measure how common password reuse is, which provides a good case against the practice. It’s a good example, visible to anyone, of why reuse is a bad idea.
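The cross-breach comparison is simple in principle: join two credential dumps on email address and count matching passwords. A minimal sketch, using entirely made-up data (no real breach material):

```python
# Hypothetical illustration: measuring password reuse across two breach dumps.
# Each dump maps an email address to the recovered plaintext password.
site_a = {
    "alice@example.com": "Summer2011",
    "bob@example.com": "hunter2",
    "carol@example.com": "qwerty123",
}
site_b = {
    "alice@example.com": "Summer2011",   # same password reused across sites
    "bob@example.com": "correcthorse",   # different password at each site
    "dave@example.com": "letmein",       # appears in only one dump
}

# Accounts present in both dumps, and those using the same password at both.
shared = site_a.keys() & site_b.keys()
reused = [email for email in shared if site_a[email] == site_b[email]]
reuse_rate = len(reused) / len(shared)

print(f"{len(reused)}/{len(shared)} shared accounts reuse the same password "
      f"({reuse_rate:.0%})")
```

Published analyses of real overlapping breaches ran exactly this kind of join, at much larger scale, which is what made the extent of reuse measurable at all.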

On the other hand, it’s perfectly reasonable to argue that it shouldn’t matter – if user credentials were stored securely then we wouldn’t have the information to even begin this analysis. Attempting to educate the users of a system in security is pointless if the admins and owners of that system can’t do the basics. Add to that the sometimes conflicting messages and the lack of sense shown by some security wonks, and it’s no wonder that users are the weak link in the process.
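“Stored securely” here means, at minimum, a salted, deliberately slow hash rather than plaintext. A minimal sketch using only Python’s standard library (the iteration count and parameters are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, slow hash; store the salt and digest, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("password1")           # a typically weak user choice
print(verify_password("password1", salt, digest))   # True
print(verify_password("password2", salt, digest))   # False
```

Even with a weak password, a per-user random salt and a high iteration count mean a stolen store cannot simply be compared against another site’s dump – which is precisely the analysis the plaintext breaches made possible.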

Security teams would do well to get the basics right in systems as well as demanding more from people. Humans are the problem, but focusing on technical restrictions on passwords is not the place to start. No matter how simple or oft-used a password is, the simplest attacks are against those that are told to someone else, either electronically (such as through phishing) or through bribery, such as with a bar of chocolate.

Of course, even aside from bribery there are other ways of getting a password, no matter what security is put in place.

xkcd security

(from the always excellent xkcd comic). This concept is traditionally known as a rubber-hose attack and is the best indication of the weakness of the flesh in security.

Recent breaches

Stolen RSA data used to hack defense contractor • The Register

There’s a lot more analysis out there today on the Lockheed Martin hack that has led to a recall of RSA SecurID tokens. Anyone using them should demand replacements or, better still, alternatives. As the article notes, it’s difficult to trust RSA now…

It’s interesting how the use of a single security product has contributed so severely to a breach; defence in depth seems to have completely failed here. Perhaps this is a case of putting too much faith in a single product – almost along the lines of “we’re safe; we have a firewall”.

A significant point here is how organisations are entwined, so that a breach at one company can have serious implications for others – we tend to see this more with business partners (extranet services, VPNs etc.) where choices are made to allow third-party access to data, but this case blurs the distinction; security providers should be treated as business partners.

Many large companies have clauses in contracts providing the right to audit and test partner facilities – this can include running pen tests, or insisting that a validated third party does so – in essence the security domain is extended to include the wider community. With the trends we’re seeing in security as the industry reacts to changing business practices I believe the auditing of external organisations will become more prevalent.

This could be a watershed for how companies treat their security providers as well as their business partners. For those on the other side I can also see a competitive advantage in security – something that I hope will become relevant, especially in “cloud” based services.

Phone Hacking

BBC News – Phone hacking probe by Met faces scrutiny

What’s interesting to me about this ongoing story (how many years is this now?!) is the lack of detail and information from a security perspective and even the basics about what has been alleged.

From following the story I’m still not entirely sure what is meant by “phone”; does it refer to a handset itself, or a telecoms network? I’m also not sure what is meant by “hacking” in this case although I’m assuming it’s not someone jailbreaking an iPhone…

Either way this is less of an individual privacy story and more one related to criminal misuse of computer systems. Where are the network operators involved in all this? Shouldn’t they be the ones calling for an investigation, or at the very least demonstrating that the networks they run are not so easy to “hack”?

The media coverage of this whole “event” is pathetic. A sample line from the BBC Q&A (linked to from the above story) is –

Who do we know was hacked?

I’d go so far as to say that, with regard to this, nobody has been “hacked”, unless there are some related battery and ABH charges.

What’s missing is clear and concise information about what has happened. This affects all of us – individuals and businesses – who use commercial telecoms networks, not just celebrities and politicians (although I’d include them in the former category nowadays). At the very least there’s a fantastic upsell opportunity for someone…

In these days when Google and Facebook are slammed for not providing satisfactory privacy controls (even though users willingly share information on those services), I find it disgusting that the people responsible for controlling these systems are not being questioned.

Update (27 Jan 2010 @18:47): Some more information from The Register. The comments on this story indicate that there’s no real “hacking” in any true sense, but rather the exploitation of the ability to access voicemail from other ‘phones, combined with easily guessable PINs. Perhaps there’s an easy lesson to be learnt here.
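The easy lesson is mostly about keyspace: a four-digit PIN has only 10,000 possible values, and a handful of factory defaults and obvious patterns cover a large share of real mailboxes. A hypothetical sketch (the default-PIN list and the target PIN are invented for illustration):

```python
# Hypothetical illustration of why default/guessable voicemail PINs fall quickly.
COMMON_PINS = ["0000", "1234", "1111", "2580", "1212"]  # illustrative defaults

def guess_pin(check):
    """Try common defaults first, then the full 4-digit keyspace (10,000 values)."""
    tried = 0
    for pin in COMMON_PINS + [f"{n:04d}" for n in range(10_000)]:
        tried += 1
        if check(pin):
            return pin, tried
    return None, tried

# A mailbox left on a factory default is found almost immediately.
pin, attempts = guess_pin(lambda p: p == "1234")
print(pin, attempts)  # '1234' found on the 2nd attempt
```

Even without the defaults list, 10,000 attempts is trivial if the system allows unlimited tries – which is why lockouts and non-default PINs matter far more than any “hacking” sophistication.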