Recent breaches

Stolen RSA data used to hack defense contractor • The Register

There’s a lot more analysis out there today on the Lockheed Martin hack that has led to a recall of RSA SecurID tokens. Anyone using them should demand replacements or, better still, alternatives. As the article notes, it’s difficult to trust RSA now…

It’s interesting how the use of a single security product has contributed so severely to a breach. Defence in depth seems to have completely failed here. Perhaps this is a case of putting too much faith in a single product – almost along the lines of “we’re safe; we have a firewall”.

A significant point here is how organisations are entwined, so that a breach at one company can have serious implications for others. We tend to see this more with business partners (extranet services, VPNs etc.), where deliberate choices are made to allow third-party access to data, but this case blurs the distinction; security providers should be treated as business partners too.

Many large companies have clauses in contracts providing the right to audit and test partner facilities – this can include running pen tests, or insisting that a validated third party does so – in essence the security domain is extended to include the wider community. With the trends we’re seeing as the industry reacts to changing business practices, I believe the auditing of external organisations will become more prevalent.

This could be a watershed for how companies treat their security providers as well as their business partners. For providers on the other side, I can also see security becoming a competitive advantage – something I hope will become relevant, especially in “cloud”-based services.

Security as a feature

Apple iOS: Why it’s the most secure OS, period

Some interesting analysis on why the iOS platform can be considered secure – largely as a result of the level of control that Apple maintains over the hardware, OS and available applications. That control is exercised for commercial purposes, not because of any inherent choice made for the sake of security.

To me, this opens up some interesting questions about the security design of the variety of programmable machines we now use – ranging from true “general purpose computers” to fixed-function devices – and where a “phone” sits on that spectrum. We’ve come a long way in the mobile device world in a very short time: modern phones have more in common with our desktop computers than with first-generation mobiles.

One potential cause of security issues (particularly in embedded or specialist systems) is allowing the device to do too much, in a basic betrayal of the principle of least privilege. If I’m making, for example, a domestic refrigerator, I probably don’t need to include an HTTP server – unless I want to start adding “features” such as inventory checking over a network (because, y’know, it’s easier than opening the door). The issue then becomes that the HTTP server in question is configured by people who manufacture ’fridges, not by experts in Apache (or IIS!).
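To make that concrete, here’s a minimal sketch (my own illustration, not from the article) of pulling the Server banner out of a raw HTTP response – often the quickest way to discover that a device is quietly running an ancient embedded web server:

```python
def server_banner(raw_response):
    """Extract the Server header from a raw HTTP response.

    Embedded devices tend to announce exactly which (often outdated)
    server software the manufacturer bundled.
    """
    head = raw_response.split(b"\r\n\r\n", 1)[0]      # headers only
    for line in head.split(b"\r\n")[1:]:              # skip the status line
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"server":
            return value.strip().decode("latin-1")
    return None

# Boa is a real httpd commonly found on embedded kit of this era;
# the response itself is invented for illustration.
response = b"HTTP/1.1 200 OK\r\nServer: Boa/0.94.13\r\nContent-Length: 0\r\n\r\n"
print(server_banner(response))  # Boa/0.94.13
```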

Phones (and indeed tablets) are hybrid devices – more than a ’fridge, but not as flexible as a laptop. That’s mostly a choice of the OS provider, though, and we see easy-to-use hacks (such as jailbreaking) to extend that flexibility. The problem is that, in almost all security systems, the weak link is the human element – granting greater flexibility invites security issues, such as the default “root” password on jailbroken iPhones, or the ability to run applications that have not been vetted. For those of us who are advocates of open systems this can be a dilemma – how can we give freedom, but ensure that the stupid edge of the user-base is properly blunted?

This is worse when we consider what “security” means to the vendors rather than the owners of the device – preventing people from playing unauthorised media (DRM) or using functions that would “inhibit” revenue (smartphone data tethering).

This brings us back to the question of who controls the update process for a device, and when those updates are released. The great success of Apple has been to remove control from the carriers – updates are delivered to your computer and applied when you sync media. It’s elegant, and it means more people are running the latest versions. Other devices do not fare so well – over-the-air delivery is one option, but it is potentially less reliable, uses precious mobile bandwidth and pushes your phone back to being at the mercy of whoever controls that channel.

One other point about patching: whilst it’s almost always better to patch, there have been plenty of examples where a patch has caused more problems – ranging from new security flaws and unexpected changes in functionality to rendering a device unusable. It’s taken Microsoft many years to establish a process that keeps Windows users up to date without too much worry – and even so, it’s always possible to roll back. Would that be possible with an over-the-air update that somehow renders the device unable to reconnect? Not a likely scenario, but something to consider.
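For what it’s worth, the safer over-the-air designs answer that rollback question with dual “A/B” slots: write the new image to the inactive slot, and only switch over after a health check passes. A toy sketch (all names and version strings invented):

```python
def apply_update(slots, active, new_image, boots_ok):
    """Write new_image to the inactive slot; switch only if it passes a
    health check, otherwise stay on (i.e. roll back to) the known-good slot."""
    inactive = "B" if active == "A" else "A"
    slots[inactive] = new_image            # the active slot is never touched
    return inactive if boots_ok(slots[inactive]) else active

slots = {"A": "v1.0", "B": None}
healthy = lambda image: image is not None and "broken" not in image

print(apply_update(slots, "A", "v1.1-broken", healthy))  # A (rolled back)
print(apply_update(slots, "A", "v1.1-fixed", healthy))   # B (switched over)
```

The key design point is that the update never overwrites the running system, so even a bricked image leaves the old slot bootable.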

As we get more and more connected devices, understanding the software used and its potential vulnerabilities will become more important. Being able to update those devices quickly and easily, correcting the errors, is a vital part of the system – but it will never be the most important part; the ability to work around the security issues of the human element will be.

Ultimately, Apple may have the most “secure” OS, but that’s because it’s one of the most locked down. Security is easy to achieve on any system – switch it off, lock it away somewhere impenetrable and don’t allow any inputs or outputs. Making it usable and secure is slightly tougher.


Feds Really Do Seem To Think That Linking To Infringing Content Can Be A Jailable Offense | Techdirt

The story reminded me of a point I made a while ago – regardless of anything else, neither you (my reader) nor I (the author) have any idea what will be displayed if you click on that link. At the time of writing, using the particular DNS servers provided on the wireless network I’m connected to, it’s an interesting story about how linking to infringing content shouldn’t really be an offence. Of course, given the way the Internet works, that may not be true for you (your own hosts file may resolve that name to a completely different address), and I guess the people at Techdirt could change the story at any time, which would make this post somewhat nonsensical.
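The hosts-file point is easy to demonstrate. Here’s a resolver sketch (addresses taken from the documentation ranges; nothing here is real) showing a local override winning over whatever the network’s DNS would have said:

```python
# A hosts-file-style override consulted before "real" DNS: two people
# using the same DNS servers can still end up at different addresses.
HOSTS_OVERRIDE = {
    "www.techdirt.com": "203.0.113.99",   # pretend local hosts entry
}

def resolve(name, dns_lookup):
    """Return the local override if present, else ask 'real' DNS."""
    return HOSTS_OVERRIDE.get(name) or dns_lookup(name)

fake_dns = lambda name: "198.51.100.7"    # whatever the network's DNS says

print(resolve("www.techdirt.com", fake_dns))  # 203.0.113.99 (override wins)
print(resolve("example.org", fake_dns))       # 198.51.100.7 (falls through)
```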

There’s a current trend of using URL shorteners, which seems to be related to the stupid and arbitrary 140-character limit on Twitter (itself derived from the limit on SMS message length, despite the fact that every modern phone can concatenate messages into one, making the whole thing even more absurd – but I digress…). These introduce another level of abstraction and make it utterly impossible to know what will happen when a link is clicked. Here’s an example, just to drive the point home…

For a start, notice the ccTLD is .ly – meaning the service depends on a domain controlled by Libya, so obviously nothing to worry about there. Secondly, you don’t know what site it redirects to. Thirdly, you don’t know how the people who control your DNS servers will resolve that name to an address. Fourthly, you don’t know what the HTTP server at that address actually serves as content (malware, porn, movies, live sport). Yet people click these things all the time.
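To spell out just how little a link tells you up front, here’s a small sketch that extracts the only things you can inspect before clicking – the host and its TLD – and flags a few well-known shorteners (an illustrative, far-from-complete list):

```python
from urllib.parse import urlparse

# Illustrative sample of shortener hosts; the real set is open-ended.
SHORTENERS = {"bit.ly", "t.co", "goo.gl", "tinyurl.com", "ow.ly"}

def describe_link(url):
    """Everything knowable about a link before you click it: the host,
    its top-level domain, and whether it's a known redirecting shortener."""
    host = (urlparse(url).hostname or "").lower()
    tld = host.rsplit(".", 1)[-1] if "." in host else host
    return {"host": host, "tld": tld, "shortener": host in SHORTENERS}

print(describe_link("http://bit.ly/xyz123"))
# {'host': 'bit.ly', 'tld': 'ly', 'shortener': True}
```

Note that everything interesting (the redirect target, the DNS answer, the content served) is deliberately absent – it simply isn’t knowable from the URL.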

There’s a major disconnect between the way the law wants to work and the way that things actually do work.

Phone Hacking

BBC News – Phone hacking probe by Met faces scrutiny

What’s interesting to me about this ongoing story (how many years has it been now?!) is the lack of detail and information from a security perspective – even the basics of what is actually being alleged.

From following the story I’m still not entirely sure what is meant by “phone”: does it refer to a handset itself, or to a telecoms network? I’m also not sure what is meant by “hacking” in this case, although I’m assuming it’s not someone jailbreaking an iPhone…

Either way, this is less an individual privacy story and more one of criminal misuse of computer systems. Where are the network operators in all this? Shouldn’t they be the ones calling for an investigation or, at the very least, demonstrating that the networks they run are not so easy to “hack”?

The media coverage of this whole “event” is pathetic. A sample line from the BBC Q&A (linked to from the above story) is –

Who do we know was hacked?

I’d go so far as to say that, with regard to this, nobody has been hacked – unless there are some related battery and ABH charges I’ve missed.

What’s missing is clear and concise information about what has happened. This affects all of us who use commercial telecoms networks – individuals and businesses – not just celebrities and politicians (although I’d include them in the former category nowadays). At the very least there’s a fantastic upsell opportunity for someone…

At a time when Google and Facebook are slammed for not providing satisfactory privacy controls (even though users willingly share information on those services), I find it disgraceful that the people responsible for running these systems are not being questioned.

Update (27 Jan 2010 @ 18:47): some more information from The Register. The comments on that story suggest there was no real “hacking” in any true sense – just taking advantage of the ability to access voicemail from other ’phones, combined with easily guessable PINs. Perhaps there’s an easy lesson to be learnt here.
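That lesson is mostly arithmetic: a 4-digit PIN gives only 10,000 possibilities, and if the handful of commonly reported defaults are tried first, “guessing” barely deserves the name. (The default list below is illustrative, not a claim about any particular carrier.)

```python
from itertools import product

# The complete search space for a 4-digit voicemail PIN: just 10,000 values.
all_pins = {"".join(digits) for digits in product("0123456789", repeat=4)}
print(len(all_pins))  # 10000

# Illustrative list of defaults/favourites an attacker would try before
# any "brute force" even starts.
common_defaults = ["0000", "1234", "1111", "2580"]
print([pin in all_pins for pin in common_defaults])  # [True, True, True, True]
```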

Kilt Wearing

The Big Bear in Your Mind: “The TSA: fondling fat men’s balls since 2010,” or “I got felt up by a guy with a community college security degree, and all I got was this lousy blog topic”


SubliminalPanda gives a great account of his experience as a “kilted fat man” going through airport security in the States, and makes a few good points about the stupidity of the checks and the fact that he was put through a traditional scanner instead. As I pointed out here, this “security” system cannot easily cope with such scenarios.

Things, thankfully, are still a little more sane in most European airports. Let’s hope it stays that way.


Airport Security

New aviation risk: pleats – Boing Boing

If this is true (and I have no reason to suspect it isn’t), it would be typical of the process used for “securing” our shared transport infrastructure. Recent comments by Martin Broughton on the state of this hinted at the ad-hoc nature of the rules (is an iPad a laptop?) – although, as a side note, that’s an accusation I would level at legislative bodies more generally. The rules are, of course, applied differently in different airports (do I take my shoes off? what about my belt?) and lead to confusion among all travellers, including those of us who fly a lot.

Whilst taking off outer clothing (jackets, &c.) is natural and expected, there will be incidents where multiple layers and pleats (or maybe just thick material) lead to difficulties. I can also imagine a number of cultural implications. Within English law, during a stop and search for instance, there are items of clothing that can be removed in public and others that must be removed in private (as you’d expect). I have never seen, in any airport, facilities for me to remove clothing in private and then pass through the scanners while maintaining my modesty from other passengers. In reality this isn’t an issue for me, but I can easily imagine people from certain cultures for whom it would be inconvenient or offensive – without picking out the obvious situations where someone may be fully covered, what about a Scotsman in traditional dress (a kilt and nothing underneath)?

As I write there’s a comment on the Boing Boing page linked above about possible future restrictions on clothing, which is interesting. For regular travellers I already wonder how much clothing choice is dictated by the demands of airport security – slip-on shoes, non-metallic items (never mind the liquids – not that the security people ever notice) etc. How much inconvenience are people willing to put up with?

Ultimately there will need to be changes – this relentless march towards “security” can’t last forever.

Facebook Security

There’s been a lot said about privacy on Facebook recently, which I won’t go over or comment on at the moment.

Given Google’s recent move to supporting HTTPS on its search page, I started checking other sites for the same. I was slightly surprised to find that Facebook does indeed allow HTTPS connections, and then even more surprised to find that chat doesn’t work over it (“disabled on this page”) – possibly the area where it would be most warranted.

I can’t think of a good reason why this would be the case – perhaps more investigation is required!
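As a starting point for that investigation: one common culprit when a feature breaks under HTTPS is mixed content – subresources still referenced over plain http://, which browsers warn about or block on a secure page. A quick (and deliberately naive) way to spot them; the page snippet below is invented:

```python
import re

def insecure_refs(html):
    """Find http:// src/href references in a page. Served within an HTTPS
    page these trigger mixed-content warnings, a plausible reason to
    disable a feature like chat rather than ship a broken padlock."""
    return re.findall(r'(?:src|href)="(http://[^"]+)"', html)

page = ('<script src="http://chat.example.com/chat.js"></script>'
        '<link href="https://static.example.com/style.css">')
print(insecure_refs(page))  # ['http://chat.example.com/chat.js']
```

A real audit would need a proper HTML parser and a crawl of every included resource, but even this crude check tends to surface the offenders quickly.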