Google Reader

By now, anyone using Google Reader should probably know that it’s going to shut down on July 1 this year. Personally, I think it’s a shame, but not entirely unexpected.

I use Google Reader daily and in fact it’s the only Google service I still have a need for (although I’ve kept Gmail as a backup mail service – it’s good to have a secondary address that can be used for temporary communication, particularly with one-use web sites). Once the decision was taken to let the Feedburner service run down, I always assumed Reader wouldn’t be far behind.

I think there are some issues with the concept of RSS as a “consumer” technology – these implementations never quite gained the popularity I think they should have. I’ve introduced other people to the idea and seen how it can ease the way that updates are retrieved from web sites. Anyone working online should at least have tried it, but I think the perception was always that it’s a geeky tool and little used. Hopefully the outcry around the web about Google’s decision will contradict that.

One of the comments I was most interested in around this decision was from Dave Winer. Dave makes two points that are worth mentioning. The first is around his favoured “River of News” approach, which I think comes down to personal preference. One of the reasons I like the Reader-style RSS approach (or inbox style, as he refers to it) is that I don’t miss the stories – rivers mean that much is missed; if I don’t see a story while it’s still onscreen I may never see it (incidentally, this is my biggest complaint with Twitter as well). The way that I, and in my experience most other people, use RSS is to have a collection of stories that I can come back to when *I* want. If I’m out for a few days I can skim over the list and read as much or as little as I like. I also find the inbox metaphor constricting – these are not messages to me and no-one is expecting a response; if I mark all as read, no-one is going to chase me for one!

Dave’s second interesting point is also, to me, somewhat ingenuous.

Next time, please pay a fair price for the services you depend on.

I, like most people, am happy to pay for a good service, and there are paid-for services out there; unfortunately, from what I’ve seen, they are just not as good (for various definitions of good) as Google Reader. Quite frankly, everything else I’ve tried so far has fallen short. I have paid for apps that provide a front-end to Google Reader (on both iPhone and Mac), ultimately using the service as a back end. These add value to the experience.

I’ll obviously be checking out alternatives to Google Reader from now on. Any suggestions would be welcome!

Addendum: The Ars Technica story and discussion

Site updates – old content and RSS

A few things to alert any potential reader to.

First, I’m re-adding some content from a few years ago, since there are places out there that still link to it. It’s old stuff that was lost during an “episode” with my former hosting company. Probably not worth reading, but it may show up in other places.

Second, I’m changing the RSS feed in the next few weeks – currently it’s directed through Feedburner, but in future it’ll come direct from here. If you are reading this through RSS (and I can see I have some subscribers, at least one of whom may be a real person), please point your RSS reader to http://pasquires.net/feed/ instead.

Life at the edge

The edge is where things are most interesting. You get the best view, but there’s a risk of falling off.

In reality though, there are few hard edges; most things are sloped to some degree, but there’s a cut-off point in our perception where we interpret something as a cliff rather than something we’d roll cheese down to chase. Most scenarios are covered (or can at least be partially understood) by normal distribution curves (or bell curves, if you will). Another way to think of this, and possibly more common in the IT world, is the Pareto principle. Even without knowing the details, pretty much everyone seems to know the underlying 80:20 rule and the associated distributions (land ownership, wealth, effort/results), along with a basic understanding of marginal utility (an extra hundred pounds is more valuable to someone who has one hundred pounds than to someone who has a thousand).

These are incredibly useful for designing any system – whether it be security, economic or political – knowing that we can achieve 80% of the results with 20% of the effort and cost. So long as we do that, it may well be good enough. Useful as it is, this can be an insidious process – the problem comes at the back end. Once we’ve achieved 80% of the results we’ll need to put in four times as much effort to achieve the rest (of course, the rule also applies here – the last one percent of anything is the most difficult).
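For the sake of illustration, here’s a rough Python sketch of what happens if you read the 80:20 rule recursively, with each round of work buying 80% of whatever results remain. The numbers are purely illustrative, not a model of any real project.

```python
# Rough sketch: the 80:20 rule applied recursively. Each round, 20% of the
# remaining effort buys 80% of the remaining results. Illustrative only.

def pareto_rounds(rounds: int = 4) -> None:
    effort_spent, results_achieved = 0.0, 0.0
    effort_left, results_left = 1.0, 1.0
    for i in range(1, rounds + 1):
        effort_spent += 0.2 * effort_left
        results_achieved += 0.8 * results_left
        effort_left *= 0.8
        results_left *= 0.2
        print(f"round {i}: {effort_spent:.0%} of effort -> {results_achieved:.1%} of results")

pareto_rounds()
# round 1: 20% of effort -> 80.0% of results
# round 2: 36% of effort -> 96.0% of results
# round 3: 49% of effort -> 99.2% of results
# round 4: 59% of effort -> 99.8% of results
```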

Two areas that often overlap are great examples of where this can, and does, go wrong – security and legislation. Let’s think of a simple example where a company wants to block employees from accessing Facebook during working hours. A block can be implemented in many technical ways, requiring almost zero effort in most cases; but then add an exception for someone who actually needs to work with it and the effort increases. Even worse, we realise that Facebook isn’t a problem any more since everyone has moved to Diaspora, or people are using Twitter. Perhaps a more stringent policy on social media is needed, so let’s start by defining what that is. Regardless, people can use anonymous proxies, SSH tunnels or even just their phones to access the services anyway. We quickly escalate from a simple policy with an accompanying solution to something almost impossible to maintain, costly and ultimately ineffective.
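As a purely hypothetical sketch of how that escalation looks in practice (the domains and group names below are made up, not from any real policy), even a trivial blocklist starts to sprout exception rules the moment real requirements arrive:

```python
# Hypothetical sketch of how a "simple" block policy accretes exceptions.
# Domains and group names are illustrative, not from any real policy.

BLOCKED_DOMAINS = {"facebook.com", "twitter.com", "diaspora.example"}

# Every exception someone "actually needs" is another rule to maintain...
EXCEPTIONS = {
    "marketing_team": {"facebook.com", "twitter.com"},
    "press_office": {"twitter.com"},
}

def is_allowed(user_group: str, domain: str) -> bool:
    """Allow anything not on the blocklist, plus per-group exceptions."""
    if domain not in BLOCKED_DOMAINS:
        return True
    return domain in EXCEPTIONS.get(user_group, set())

print(is_allowed("engineering", "facebook.com"))      # False
print(is_allowed("marketing_team", "facebook.com"))   # True
# ...and none of this stops anonymous proxies, SSH tunnels or phones.
```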

We see these issues all the time in our legislation (here’s a good example from the USA), tax codes and security implementations. From a pure security point of view, this is why things are so reactive and why we end up with the “whack-a-mole” implementations we have. This is where the edge is important.

Security decisions taken for the majority will need exceptions – even something relatively simple such as configuring an AV solution will have many options; one size doesn’t fit all and many security professionals choose not to run AV at all. To follow our analogy, some people want to lean over and look down, others want to be held back. Appetite for risk, availability of resources and many other factors will affect our security decisions, but building flexibility in is vital.

Stepping too close isn’t big, clever or usually recommended; but there are a number of situations where it’s required, and our security processes and solutions should bend in those cases where they can.

McKinnon

Gary Mckinnon Extradition To US Blocked By UK Home Secretary | Techdirt

I was planning on making a quick comment on the Gary McKinnon case, but the Techdirt article linked above is one of the few I’ve seen actually making most of the valid points about this whole mess, so it deserved a special mention.

The question here is not whether McKinnon is criminally culpable for his actions (that can be established later), but whether it’s right to send him to stand trial in a country he’s never been to when he already falls within the jurisdiction of UK courts and is obviously suffering health issues. I made a more flippant point previously on the question of jurisdiction in these types of cases: assuming the act is a crime in the place where a person is physically located, it’s best handled in that country. To do otherwise means we can end up in a situation where a crime can be simultaneously committed in multiple territories (think of accessing data from cloud-based services), or where someone doesn’t know where they have committed a crime, or even whether their actions are illegal in that location. That’s no way for the law to work. Extradition is a perfectly valid process in cases where a person has left the prosecuting jurisdiction and certainly has a place in the world; just not here.

I hope that, whatever the truth here, justice prevails, and I believe this is a good step towards making that happen. It also seems that we have a chance of doing something about the one-sided extradition process we have with the USA.

As an aside, I was forced to draw parallels with recent extradition cases (Abu Hamza et al) and reconcile differing views. I think there are some valid points in that discussion, but ultimately we are looking at questions of citizenship and the nation’s duty of care to those people; more importantly, in those cases it is a fact that no trial could ever happen here in the UK – extradition was required for any real criminal proceedings.

Security as an advantage

This week has seen a lot of activity in the security world around one of the largest companies in Britain – Tesco. What’s unusual about this, certainly compared to most “security” news, is that there’s been no notified data breach. Efforts conducted by Troy Hunt in particular (and well documented at his web site – Lessons in website security anti-patterns by Tesco) have identified a number of potential security issues with Tesco’s online presence.

Tesco have made some responses (additional coverage at SC Magazine) and I’m sure we’ll see additional news on this.

Tesco aside, what this highlights is that most people aren’t aware of what security is in place, or should be in place, for their online transactions. Not everyone has the time, ability or stubbornness of people like Troy to investigate and follow through with enough knowledge to get past the anodyne responses. This is an example of why a knowledgeable and semi-independent security assessment is something any organisation should have. That’s not to denigrate some of the fine people who work at Tesco – all of us sometimes need an extra set of eyes and ears, if only to challenge assumptions. Luckily, here, the problems have been identified before there’s a serious issue.

One of the basic issues here is that security is hard – even if everything has been done “right”, it may still lead to a problem. This is one of the reasons it’s good advice for users to use different passwords – even if you trust the people you give a password to, you can never be sure it won’t get leaked. If you use the same username and password combination on multiple sites (or worse, for your e-mail access itself) then any password leak on one of those compromises a large amount of your online presence. Even a low-value breach (a blog, for instance) escalates if those same credentials are used at a shopping site that has your credit card number stored and allows quick purchasing.

Security is about layers of defence – not assuming that each layer will hold, but mitigating and minimising the risk if it doesn’t. This, incidentally, is one of the issues with the “padlock” icon in browsers – it gives a false sense of security. Users are one of those layers and should assume that whatever the provider has in place may not be enough…

One of the difficulties with any form of security is when it meets issues such as finance, usability, compliance or legislation head-on. The latter two in particular are insidious, often being used as a replacement for security (we’ve complied with XYZ policy) or even being antithetical to it. In large organisations especially, the challenge of building a culture of good practice against those pressures is immense. There may even be good and acceptable reasons for what at first appears to be bad practice.

That said, I’m wondering if these types of events may be the trigger for security as a competitive advantage. Would a (non-security) person actually choose to shop online at one store over another due to security deficiencies? If not, at what point would that happen?

Back & Forth

Prompted by switching a personal computer to a Mac recently, I’ve been looking at more software than I usually would. One particular area is that of e-mail clients.

For work purposes, Outlook rules. Together with Exchange it provides an email experience, with the associated calendaring and contact management functions, that is difficult to beat. However, this isn’t what I want for home usage.

My choice of e-mail client for personal use is Thunderbird, paired with an appropriate CalDAV-based system for a calendar. Of course, the data there also goes to my phone, which brings us closer to the point: apps.

Many years ago computing was performed largely on big machines accessed through terminals, but of course you knew this. We then moved slowly towards having clever terminals running applications locally, and then slowly back towards centralised systems. In my short time in the IT industry I think I’ve seen this trend occur fully twice. The reason, aside from fashion and wanting to sell something new, has typically been swings between the relative merits of computing power and bandwidth. If bandwidth is low and computing power is high, then distribute everything out.

The past few years have seen a trend – move services online, share data, make it social, make it cloudy. This has led to a reduction in the use of desktop applications; just put everything in the browser – e-mail, documents, even games. The browser becomes nothing more than a terminal, enabled with scripting to have a little more intelligence.

The reason this is so interesting right now is because of a counter-trend – mobile devices, the apps they provide and, in some cases, the security challenges they give us. Small screen sizes and intermittent connectivity mean that, in some cases, locally held data, and therefore apps, provide a better user experience. Add to that the need to input passwords regularly (Google especially seems to love invalidating cookies just as I’m doing something useful) and you can see why an app is better on these platforms.

The world is changing again – last year Apple released the Mac App Store, the idea being that software delivery on “real” computers would become more like that of a mobile device. The new version of OS X also brings many UI “innovations” over from iOS. Windows 8 will blur the boundaries further. Essentially, things should be the same whether you’re using a phone, a tablet or a laptop. Suddenly, where popular wisdom was to put everything in the browser, we’re back with apps – even with something as quintessentially webby as Twitter, I’m using an app.

This needless waffling brings us back to the point. Email. I use IMAP. For most purposes this is much better than POP (storage and processing is cheap, get over it). Anyone using a phone for email will typically use an app; even on an Android device, produced by Google, mail is an app. However, for some reason this doesn’t seem to translate to the desktop. A quick search for “twitter” in the Mac App Store results in dozens of free apps to access that particular service, but the number of fat client applications for e-mail is low and dropping.
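As an aside, and to show how little client-side machinery IMAP actually needs, here’s a minimal sketch using Python’s standard imaplib module; the host and credentials are placeholders rather than real details.

```python
# Minimal IMAP sketch using Python's standard library.
# Host, username and password are placeholders, not real details.
import imaplib

HOST, USER, PASSWORD = "imap.example.com", "me@example.com", "app-password"

with imaplib.IMAP4_SSL(HOST) as conn:
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)        # read-only: don't change flags
    status, data = conn.search(None, "UNSEEN")
    unseen = data[0].split()
    print(f"{len(unseen)} unread messages")
    # The messages stay on the server, so the phone app, the desktop client
    # and the webmail view all see the same state - which is the point of IMAP.
```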

In the past few weeks we’ve seen Mozilla announce that Thunderbird development will be scaled back, and Sparrow has been bought by Google. Aside from the applications shipped with the OS, there are very few actively developed options available for what is still one of the key uses of the Internet.

And that’s the thing – with social media, instant and online messaging, plain old e-mail isn’t sexy. BUT so many other things rely on it – information from banks and retailers is one thing, but you can’t even sign up for other online accounts without an email address, and what happens when you need your password changing…

Plus, it’s still a very effective way of communicating with people.

In the long term, usage of any application or protocol will change (we’re seeing it with Twitter already), but I genuinely struggle to believe that we have an entire generation of computer users who now think that e-mailing is an in-browser activity (for goodness’ sake, half the time I press the backspace key I lose everything I’ve done), so there must be something else at play. Are clients too difficult to use, too difficult to configure, or just too clunky?

Needles and the weakest link

My Haystack: Is finding that one needle really all that important? (Hint: Yes it is.)

Ed Adams raises some good points in his article, specifically around the increase in coverage of breaches (I’m still not 100% sure whether there is a true increase or just more reporting) and the passive, reactionary response of “spend more on ‘x’ technology”. The reality, as he points out, is that there’s no way to guarantee the security of any system, and the analogy of a “needle in a haystack” is quite an interesting one. Although the focus is on application security, the principles are useful to us all.

Extending that somewhat, we look at security as being a race – the attacker is looking for the needles, you’re trying to find them and remove them before he can get them. Getting rid of the obvious needles is our first task, but no matter what we do we can never be truly certain that there are none left. All we can do is reduce the probability of someone else finding one to such a degree that they give up. This is a reason why regular testing is so important – how else does one get to the needles first?
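As a back-of-the-envelope illustration (the figures below are invented), if each probe an attacker makes has some small chance p of hitting a remaining flaw, the chance of at least one hit over n probes is 1 - (1 - p)^n; removing the obvious needles first is what drives p down.

```python
# Illustrative only: chance an attacker finds at least one remaining flaw,
# assuming each of n independent probes succeeds with probability p.

def p_at_least_one_hit(p: float, probes: int) -> float:
    return 1 - (1 - p) ** probes

for p in (0.01, 0.001, 0.0001):    # the effect of removing the obvious needles
    print(f"p={p}: {p_at_least_one_hit(p, 1000):.1%} chance over 1000 probes")
# p=0.01:   100.0% (99.996%, to be precise)
# p=0.001:  63.2%
# p=0.0001: 9.5%
```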

Unfortunately, attackers tend to come in one of two types. Some are opportunistic and will move on to easier targets (compare someone checking cars for visible money or valuables); others are focused on a certain goal. Depending on what type of attacker we face, our level of “good enough” may change. Remember, you don’t need to outrun the lion, only the slowest of your friends…

Determined attackers also present another challenge. We often talk about weak links (another analogy) in a security process. What’s missed here is that there is always a weakest link – the television programme of the same name teaches us that even if everyone appears to be perfect, someone must still go. After removing (or resolving) that link, another becomes the weakest, and so on. The lesson is that we can never stop improving things – as Ed wisely says, new attack surfaces will arise, situations will change and our focus will have to adapt.

If we always see security as a cost of doing business we can let it get out of control; building it into processes, training, applications and infrastructure will dramatically reduce that cost, but ultimately there’s a limit – it’s irrational to spend more on securing something than its value (whether that spend be on infrastructure, training or testing). This is why compliance and regulatory control has been such a boon for the security industry – it’s not perfect (by any means) but it focuses minds by putting a monetary value (in fines or reputation) on otherwise intangible assets.
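One common, if crude, way of putting a number on that limit (not something from the article, just the textbook annualised loss expectancy calculation) looks like this; every figure below is made up for illustration:

```python
# Annualised loss expectancy (ALE) = single loss expectancy (SLE) x
# annual rate of occurrence (ARO). All figures are made up for illustration.

asset_value = 200_000       # cost of a full breach of this data set
exposure_factor = 0.25      # fraction of that value lost per incident
sle = asset_value * exposure_factor
aro = 0.5                   # expect roughly one incident every two years

ale = sle * aro
print(f"ALE = £{ale:,.0f} per year")    # £25,000

# Spending much more than this per year on controls for this one risk is the
# "irrational" case described above; compliance fines simply add to the loss
# side of the equation, which is why they focus minds.
```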

Of course, the attacker has a similar constraint – there’s no point in spending more to acquire data than its value, but this is more difficult to quantify and shouldn’t be relied on from a defensive point of view; motives can be murky, especially if they’re in it for the lulz.

Hacking Tools and Intent

EU ministers seek to ban creation of ‘hacking tools’

As I read this story on various sites this morning I was reminded of the old quote – “If cryptography is outlawed, only outlaws will have cryptography”. Attempting to ban tools that may be used for “hacking” is quite extraordinary – as with many of these things, the devil is in the details.

Generally, tools have multiple uses – the tool itself does not determine intent. Outside of the IT world, people may own guns for hunting, sport or even self-defence. The argument that every gun is bad is quite specious (no matter what an individual’s thoughts on the matter are).

The same is true of a security tool – things that may be used to secure may also be used to break in, whether in the physical world or in IT. The comment in the article regarding password cracking/recovery tools is a good one, but the situation is exacerbated when we look at testing.

The whole point about security testing is to check whether the “bad guys” can perform certain activities, but under a controlled and known scenario – the risk can be understood without having the impact of real malicious activity. There’s a simple question of how a valid test can be done without using tools designed for “hacking”.

It’s already a criminal offence in the UK to supply or make something that is likely to be used in offences – including “hacking”, DoS (‘denial of service’) or data modification – under the Computer Misuse Act 1990 (‘CMA’) (as amended). Unfortunately this leaves a lot open to interpretation and confusion. There have been successful prosecutions under the act, but they include such crimes as sending too many emails, thereby creating a DoS attack (in the case in question, ‘R v Lennon’, the defendant had deliberately set out to achieve this, but the tool in question was merely one designed to send emails).

Although not directly in the CMA, the prosecutor’s guidance notes do point out that a legitimate security industry exists that may create and publish such applications (“articles” in the wording of the act) and that tools may have dual uses. This does give a situation where a tool may be created and possessed by someone for legitimate reasons, distributed to a second person for apparently similar reasons, but then used by that second person for unlawful purposes, for which they may then be prosecuted.

Based on this guidance, things may not be all bad, but there’s still a lot of work to do in legitimising the concept of testing in the law. If correctly written and applied, this may actually help, and an EU-wide standard may reduce some of the problems seen with discrepancies and difficulties of interpretation across member states.

Data in the cloud

Who cares where your data is? – Roger’s Security Blog – Site Home – TechNet Blogs

There are many issues with data security as soon as we start discussing the “cloud”. Handing control of your data to third parties is pretty obviously something that should take more thought than it does. One thing people forget is to think about the data itself – who owns and controls the email addresses of your customers? The moment they’re on Salesforce (to pick an example), Salesforce has that data – very few people encrypt the data they give to their service providers; the data and the service become somehow conflated.

Roger picks out a great point which brought back to me my favourite argument against cloud services. At a basic level, the cloud does not exist – what do exist are servers and drives containing data. At any time, you, as a “cloud” customer, have no idea where your data resides – is it in the USA (the country that searches laptops that come across the border), is it in China, is it in Libya? Only the “cloud” provider may know. This may seem like a superficial point, but something very serious lies beneath, in that different countries retain their own controls over what is acceptable. Whilst we in the UK and Europe think that online gambling is fine, it’s not in the USA – what if a “cloud” provider puts data relating to such activities into the USA?

Just to drive home the point – “cloud” customers also have no idea whose data resides on the same hardware as their own. If a criminal or terrorist organisation (in a particular country; obviously definitions vary wildly) happens to share the same services as you, what are the chances your data could be caught up in a raid and analysed along with theirs?

All these points serve to remind us that the cloud does not exist. What does exist is a series of buildings, housing servers, that happen to have Internet connections. There’s a huge difference.

Password Security

Sony hack reveals password security is even worse than feared • The Register

I was going to comment on something similar to this after my previous posts highlighting the generally poor user security awareness across both the enterprise AND consumer spaces. The article is useful as an indicator of where the problem lies, but it gives me the chance to make a couple of additional comments.

The common advice regarding passwords is to:

  • keep them complex;
  • change them regularly;
  • use a unique one for each application/system;
  • don’t write them down.

The obvious problem is that the more we follow the first three of those points, the more likely people are to need some easy way of remembering their passwords – writing them down, or otherwise documenting them can be a good way of doing that.

There are better solutions – SSO (‘simplified sign-on’) or password lockers (typically protected with a master password) can help with this; even the option to remember a password in the browser can help (note that, conceptually, this is no different from writing it down, but it is likely to be less obvious, or otherwise protected).
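As a small sketch of the “unique, complex password per site” part of that advice (the site names are placeholders), Python’s standard secrets module is enough to do the generating; a password locker then does the remembering for you.

```python
# Generating a unique, complex password per site with the standard library.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One credential per site, so a leak at the blog doesn't open up the shop.
for site in ("blog.example.com", "shop.example.com"):
    print(site, generate_password())
```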

Attacks against password stores, as mentioned, provide some very interesting points of analysis – in particular, breaches of stores at different sites/hosts can be compared to see how common password reuse is, which provides a good case for arguing against the practice. It’s an example anyone can see of why it’s a bad idea.

On the other hand, it’s perfectly reasonable to argue that it shouldn’t matter – if user credentials were stored securely then we wouldn’t have the information to even begin this analysis. Attempting to educate users of a system in security is pointless if the admins and owners of that system can’t do the basics. Add to that the sometimes conflicting messages and the lack of sense shown by some security wonks and it’s no wonder that users are the weak link in the process.
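On the “stored securely” point, here’s a minimal sketch of what the basics look like, using salted PBKDF2 from Python’s standard library; the iteration count and other parameters are illustrative rather than a recommendation.

```python
# Salted password hashing with PBKDF2 from the standard library.
# Iteration count and parameters are illustrative, not a recommendation.
import hashlib
import hmac
import os

ITERATIONS = 200_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                  # unique salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))   # True
print(verify_password("password1", salt, digest))                      # False
```

A store built this way leaks salts and slow hashes rather than reusable passwords, which removes much of the raw material that the reuse analysis above depends on.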

Security teams would do well to get the basics right in their systems as well as demanding more from people. Humans are the problem, but focusing on technical restrictions on passwords is not the place to start. No matter how simple or oft-used a password is, the simplest attacks are against those that are told to someone, either electronically (such as through phishing) or through bribery, such as with a bar of chocolate.

Of course, even aside from bribery there are other ways of getting a password, no matter what security is put in place.

[Image: the xkcd “Security” comic]

(From the always excellent xkcd comic.) This concept is traditionally known as a rubber-hose attack and is the best indication of the weakness of the flesh in security.