Are your RSS feeds a security risk?

We’ve all done it: you’ve had a long day at work, you write a quick blog entry when you get home late at night, forget to spell-check it and release it to the masses, blissfully unaware. The next morning you read the drivel you posted and edit out the grammar and spelling mistakes, hoping that nobody noticed.

Only, they did.

It showed up in their feed-readers as soon as you hit publish, archived for all of eternity should they so wish. In this instance it’s only a few spelling mistakes and typos; it’s not a huge deal, and in all honesty only the particularly finicky would have noticed anyway.

But what about when it’s something a little more controversial?

Picture this, if you will: you’ve just had a seriously bad dealing with a supplier/client/colleague and, in the heat of the moment, you log onto your blog and write a lengthy rant declaring their incompetence. Five minutes later realisation hits home and you delete the entry immediately. Phew. Or not. What you forget is that a (possibly large) number of people may already have received that post in their feed-readers, and your seemingly final act of deletion is rendered completely useless if they are slow to refresh their feeds – or worse – save the article on purpose.

That’s not the only scenario of doom and despair. Say, for instance, your company has a blog with a hidden category that only employees can post to and view. This category contains material that would be dynamite in the wrong hands, and oops – you just forgot to choose the category and published to the default, sending the post out to all of those lovely feed-readers. There’s no undo; you could take your entire web presence offline and those feed-readers would still have a copy of your private moments.

Then there’s the personal blog. You’ve got a few categories for personal (let’s say, emotional) entries; maybe you’ve got one for ideas for the next big invention/startup, and another for your terrible love poetry. You mark these entries as private and think nothing of writing them once you get into the swing of things. Occasionally you’ll forget to mark one as private, displaying it on your front page for all of ten seconds until you realise – but that’s okay, you fixed it.

Wrong.

RSS feeds could ruin your business, livelihood, relationships and reputation if not given the appropriate consideration. Don’t get me wrong, I love them and use them excessively, but let’s not for a second consider feeds to be harmless, useful little channels through which we spread our news. Let’s take them as seriously as we would take committing something to print in a very large publication, because effectively that’s what we’re doing – and the internet has a far bigger reach than any printed publication.

So, what can we do about it?

Obviously, we can all take a lot more precautions. Read, re-read and re-read again. Do an all-systems check before launching, and generally exercise a lot more caution. But personally, I think that we as web developers can do more.

Let’s look into flagging entries for changes, and the ability to disable local caching in feed-readers. Let’s put a delay on RSS publication – say, 15 minutes after the article itself is published. Let’s develop the RSS spec to take these measures into account, and let’s work together to find more solutions to a problem we have barely acknowledged exists. Because if we don’t, we may live to regret it, and we all know that prevention is far, far better than cure.
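To make the delay idea concrete, here’s a minimal sketch in Python of a feed that embargoes fresh posts. The Post store, the feed_items function and the 15-minute window are all my own illustrative assumptions – nothing here comes from the RSS spec or any particular blog engine:

```python
# Minimal sketch: the feed endpoint withholds anything published
# within the last EMBARGO minutes, so typos and regretted rants can
# be fixed or deleted before any feed-reader ever sees them.
# Post, feed_items and the 15-minute value are illustrative
# assumptions, not part of any spec or blog engine.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

EMBARGO = timedelta(minutes=15)  # the delay applies to the feed only

@dataclass
class Post:
    title: str
    url: str
    published: datetime

def feed_items(posts: list[Post], now: datetime | None = None) -> list[Post]:
    """Return only the posts old enough to appear in the RSS feed."""
    now = now or datetime.now(timezone.utc)
    return [p for p in posts if now - p.published >= EMBARGO]

now = datetime.now(timezone.utc)
posts = [
    Post("Old entry", "https://example.com/old", now - timedelta(hours=2)),
    Post("Hot-headed rant", "https://example.com/rant", now - timedelta(minutes=5)),
]
for p in feed_items(posts):
    print(p.title)  # prints only "Old entry"; the rant waits 10 more minutes
```

The site itself can still show the post immediately; only the feed lags behind, giving you a window for second thoughts.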

Is Google Analytics reliable enough for client work?

Last week’s massive data loss has got me reconsidering whether or not Google Analytics is the right solution for my clients.

I’ve long been aching for a logfile-based stats package, as I’m utterly fed up with implementing and changing goals and filters and having to restart the stat-count from the moment of implementation. A logfile-based solution will apply your new filter/goal retroactively and instantly give you some stats to work with. I’m tired of waiting a month after implementing the smallest tweak before being able to report anything back to the client.
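That retroactivity is the whole appeal, and it’s trivial once the raw requests are on disk. Here’s a rough Python sketch – the log pattern, file name and goal URL are hypothetical stand-ins, not from any real stats package:

```python
# Sketch of why logfiles beat tag-based tracking for late-defined
# goals: a goal defined today is applied retroactively to the whole
# log history. The log path and goal URL below are assumptions for
# illustration only.
import re
from collections import Counter

# Simplified pattern for the Apache/nginx "combined" log format.
LINE = re.compile(r'\S+ \S+ \S+ \[(?P<date>[^:]+)[^\]]*\] "(?:GET|POST) (?P<path>\S+)')

GOAL_PATH = "/checkout/complete"  # hypothetical goal defined today

def goal_hits_per_day(logfile: str) -> Counter:
    """Count goal completions per day across the entire log history."""
    hits = Counter()
    with open(logfile) as f:
        for line in f:
            m = LINE.match(line)
            if m and m.group("path").startswith(GOAL_PATH):
                hits[m.group("date")] += 1  # keyed by e.g. "12/Mar/2008"
    return hits

for day, count in sorted(goal_hits_per_day("access.log").items()):
    print(day, count)  # crude ordering, good enough for a sketch
```

Define a goal today and you immediately get numbers for every day the log covers – no month of waiting.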

However, a logfile-based solution with the advanced features and adaptability of Google Analytics is not cheap. Or pretty. Of all the systems I’ve been able to test, Urchin (which Google Analytics is actually based on) is the only one that comes even remotely close to the functionality I need – and it costs an arm and a leg for it.

I’ve looked past the JavaScript-only counting, I’ve endured the countless downtimes, I’ve put up with the 6–24 hour delay in reporting, and I’ve forgiven the odd data loss, but the (possibly) final straw for me has come in the shape of last week’s massive data loss. For over a week, e-commerce tracking data was missing on at least nine of my major accounts – and e-commerce tracking is the most important feature of Google Analytics for me. This has completely skewed my conversion rates, my goal conversion rates, my average transaction counts and values, and so on. Basically, that one week of lost data can mean writing off the entire month for reporting. The only thing worse than not reporting at all is reporting bad data.

Of course, who am I to complain when Google Analytics is a free service? Well, I’m a user. They’ve lured us all in with a gorgeous interface, awesome functionality and the best price tag in the world – having coerced most of us away from paid solutions – so in my opinion it’s their responsibility to provide a service that’s equally reliable. Why? Because it was their intention to make us reliant upon them. It’s a completely closed platform with no interoperability or export function to speak of. Once you go Google, you can’t go back. That’s another reason to prefer a logfile-based solution: logfiles are controlled by you.

I’m going to do some research this week to find an alternative solution. I’ve been playing with Woopra a lot recently and I love it, but again it’s JavaScript-based and appears to have little conversion-rate and marketing tracking built in at this stage. Great for brochure sites, not quite there yet for the big guns.

The most frustrating part of the recent outage is that I had our lead programmer spend the better part of a day combing the checkout processes on the sites that were missing data, hunting for any inkling of a bug. I briefly toyed with the idea that it could be Google’s fault, but thought better of it, as normal visits, page views and so on were still being counted – it was only the e-commerce tracking that was missing. That wild goose chase equated to, at the very least, £750 of lost billable time.
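In hindsight, a dumb automated sanity check would have pointed at the tracking rather than the checkout. Here’s a sketch of the idea in Python – the daily-totals input format and the 20% threshold are my own assumptions:

```python
# Sketch of a sanity check that flags a likely tracking outage:
# if pageviews look normal but recorded transactions collapse,
# suspect the analytics pipeline before spending billable hours
# bug-hunting the checkout. Input format and threshold are
# illustrative assumptions.
def suspicious_days(daily: dict[str, tuple[int, int]],
                    threshold: float = 0.2) -> list[str]:
    """daily maps a date to (pageviews, transactions).

    Flags days where transactions fall below `threshold` of the
    site's usual transactions-per-pageview rate.
    """
    total_views = sum(v for v, _ in daily.values())
    total_tx = sum(t for _, t in daily.values())
    baseline = total_tx / total_views if total_views else 0
    flagged = []
    for day, (views, tx) in sorted(daily.items()):
        expected = views * baseline
        if expected > 0 and tx < expected * threshold:
            flagged.append(day)
    return flagged

# Traffic is steady, but transactions vanish mid-week:
stats = {
    "2008-05-12": (10_000, 120),
    "2008-05-13": (9_800, 115),
    "2008-05-14": (10_200, 3),  # tracking outage, not a checkout bug
}
print(suspicious_days(stats))  # ['2008-05-14']
```

A report like that on day one would have saved the better part of a day of programmer time.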

That’s probably enough to warrant an Urchin licence (£1500 for 100 sites).