
XCRI Validation: A retrospective on feedback so far

February 28th, 2012

This is the sixth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visiting the validator blog post list. My aim is to post a new blog post every week until the development is complete, then at least once every month to document both the usage of the validator as well as any community-highlighted modifications or issues.

Two weeks ago I posted a link across to the in-development version of the XCRI-CAP 1.2 validator and early adopters are already using the validator to help start their feed implementations.  In fact, in that time, the validator has had 105 visits from 49 unique visitors with 890 page views and an average of 8.5 pages per visit.  The average time on the site is over 15 minutes per visit.  Since I added more detailed logging late on Friday 24th February (3 days ago) there have been over 175 validations made using the system.  I’ve just added even further logging which will start to get collated over the next few days.  In addition to the raw metrics, two educational institutions have also contacted me with further feedback.

The great news is that, thus far, there has only been one report that actually broke the validator (a bad XPath selector, now fixed) and one report of an incorrect XPath selector (now fixed).  The feedback has also highlighted an issue in the XCRI-CAP 1.2 sample schema files (around child elements within mlo:location, if anyone is interested), an agonisingly painful typographical error on the wiki (for the Dublin Core namespace, now resolved) and also highlighted some inconsistencies in the sample XML that institutions are finding.  All of these issues are fantastic and are already being worked on by relevant parties within the Course Data Programme and the XCRI Project Team.  A fix has been produced for the child elements within mlo:location being reported in an incorrect namespace and this will filter around the various people involved for comment before being published.

There has also been some feedback around the error text being returned by the validator, in particular that which is returned for structural XML issues (rather than the XCRI-CAP file content).  This feedback has been incredibly useful and has already resulted in some modifications being made to the validator to aid future users.  In particular, additional work has been done for issues where XML elements are correctly placed in a document but are in the wrong namespace, and also where elements are correctly placed but incorrectly capitalised.
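For readers curious how this class of check works: spotting "almost right" elements means comparing an element's local name and its namespace URI separately, rather than treating the qualified name as opaque. The sketch below is Python rather than the validator's C#, and the namespace URI and element names are illustrative assumptions, not the validator's actual rule set.

```python
import xml.etree.ElementTree as ET

# Assumed namespace URI, for illustration only.
XCRI_NS = "http://xcri.org/profiles/1.2/catalog"

def namespace_and_case_issues(xml_text, expected_local="studyMode", expected_ns=XCRI_NS):
    """Return human-readable hints for elements that are almost right:
    correct local name but wrong namespace, or correct namespace but
    wrong capitalisation of the local name."""
    issues = []
    for elem in ET.fromstring(xml_text).iter():
        ns, _, local = elem.tag.rpartition("}")
        ns = ns.lstrip("{")
        if local == expected_local and ns != expected_ns:
            issues.append(f"<{local}> found in namespace '{ns}', expected '{expected_ns}'")
        elif local.lower() == expected_local.lower() and local != expected_local and ns == expected_ns:
            issues.append(f"<{local}> should be capitalised as <{expected_local}>")
    return issues

sample = (
    '<catalog xmlns="http://xcri.org/profiles/1.2/catalog">'
    '<studymode/>'                                      # right namespace, wrong case
    '<studyMode xmlns="http://example.org/wrong"/>'     # right name, wrong namespace
    '</catalog>'
)
for hint in namespace_and_case_issues(sample):
    print(hint)
```

Checks like these let the validator say something far more helpful than a generic schema error when an element is nearly, but not quite, correct.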

The in-development version of the XCRI-CAP 1.2 validator continues to be modified on a daily basis with a number of help/UI modifications being published today.  Additionally there are now 212 unique unit tests as part of the open-source solution; a number which is growing day-by-day.  If you have any comments about the XCRI-CAP 1.2 validator, please feel free to comment directly upon this blog post, or post within the “Using the validator” category on the XCRI forum.

Vocabularies and validation

February 20th, 2012

This is the fifth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visiting the validator blog post list. My aim is to post a new blog post every week until the development is complete, then at least once every month to document both the usage of the validator as well as any community-highlighted modifications or issues.

One item that has come up during the development of both the XCRI-CAP 1.2 validator and aggregator is vocabularies.  Using vocabularies for elements such as subject (optionally along with an xsi:type attribute) allows consumers of the feeds to understand the context of the information when it’s imported.  Whilst vocabularies vary according to target audience, some vocabularies such as JACS are highly-used within the UK educational sector.  It is highly likely that groups involved in the latest – and any future – JISC-funded activity around XCRI will use vocabularies such as JACS in their feeds.

In order to ensure that the validator can handle vocabularies, it has been decided to write a module enabling validation of content using IMS VDEX files.  To achieve this, a VDEXValidator class has been created that can be used to select elements from the XCRI-CAP 1.2 document and validate them against a specific VDEX file.  If any of the information doesn’t match the expected vocabulary, feedback can be highlighted to the user in the standard manner.  [Please note that VDEX files will be used to validate vocabularies using the online validator but VDEX files are not required to use vocabularies within the XCRI-CAP standard itself.]

To test this, the <studyMode> element from the XCRI-CAP 1.2 standard has been converted to VDEX format and the validation rules have been updated to use this VDEX file rather than having the values hard-coded within the validation module itself.  The process has also started for other enumerated values within the XCRI-CAP standard (attendancePattern, attendanceMode, etc), as well as other vocabularies such as JACS.
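As a sketch of the idea (in Python rather than the validator's C#): a VDEX file is essentially a list of terms, so vocabulary validation boils down to collecting the allowed term identifiers and checking element values against that set. The cut-down VDEX below and its values are illustrative, not the official studyMode vocabulary.

```python
import xml.etree.ElementTree as ET

# A cut-down vocabulary in IMS VDEX form; the real validator loads full
# VDEX files, and the identifiers below are examples only.
VDEX_SAMPLE = """<vdex xmlns="http://www.imsglobal.org/xsd/imsvdex_v1p0">
  <term><termIdentifier>FT</termIdentifier><caption><langstring>Full time</langstring></caption></term>
  <term><termIdentifier>PT</termIdentifier><caption><langstring>Part time</langstring></caption></term>
</vdex>"""

VDEX_NS = "{http://www.imsglobal.org/xsd/imsvdex_v1p0}"

def load_vocabulary(vdex_xml):
    """Collect the allowed term identifiers from a VDEX document."""
    root = ET.fromstring(vdex_xml)
    return {t.findtext(f"{VDEX_NS}termIdentifier") for t in root.iter(f"{VDEX_NS}term")}

def validate_values(values, vocabulary):
    """Return the values that do not appear in the vocabulary."""
    return [v for v in values if v not in vocabulary]

vocab = load_vocabulary(VDEX_SAMPLE)
print(validate_values(["FT", "WE"], vocab))  # 'WE' is not in the sample vocabulary
```

Because the vocabulary is data rather than code, swapping in a different VDEX file (JACS, attendanceMode, and so on) needs no change to the validation logic itself.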

If you are implementing an XCRI-CAP 1.2 feed on behalf of your institution and would like more information on the recommended vocabularies to use, please contact the working and support groups on the XCRI forum.

The XCRI-CAP 1.2 validation library and an online tool to validate feeds are expected to be completed by the end of February 2012.

Validation: a first public look

February 13th, 2012

This is the fourth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visiting the validator blog post list. My aim is to post a new blog post every week until the development is complete, then at least once every month to document both the usage of the validator as well as any community-highlighted modifications or issues.

Firstly: an update on the validation code’s changes since last week’s blog post:

  • XML rulebase
    Thanks to some invaluable feedback from Alan Paull and co., some modifications have been highlighted and are currently being implemented. These changes encompass the feedback shown to the user, the XCRI-CAP 1.2 standard, and the work the Course Data Programme is doing. Approximately 40% of the changes have been done, with the others hopefully to be completed over the next week.
  • API
    In discussion with K-Int, the team producing the XCRI-CAP 1.2 aggregator, a basic API structure has been agreed and a first iteration implemented and provided across to K-Int. Calls to the validator are made using HTTP POST requests, with validation results returned in a JSON format.
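In outline, a client of such an API posts the feed and inspects the JSON that comes back. The endpoint URL, field names and response shape below are placeholders of my own (the real API structure is still being agreed with K-Int), and the sketch is Python rather than the validator's C#:

```python
import json
import urllib.request

def build_validation_request(xml_feed, endpoint="http://example.org/api/validate"):
    """Build (but do not send) an HTTP POST request carrying the feed.
    The endpoint URL and content type here are illustrative, not the
    validator's published API."""
    return urllib.request.Request(
        endpoint,
        data=xml_feed.encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/xml"},
    )

def parse_validation_response(response_text):
    """Decode a JSON body of the general shape described above and
    return only the failing results."""
    results = json.loads(response_text)
    return [r for r in results.get("results", []) if r.get("status") != "pass"]

req = build_validation_request("<catalog/>")
print(req.get_method(), req.get_header("Content-type"))

# A mocked-up response body, for illustration only:
mock = '{"results": [{"rule": "r1", "status": "pass"}, {"rule": "r2", "status": "fail"}]}'
print(parse_validation_response(mock))
```

The appeal of POST-plus-JSON is that any aggregator, whatever its platform, can call the validator with nothing more than an HTTP client.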

Secondly, I’d like to take this opportunity to ask for feedback on the UI and UX of the validator. The current development version of the validator is available at http://xcricapvalidator.apphb.com. Please note that this is a development version of the validator and is subject to change. If you would like to comment on the design or usability of the validator, please either comment on this blog post or feel free to contact me directly @CraigHawker.

The XCRI-CAP 1.2 validation library and an online tool to validate feeds are expected to be completed by the end of February 2012.

State of the Nation

February 6th, 2012

This is the third in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visiting the validator blog post list. My aim is to post a new blog post every week until the development is complete, then at least once every month to document both the usage of the validator as well as any community-highlighted modifications or issues.

This week the blog post will summarise the current state of the project and, hopefully, point people to resources they can use to research elements further. The development of the validator itself is broken down into several key sections and an overview of each section’s current status is given below.

  • Validation code (~95% done)
    The underlying core validation code is almost complete, subject to any issues that come up in testing or any additional validators that are thrown up by the rulebase’s QA processes. Automated testing of the core validation code is being done using the MSTest unit test projects within the solution. The current code can be cloned from the Google Code repository at http://code.google.com/p/xcricap-validator/.
  • XML rulebase (~90% complete)
    The XML rulebase has undergone its first iteration and the rules that it validates are currently being checked by other members of the team. Once the rules themselves have been agreed, work will start on the guidance that relates to the rules. The current version of the rulebase can be downloaded from http://code.google.com/p/xcricap-validator/source/browse/src/XCRI.Validation/xml%20files/ValidationModules/XCRICAP12.xml.
  • Web interface (80% complete)
    The web interface – to use the validation code online – is currently in its first iteration. The basic approach with the web site is to be simple and clean, allowing developers to easily get at the issues with their feeds, and feedback so far has been positive. The current interface has been tested in the latest builds of Chrome and Firefox, as well as Internet Explorer 9 on a PC.
  • API (planning stage)
    A RESTful API is currently being developed to allow the aggregator to integrate with the validation code. In its first implementation the API will support returning validation results via JSON. The API design has been planned with the aggregator team and the plan is for a mock implementation to be available to them within the next 7 days. The API itself will utilise the core validation code and the XML rulebase.

The XCRI-CAP 1.2 validation library and an online tool to validate feeds are expected to be completed by the end of February 2012.

Validation Library structure

January 30th, 2012

This is the second in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visiting the validator blog post list. My aim is to post a new blog post every week until the development is complete, then at least once every month to document both the usage of the validator as well as any community-highlighted modifications or issues.

In this blog post I have decided to concentrate on the top-level structure of the validation routine, with the aim of helping users understand the basic terminology used and how it fits within the project as a whole. The validator library is built as a .NET 4 library using C# and builds upon the base validation functionality exposed by the .NET Framework itself.

Validation of an XCRI-CAP 1.2 feed is done in three separate ways: by ensuring the document is a valid XML document, that the XML document is valid according to the XML schemas involved, and that the XML document passes some additional rule-based validation. At each level, every exception captured is presented back to the user along with guidance on how to resolve the issue.

  • The first level of validation ensures that the document itself is a valid XML document. This captures issues such as incorrectly escaped characters like “>” and “<” within the document, incorrect tag nesting and undeclared namespace prefix issues.
  • The second level of validation ensures that the document is valid according to the XML Schemas that it references. The XML Schema itself for XCRI-CAP 1.2 can be found on Google Code and the XCRI.co.uk website (note, though, that the XML Schema documents are interpretations of the specification not the specification itself – the specification is found on the XCRI wiki). This level of validation highlights, for example, invalidly used elements or elements that are used that are in the wrong namespaces.
  • The third level of validation runs various validation rules over the document to ensure both compliance with the specification (above and beyond that enforced by the XML Schemas), and to aim to ensure a consistency of information within the feed itself. The current document can be found within the validator code repository.
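The three levels above can be sketched roughly as a pipeline (in Python for brevity; the real validator is a C#/.NET library). Level 2 needs a schema-aware XML parser, so it is only stubbed here, and the sample rule is a hypothetical one of my own:

```python
import xml.etree.ElementTree as ET

def validate_feed(xml_text, rules):
    """Run the three validation levels in order. Level 2 (XML Schema)
    needs a schema-aware parser such as lxml's XMLSchema, so it is
    stubbed here; this is an illustrative sketch, not the validator's
    actual code."""
    # Level 1: is the document well-formed XML at all?
    try:
        doc = ET.fromstring(xml_text)
    except ET.ParseError as e:
        return [f"Not well-formed XML: {e}"]
    # Level 2 would go here: validate `doc` against the XCRI-CAP 1.2 schema.
    # Level 3: rule-based checks over the parsed document.
    return [description for description, check in rules if not check(doc)]

# One hypothetical rule: the root element should carry a 'generated' attribute.
rules = [("root element is missing the 'generated' attribute",
          lambda doc: doc.get("generated") is not None)]

print(validate_feed("<catalog>broken", rules))                    # fails level 1
print(validate_feed("<catalog/>", rules))                         # fails level 3
print(validate_feed('<catalog generated="2012-02-06"/>', rules))  # passes
```

Note that the levels short-circuit: there is no point running schema or rule checks over a document that is not well-formed in the first place.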

The rule base used in the validation is fed in from the official XCRI-CAP 1.2 standard, from guidance issued via associated JISC-funded projects, and from the wider XCRI-CAP community.

The XCRI-CAP 1.2 validation library and an online tool to validate feeds are expected to be completed by the end of February 2012.

XCRI-CAP 1.2 validation – the first steps

January 23rd, 2012

This is the first in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. My aim is to post a new blog post every week until the development is complete, then at least once every month to document both the usage of the validator as well as any community-highlighted modifications or issues.

For those unaware of the validator project, the purpose of the validator is to aim to improve the validity and consistency of information contained within XCRI-CAP 1.2 feeds that are currently being produced, including those by institutions involved in the JISC “Course data: making the most of course information” call.

The validator will build upon a community-produced XCRI-CAP Online Validator, with the validation rules being driven by key individuals and groups involved in the XCRI community. Feeds that fail validation will be given guidance on the reason for each failing test and recommendations on altering the feed to resolve the issue.

The key deliverables of the project are:

  1. The production of an open-source validation library and rulebase, and
  2. A version of the validation tool available online to anyone who wants to use it

Development of the validator started in December 2011 and is expected to be complete at the end of February 2012. The online implementation of the library will remain available (at a minimum) through until March 2013.

How to get multiple Gmail calendars on Windows Phone

December 30th, 2011

One of the most frustrating things I’ve found with Windows Phone is that – like the first iterations of the iPhone – it will not synchronise multiple calendars with Google. This is incredibly annoying when you have multiple people all with Google calendars which are perfectly visible through the web interface but cannot be seen on the device.

How happy was I to find a post at gigaom by @kevinctofel on how to enable them. The irony is that it’s exactly the same steps that you had to undertake with the first iPhone devices.

I made some small tweaks to his post as I don’t have Safari, but the steps are basically the same:

  • Download and install Firefox (if you don’t already have it)
    This is so that we can use an extension to change the user agent. You can do it other ways if you fancy; this is the easiest.
  • Download the “User Agent Switcher” extension for Firefox. This, basically, allows you to pretend to be an iPhone to the server. Yep, you read that right, we’re neutering ourselves.
  • Open Firefox and go to http://m.google.com/sync and sign in with your Google account
  • Within Firefox, press “alt” (so you get the menu at the top of the screen) and choose “Tools”, then “User Agent”, then “iPhone 3.0”
  • Click on the Firefox tab at the top-left of the screen and choose Options
  • Go to the “Content” tab and untick “Enable JavaScript” (we’re really neutered now), then click “OK”
  • Click on your Windows Phone in the list of devices
  • Tick all the calendars you’d like to see on your phone and click “Save”
  • Go to the Settings section on your phone, then Email + Accounts, and tap and hold on your Gmail account. Choose “sync” from the context menu.
  • Open up your calendar and you should see entries from the calendars in it. You can then further turn calendars off within the settings
  • Remember to turn JavaScript back on and change your user agent back to the default!
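For the curious, all the User Agent Switcher extension is doing is changing the User-Agent header the browser sends, so Google's server thinks it is talking to an iPhone. The same trick, sketched in Python (the user-agent string is abbreviated from memory, so treat it as illustrative):

```python
import urllib.request

# An iPhone 3.0-era user-agent string (abbreviated); the extension sends
# the full Safari string, this is just for illustration.
IPHONE_UA = ("Mozilla/5.0 (iPhone; U; CPU iPhone OS 3_0 like Mac OS X) "
             "AppleWebKit/528.18 (KHTML, like Gecko) Version/4.0 Mobile Safari/528.16")

def request_as_iphone(url):
    """Build a request that identifies itself as an iPhone, which is what
    persuades m.google.com/sync to show the multi-calendar page."""
    return urllib.request.Request(url, headers={"User-Agent": IPHONE_UA})

req = request_as_iphone("http://m.google.com/sync")
print(req.get_header("User-agent"))
```

The server never knows the difference, which is exactly why a browser extension is enough to unlock the page.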

These steps are pretty much identical to those within the gigaom article, just customised for someone who doesn’t have Safari on their machine. Full credit goes to @kevinctofel for finding the solution!

Eurogamer Expo 2011 (live!)

September 22nd, 2011

Live pictures from the Eurogamer Expo 2011

(Should update as I post while I’m out, hopefully!)

A break from the norm – Zune Pass

June 4th, 2011

Disclaimer: I was provided with a month’s free trial for the Zune service. I am not a member of the press, I just happened to be in the right place at the right time and got one for free. Anyone can (currently) sign up for a 14-day trial through the Zune website. I have not received any other payment or service to write this blog post.

I wanted to post this early on, whilst my impressions were still vivid. I’ve only activated my Zune Pass today and wanted to write down my initial thoughts. I plan on following up this blog post soon after I’ve had a better play with the service.

The beginning: Spotify

For a couple of years now I’ve been an avid user of Spotify. Not only an avid user, but I’ve vocalised the virtues of their service to a large number of people, both within the software development arena and outside.

Spotify’s business model suited me. I created a number of playlists, broadly categorised by genre, and would typically listen to them through large headphones whilst trying to drown out the normal office banter. However, I’ve always used Spotify Free. I always found the adverts relevant, clever and, in some cases, exceptionally geolocalised. I didn’t mind them for the sake of legally listening to music.

However, over the past few months it’s become more apparent that adaptations to Spotify’s business model were not moving in the same direction as my music consumption habits.

The end result of Spotify’s shift in direction was them stopping their free service on the 1st of May. I’m not going to attempt to qualify why this happened, just that it did. Their new “free” service means a maximum of 10 hours per month, 5 unique plays of each song. Ever. So if I am quite happy listening to Bat Out Of Hell, I can listen to it 5 times in total and then no more. Even with one of the really-long edits, you’d be pushing to get an hour of listening. With adverts, of course. Effectively, to use their service in the way which works for me, I now have to pay. And that’s fine, because that’s their business model.

Subscription-based services

The result of their approach is that their service now has to compete with other subscription-based models out there, most notably for me being Microsoft’s Zune service. All of these services differ slightly in price and functionality, but basically they boil down to you paying a monthly rental fee to be able to listen to music that you may not already own. The caveat is that you never own these tracks and they become unusable once you stop paying your subscription fee (I am aware that in the US you get to permanently keep a number of tracks per month – that isn’t the case in the UK).

In addition to their new “open” model, Spotify have two pay-for options:

  1. Spotify Unlimited (£4.99 per month)
    Unlimited streaming of music, no adverts
  2. Spotify Premium (£9.99 per month)
    Unlimited streaming of music, no adverts, plus offline track availability and access from a mobile device

Microsoft’s Zune service has one pay-for option: £8.99 per month for unlimited streaming of music and no adverts. Once you have a Zune Pass, it can be used from a number of devices logged in with the same Windows Live ID (it looks like 3 PCs, plus an unknown number of other devices); that is any PC with the Zune software, any Windows Phone, or any XBox. Microsoft also offer some discounted pricing if you pre-purchase more than one month at a time. At the time of writing this gave approximately a 10% discount if you pre-purchased a year rather than paying month-by-month.

The Future? Microsoft Zune

Forget the Zune devices, this is different

In the US, Microsoft had a number of Zune devices. These devices made a small impact upon the exploding iPod market but never got them any significant market share. But forget those devices because the Zune name is all that ties the two together; Zune is now Microsoft’s overarching brand for entertainment consumption, whether that be movies or music, through pretty much any of their devices.

On the PC

The Zune desktop software is available for PCs running XP SP3 upwards (so that includes Vista and Windows 7), the Zune software is what powers the Music and Movies hub on Windows Phone 7, and the Zune service is what powers music and movie (playback and rental) through the XBox 360. All of these services are tied together through Windows Live. As the main graphic on the Zune website insinuates, this really is the “three screen” service they’ve been trying to push for the past few years.

What’s virtually impossible to get across in words is the fluidity in the interface. Whilst I’ve used the Zune software on occasion to sync my Windows Phone, I hadn’t seen their Now Playing screen before using the Zune Pass and it’s a marvel. If you’ve ever used Windows Media Centre then you’ll be aware of the album art wall that scrolls album art behind now-playing music. The version within Zune is like that but on steroids; it’s a cross between that and a professionally-produced film. As the album art wall fades out, it’s replaced by subtly-moving pictures of the artists, fading professionally between shots. Whilst this happens, cleverly-faded text overlays the graphics at angles showing various information. It’s got to be seen to be believed. It’s far from the metro-inspired minimalistic take of Media Centre but it’s exceptionally consumer-orientated.

That said, the software itself is not without its failings. The navigation, once you first start the software, can be confusing. Zune Pass blurs the line between which tracks are your own and which have been downloaded (but you’ll lose if/when you stop the Zune Pass). With a track listing of hundreds (if not thousands), I’m not sure what the Quick Play tab is really useful for. I end up going straight for the Collection (which allows me to access my current albums) or the MarketPlace to get new tracks via Zune Pass.

On Windows Phone 7

Having extolled the virtues of the desktop application’s graphical interface, I found the Windows Phone client a much more pedestrian affair. This may change with the upcoming Mango release, but at the moment it’s primarily based around lists of tracks arranged by artist, album or genre, or discovered through searches. The sound playback is good, although arguably highly dependent upon both the device you’re using and the quality of your headphones.

Don’t get me wrong, it’s a perfectly adequate music player – very akin to what I’ve seen of both iTunes and the iPhone Spotify application, but it’s missing the wow factor of the desktop application. But that’s not what makes me feel the phone version is missing a trick. What’s missing here is Smart DJ (see below) and, if you have a Zune Pass, it’s the one thing you’ll miss.

On the XBox

I can’t comment on the XBox as I don’t have one. I have a feeling that that’ll have to change one of these days, but not just yet.

The Zune service


Signup/Activation was very straight-forward. I simply went to the Zune website, logged in with my Windows Live ID, and clicked the “free Zune Pass trial” link. Once I’d activated that, I put in an additional code for a month’s Zune Pass. All went through simply and I was greeted with a lovely “thank you” page afterwards.

Unfortunately, getting to use the Zune Pass wasn’t quite that straight-forward.

Like most people after a “purchase”, the first thing you want to do is to fire it up and have a play. Unfortunately it wasn’t that simple. After doing the above, the tracks/albums within the Zune software and the phone client were still only giving me preview/purchase options. Even after closing the software and logging in again I couldn’t get it to work. After logging out/in a few times, the Zune client asked me to agree to some new terms and conditions. I assume this was due to the newly-enabled Zune Pass. After this it was fine. The phone, however, required a reboot.

First impressions of the Marketplace

The Marketplace is very good. I searched for a number of new and older artists and found their music freely available, as well as full back-catalogues. The simplicity of clicking “download” next to an album, downloading it to my collection, then working with it in the same way as other files was very good. If you don’t want to search for a specific album or artist then you can choose to drill down by genre or go into “mixtapes” – very similar to Spotify’s shared playlists. However, let’s be honest, this was pretty much exactly the same as Spotify and other music services out there.

My moment of clarity – Smart DJ

Smart DJ is similar to iTunes’ Genius system. There’ve been a number of articles that compare Smart DJ to iTunes’ Genius so I’ll not go into that in too much detail, just to say that the power that the Zune Pass brings to this technology is fantastic.

Simply choose any song, artist or album and select “Smart DJ”. Smart DJ will automatically create you a playlist of similar tracks, combining both the music you own and music it will automatically stream down to your computer. If you like the playlist, clicking a couple of buttons will allow you to download it en masse for future use.

Out and about

I don’t use music out and about much. A combination of two young children and the fact I don’t use public transport to/from work means that my time spent in a situation where I can listen to music is minimal. However, I plan a follow-up blog post and I’ll try and find a way to use it.

The problem

What I asked myself was “Do I listen to that much music?”. £8.99 is about the same as the cost of an album per month, which is exceptionally good value for money for the literally endless amount of music that I could listen to for that amount. Let me put it in context: if I have my headphones on for 5 hours per day, 5 days a week then that’s 100 hours – give or take – per month. At 9p per hour, that’s far better value than virtually any other entertainment medium.
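The back-of-an-envelope sums above, for anyone checking:

```python
# Rough cost-per-hour arithmetic from the paragraph above.
monthly_fee_pence = 899           # £8.99 Zune Pass
hours_per_month = 5 * 5 * 4       # 5 h/day, 5 days/week, ~4 weeks
cost_per_hour = monthly_fee_pence / hours_per_month
print(f"{hours_per_month} hours at ~{cost_per_hour:.0f}p per hour")
```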

Using these services, however, what do I end up with? If I pay for a year then it costs around £90. If I then stop paying, I have nothing. I have no new music. To my knowledge, at least in the UK, this is the same for all equivalent services. Note that this is different in the US on Zune where you can choose to keep 10 songs every month. If this was available in the UK then this would be a huge stick with which to beat Spotify (which has major market share).

Will I subscribe and, if so, with whom?

Well, the “with whom” part has a clear answer. For me, with the devices I have, Spotify does not work in the way I would like. That’s not a criticism of their service at all but, even with a Windows Phone client, their service would only provide exactly the same functionality as Zune, at £1 per month dearer. For me, Zune is a clear winner. It works with the devices I have (say what you want about what that means about me!), and it works well. I’d happily try Spotify again when they have a Windows Phone client, if I have the option.

Will I subscribe? I don’t know. The quality, quantity and ubiquity of the service – for probably significantly less than 9p per hour – can’t be argued with. I keep asking myself the same question, though: for the way in which I listen to music, does this subscription model work for me? If I could keep 10 tracks per month then there’d be no question – I’d immediately subscribe. For me, that’s probably the number of new tracks that I want to keep each month. I would effectively be able to sample music wherever I went but choose each set that I wanted and keep it. It would be a have-your-cake-and-eat-it scenario.

But have Microsoft missed a trick?

Hands up if you’re a developer. Keep your hand up if you often put your headphones on and listen to music in order to concentrate. I’ll bet that’s a large chunk of the first batch. I’ve seen this a lot, particularly in small companies where development departments are not detached from other business functions such as sales. So much so that I’ve started to hear about companies offering subscriptions to streaming music services – such as Spotify or Zune Pass – as perks for developers.

Bearing in mind that Microsoft already has literally thousands of Microsoft Partners subscribed, how easy would it be for Microsoft to roll out a “Zune Pass (Microsoft Partner Subscription)” at a reduced rate? Companies could easily manage access to the system in a similar way to MSDN/Partner entitlements and be charged on a SPLA-style basis.

What’s next?

I still have a couple of weeks left on the Zune Pass. I’ll continue to trial it and hopefully write a follow-up post when I’ve decided what to do.

The “EU Cookie Directive” (2009/136/EC) and you.

May 23rd, 2011

Prompted primarily by a customer enquiry, I recently posted on Twitter asking my followers whether they had any knowledge of industry best practice with respect to the EU Cookie Directive that’s due to come into force on the 26th May.  The answer, unfortunately, was a resounding “no”.

This is in large part due to the fact that the Information Commissioner’s Office (ICO) – an independent authority which aims to uphold personal information rights – has not given firm guidance on its interpretation of the directive.  The information that it has published falls short of identifying specific methodologies that may or may not fall foul of this directive.

What follows is my understanding of the guidance thus far.  Please note that:

  • I am not a lawyer
    If you are reading this and want a legal interpretation of the law, then I suggest you find someone legally trained and pay for their comments.
  • Blog posts almost immediately become out of date
    By the time I hit “publish” on this post, someone will come along with another interpretation, or the ICO may publish improved guidance.  Read around the subject.

What exactly is changing

The basic rule that’s changing is that storing information on a user’s computer requires an explicit opt-in from the user.  The user must be given the option of not having that information placed on their computer.  Placing information on a user’s computer without a conscious and informed decision by the user would be breaking this directive.

So, cookies are out then?

It’s important to highlight that the EU directive makes no explicit differentiation between any local storage mechanism, whether they be cookies, Flash Local Storage or any other mechanism. Specifically, the ICO guidance says:

The Regulations also apply to similar technologies for storing information. This could include, for example, Locally Stored Objects (commonly referred to as “Flash Cookies”).

If you store any information on a user’s machine then this directive almost certainly applies.  The industry has concerned itself mostly with HTTP cookies, but they are not the only technology affected.  Even if you are using a “client-side” technology such as Flash, you may well still be affected.  My recommendation is not to focus on eliminating cookies but to focus on any technology that places information that could affect a user’s privacy on the client machine.

How will this opt-in work?

Unfortunately the EU haven’t stated how this opt-in should work. They’ve defined that the user must give their consent to it happening, but shied away from exactly how that works. Various suggestions have been made, such as explicit agreement to revised Terms and Conditions (note: you can’t just hide it away – it has to be explicit) or the use of popup windows to inform the user of what will happen.
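Whatever the consent mechanism ends up being, the server-side consequence is the same: don’t emit a Set-Cookie header until the user has explicitly opted in. A minimal sketch using the Python standard library (the header plumbing around it is invented for illustration, not taken from any particular framework):

```python
from http.cookies import SimpleCookie

def build_response_headers(consent_given, session_id):
    """Only emit a Set-Cookie header once the user has explicitly opted
    in; a minimal sketch of the consent gate the directive requires,
    not a full implementation."""
    headers = [("Content-Type", "text/html")]
    if consent_given:
        cookie = SimpleCookie()
        cookie["session"] = session_id
        cookie["session"]["path"] = "/"
        headers.append(("Set-Cookie", cookie["session"].OutputString()))
    return headers

print(build_response_headers(False, "abc123"))  # no cookie without opt-in
print(build_response_headers(True, "abc123"))
```

The same gate would need to sit in front of any equivalent storage mechanism (Flash Local Storage and so on), since the directive is technology-neutral.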

The user’s browser accepts cookies, so they’ve opted in… Right?

Not in the ICO’s eyes. The theory is that this might become an option in the long term, but it currently is not.

This will ruin the user’s “flow”, what can I do about it?

<shrug />
At the moment, there is little industry agreement on how this can be accomplished without adversely affecting the way in which a user interacts with a site. My guess is that most websites will start to refrain from setting cookies unless the user has to log in. Or, more likely in the medium term, websites will not change until they are forced to by the law being enforced.

Are there any exclusions to the opt-in rule?

Yes, although they’re vague and open to interpretation. The basic exclusion is that cookies are allowed where they are technically required to complete an action the user has explicitly requested. In other words, if the cookie is required in order for a function to be completed, you don’t necessarily need opt-in. The directive’s worded in such a way that this is not a get-out clause for cookie use. An article that discusses specific exclusions is available from Jeremy Gordon.

Should we panic?

No. The ICO have stated that they expect this to be phased in. If they receive a complaint from a user then their first step will be to ask the site owner what analysis they have done. If you have done – or are in the process of doing – that analysis then you’ll be in a good position.

What should we do now?

The current ICO guidance is to identify what cookies (and other equivalent technologies) you use and to try to identify the impact their usage has on an individual’s privacy.

  1. Speak to your web developers/designers
    They will quite easily be able to give you an idea of what cookies – or Flash Local Storage, or equivalent technologies – are in use and which categories they fall into.  They should also be able to guide you on what impact these cookies may have on a user’s privacy.  Remember: this directive approaches the issue from the perspective of the user’s privacy, not how your current site – or business – works.  Long-term, you will be expected to change if the two are not compatible.
  2. Decide whether these cookies are required for the site to function (and, if so, in what capacity)
    If not, can a plan be devised to turn them off?
  3. Analyse whether these cookies impact the user’s privacy (and, if so, to what degree)
    The ICO guidance says that the directive is intended to improve the level of privacy for individuals who use the web, and it’s important to bear this in mind when analysing your current cookie usage; the more intrusive your use of cookies is, the more important it is that you have a plan to allow users to opt in (or, at the very least, opt out).
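Much of step 1 can be given a head start with automation: collect the `Set-Cookie` headers your site emits and flag anything carrying an `Expires` or `Max-Age` attribute, since those cookies persist on the user’s machine beyond the browser session and so deserve closer privacy scrutiny. A minimal sketch in Python – the header strings and the `classify_set_cookie` helper are illustrative, not part of any official tooling:

```python
from http.cookies import SimpleCookie

def classify_set_cookie(header: str) -> dict:
    """Classify each cookie in a Set-Cookie header as 'session' or 'persistent'.

    A cookie with an Expires or Max-Age attribute survives the browser
    session, so it remains stored on the user's machine long-term.
    """
    cookie = SimpleCookie()
    cookie.load(header)
    result = {}
    for name, morsel in cookie.items():
        persistent = bool(morsel["expires"]) or bool(morsel["max-age"])
        result[name] = "persistent" if persistent else "session"
    return result

# Example headers (illustrative only):
print(classify_set_cookie("sessionid=abc123; Path=/; HttpOnly"))
# {'sessionid': 'session'}
print(classify_set_cookie("tracker=xyz; Max-Age=31536000; Path=/"))
# {'tracker': 'persistent'}
```

This only tells you *which* cookies persist, not *why* they are set – you still need your developers for the analysis in steps 2 and 3.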

Is there a simple answer or fix – can we use another technology instead?

Typically there’s no simple “if we’d used technology X to do A instead of technology Y then we’d be okay now”. This is because the way in which web pages work – HTTP – is “stateless”. That means that if you visit the homepage of a website, then go to the checkout, there’s no explicit relationship between the two page visits. If you need to be able to track that user across the requests, you have to use something like cookies to achieve it.
If your analysis determines that you’re only using cookies to maintain session state then there may be some options depending upon what your website does:

  • Turn session state off
    Most web frameworks give you the option to turn off session state entirely. If your website doesn’t offer online purchasing or doesn’t require a login (or does, but only to administer it), then it’s possible that the solution is exceptionally simple – just turn them off.
  • Track session state using another technique
    Some web frameworks allow session state to be tracked using the URL, although this can have an ugly effect on your URLs.
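Tracking session state via the URL typically means appending a session identifier to every link the user can follow – the way Java servlet containers do with `;jsessionid=…`. A rough sketch of the idea, using a hypothetical `sid` query parameter (in practice your framework would do this rewriting for you):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_session_to_url(url: str, session_id: str) -> str:
    """Rewrite a URL so it carries the session identifier in its query string."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["sid"] = session_id  # 'sid' is an illustrative parameter name
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_session_to_url("https://example.com/checkout", "abc123"))
# https://example.com/checkout?sid=abc123
```

Beyond the ugly URLs, be aware that identifiers in URLs leak via bookmarks, copy-and-paste and `Referer` headers – one of the reasons cookies became the standard mechanism in the first place.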

But wait, you’re wrong!

I may well be – I am not a lawyer, and the industry hasn’t come up with viable guidance for clients. That’s the point of this blog post – to hopefully start a discussion. If you have a comment, guidance, or would like to berate my interpretation of anything, please comment!

Update: Other posts on this issue