
Windows Phone URI associations, emails and text messages

September 26th, 2013

Last night I wasted far too much of my time trying to debug an issue with Windows Phone URI associations and emails containing links using these URIs. Far too much time.

URI associations, for those who haven’t used them, are the ability to register a URI prefix so that any links using that prefix fire up your application. The possibilities for this are endless and they are the basis of app-to-app communication within the current Windows Phone 8 release. For more information on the basics, there’s a great MSDN article on how to register both URI and file type associations. I’m not going to talk about the basics because that article covers them all. Instead I’m going to concentrate on what caused me to lose hours.

Over the last few evenings, I’ve been mocking up a Windows Phone user interface for an application that I use daily. This is just in a proof of concept state at the moment but I’ve been pulling in various functions available within Windows Phone that would be of use to this application. One of them, of course, was using URI and file type association to show the correct item.

In testing the URI association, I sent myself an email to my Gmail address containing a link that used my prefix. The HTML for the link would have been something like <a href="my-prefix://{guid}">click me!</a>. I diligently added my Gmail account into the emulator’s email accounts, downloaded the email and clicked the link. The application popped up and was passed the GUID. Happy days. Check-in, compile, publish XAP to beta participants, email them to tell them of the wonderful functionality, celebratory curry.

Everything is good in the world.

Now, I’m going to ignore that I should have tested this on my device. Yes, I should have. No, I didn’t.

I get an email that the beta’s been updated, I update and go to check it out. However, opening the email on my phone didn’t show my link properly. Instead of being shown in blue and underlined, the link was shown in black. Clicking it did nothing. This is the same email, on the same Gmail account, that I tested within the emulator. Even more bizarre is that I got emails from some people saying it worked and others saying it didn’t.

To shorten the story significantly, after a lot of Googling and swearing, I posted on Stack Exchange. It was Matt Lacey that came to the rescue.

Because of the age of my device, it communicates with Gmail via the Exchange protocol. When the emulator was pointed at Gmail, Google saw it as a new device and forced it down the IMAP route. On the phone, some Exchange policies were being applied; via IMAP they weren’t. I confirmed this by adding Gmail to my device a second time, but only as an IMAP source. The links functioned perfectly, just as they did in the emulator.

First finding: retrieving messages via "Exchange" may do funny things to links containing non-standard URI prefixes.

What’s even stranger is that sending an email with the correct HTML from my work Exchange account to my Gmail account, then receiving it via IMAP, also broke the link. The link appeared correctly but clicking it stated that the link didn’t work on the phone, and copying the URL just copied the string “inbox”. When I sent the same email to myself from my own Gmail account, the link worked.

Second finding: even if Exchange isn’t directly involved in the send/receive, if it’s been involved somewhere then it may break links containing non-standard URI prefixes.

An interesting footnote to add is that Windows Phone will automatically identify links within textual content and make them clickable for users. Most people will have seen this if they receive a text message with a web address or something similar. However, whatever Microsoft are using to identify that string doesn’t seem to work if the URI prefix contains a hyphen – geo:lat,lng will be converted, but my-prefix:abc won’t.

Third finding: if you’re sending links to people containing your non-standard URI prefix then either avoid hyphens in the prefix name (note that several Nokia URI prefixes contain hyphens), or ensure you send HTML emails with the link correctly in the A element’s href attribute.
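RFC 3986 actually permits hyphens in a scheme name (scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )), so the auto-linker appears stricter than the spec. The sketch below, in Python for brevity, contrasts the RFC grammar with a guess at the kind of narrower pattern that would miss hyphenated prefixes – the naive pattern is purely illustrative, not Microsoft’s actual implementation:

```python
import re

# RFC 3986 scheme grammar: a letter followed by letters, digits, "+", "-" or "."
RFC3986_SCHEME = re.compile(r'^[A-Za-z][A-Za-z0-9+.-]*:')

# Hypothetical naive auto-linker pattern: letters and digits only, no hyphens
NAIVE_SCHEME = re.compile(r'^[A-Za-z][A-Za-z0-9]*:')

for uri in ('geo:52.5,1.9', 'my-prefix:abc'):
    print(uri, bool(RFC3986_SCHEME.match(uri)), bool(NAIVE_SCHEME.match(uri)))
```

Both patterns accept geo:lat,lng, but only the RFC-conformant one accepts my-prefix:abc – mirroring the behaviour described above.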

I hope that helps someone!

XCRI validation: An integrated approach

February 5th, 2013

A downloadable version of the code is attached to this blog post. It includes not only an executable version of the code, but also a copy of the validation code at this point in time and the source to the executable version.

I was lucky to be able to attend the #coursedata meetup last Tuesday in Aston and it was fantastic to meet the institutions that are creating XCRI feeds and to see those feeds starting to trickle into the demonstrator applications that JISC have funded.

During the technical sessions I was approached separately by two individuals who were interested in integrating the validator into their build process. Whilst the online validator isn’t the best solution for these scenarios, there’s absolutely no reason why the validator code can’t be used to achieve the same end result. I therefore decided to write this blog post to explain how the validator can be compiled into an executable that could be used as part of an automated process, and to include a downloadable version for people who may want to do just that.

This post builds upon a version I built to help an institution that was having issues with the validator timing out on long-running queries. The code itself currently outputs an HTML file – without styling – for the convenience of the person who was using it, but there’s no reason why this couldn’t be altered to output XML or something else that could be queried by a separate application. Also note that I haven’t touched on the actual integration of this with your build systems (as the variation would be too significant to cover).

Making the validator work locally really only involves three steps:

  1. Organising some dependencies
  2. Calling the validation routines
  3. Outputting the results

I’m not going to go through every single line of code (there’s the attachment if you really want to), but the important elements are detailed below.

Organising some dependencies

This code is within the “Program” class within the XCRI.Validator.App project
The validation code itself requires some other elements in order to work. In addition to some objects required for the program to function, it also takes a “Validation Module” (an XML file containing rules that are run on the input file) and an “Interpretation Module” (an XML file that contains interpretations for common issues thrown by the built-in .NET XML validation). These two files were created as part of the XCRI-CAP 1.2 validator project and are included both within the Google Code repository and also within the downloadable document.

Calling the validation routines

This code is within the Run method of the “ValidateRunner” class within the XCRI.Validator.App project
Once the validation module, interpretation module and input file (along with all the objects required for the code to run) have been collated, the actual validation code is run. This is handled by the Validate method of the applicable ValidationService and will download the referenced XSD files and use them, along with the XML rulebase, to validate the file.

Outputting the results

This code is within the Output method of the “ValidateRunner” class within the XCRI.Validator.App project
Once the validation data is collated, it is output to an HTML file named “output” within the folder that the application is running in. In an automated build system it would make more sense to filter out the rules that passed and those that are only recommendations, and to concentrate solely on the Exceptions and applicable Warnings. These could either be printed to screen or written to a structured data file for another process to pick up and deal with.
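The filtering step described above might look something like the following Python sketch. Note that the real validator emits an HTML report, so the result structure and severity names here are invented for illustration only:

```python
# Hypothetical post-processing of validation results in a build step:
# keep only the severities that should break (or warn on) the build.
# The dictionary shape and severity labels are assumptions, not the
# validator's real output format.
results = [
    {"severity": "Pass", "message": "Root element is catalog"},
    {"severity": "Recommendation", "message": "Consider adding @generated"},
    {"severity": "Warning", "message": "No courses found"},
    {"severity": "Exception", "message": "Element 'provider' is missing"},
]

# Drop passes and recommendations; concentrate on Exceptions and Warnings
blocking = [r for r in results if r["severity"] in ("Exception", "Warning")]
for r in blocking:
    print(f'{r["severity"]}: {r["message"]}')

# A build would fail only on Exceptions
exit_code = 1 if any(r["severity"] == "Exception" for r in blocking) else 0
```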

Calling the executable

The executable takes one mandatory and two optional arguments:

  1. The input file (e.g. “input.xml”)
  2. The Validation Module file (e.g. “xcricap12.xml”). Defaults to “xml-files\ValidationModules\XCRICAP12.xml”.
  3. The Interpretation Module file (e.g. “interpretation.xml”). Defaults to “xml-files\XmlExceptionInterpretation.xml”.

An example call using all three parameters may be:

XCRI.Validator.App.exe -i input.xml -vm xcricap12.xml -im interpretation.xml

Remember to quote any paths that contain spaces.

12 TDDs of Christmas (day eight: ranges)

January 2nd, 2013

I wasn’t going to blog about today’s 12 TDDs of Christmas, but decided to for two reasons…

Firstly: today was the first time that I added a major refactor as part of the process.

Refactoring is simply another way of saying that you altered – hopefully for the better – some code that was previously written, without affecting the way it operates from an external perspective. The major benefit of having tests in your development is that they allow you to verify that any refactoring you do doesn’t have knock-on effects. The major benefit of test-driven development is that your tests should comprehensively cover the code, so that you have confidence in your work.

Today’s challenge – “Ranges” – is simply to create a Range object that represents a range of integers (then doubles). That object can then allow you to retrieve the minimum or maximum, or to retrieve the intersection between this range and another. I started by coding the integer range object, starting with constructor tests and moving onwards, fleshing out the functionality, then repeated the same iteration for the double range object. All in all, 26 tests. Then, as had been my plan, I decided to alter the code to run off a base class using generics. What do you know – all duplicated code reduced down to one generic class, with the original IntegerRange and DoubleRange containing just constructors with the expected signatures. And what’s more, all tests are green. Major refactor, major set of duplicated code removed, confidence I haven’t ballsed something too major up. That’s gold dust.
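The shape of the refactored design can be sketched as follows. This is Python (where duck typing stands in for the .NET generics mentioned above), and the names are illustrative rather than the actual classes from the repo:

```python
# A minimal sketch of a generic Range: works for ints and floats alike,
# just as the C# base class works for IntegerRange and DoubleRange.
class Range:
    def __init__(self, minimum, maximum):
        if minimum > maximum:
            raise ValueError("minimum must not exceed maximum")
        self.minimum = minimum
        self.maximum = maximum

    def intersect(self, other):
        """Return the overlapping Range, or None if the ranges are disjoint."""
        low = max(self.minimum, other.minimum)
        high = min(self.maximum, other.maximum)
        return Range(low, high) if low <= high else None

r = Range(1, 10).intersect(Range(5, 20))
print(r.minimum, r.maximum)  # -> 5 10
```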

The second reason I decided to blog was I’ve just read @martin_evans‘ blog post entitled The 12TDDs of Christmas – the Times Crossword for Programmers. In it he nicely posts that he likes my approach to the Number Names challenge. I like my approach too. I don’t like some of the code in it, but I like the approach. :-)

As always, the code’s available at the BitBucket repo: https://bitbucket.org/CraigHawker/12-tdds-of-christmas.

Items in this series:

  1. 12 TDDs of Christmas (day one: Calc Stats)
  2. 12 TDDs of Christmas (day two: Number Names)
  3. 12 TDDs of Christmas (day three: Mine Field)
  4. 12 TDDs of Christmas (day four: Monty Hall, day five: FizzBuzz and day six: Recently-Used List)
  5. 12 TDDs of Christmas (day seven: Template Engine)
  6. 12 TDDs of Christmas (day eight: ranges)

12 TDDs of Christmas (day seven: Template Engine)

January 1st, 2013

I think I’ve started to turn a corner with TDD now as I found today’s exercise much more simple to approach properly. In fact, I’ve started to embrace the element of TDD which has, in the past, put me off (more on that below). That’s not to say I’m a complete convert (let’s let work start back up again and see what “real-world” stuff I manage to approach this way), but I’m certainly very much more comfortable with it.

Today’s 12 TDDs of Christmas challenge (code on bitbucket) was to create a simple template engine that accepts a set of key-value pairs and replaces the key placeholders found within the template with the values. It also has to throw an exception if there’s a key placeholder but no matching key within the set passed. Quite simple, but I doubt anything I produce will be rivalling Razor just yet.

One of my pain-points with TDD – when I’ve tried it in the past – has been the concern that I should be implementing the minimum code required in order to pass each test. So, for a method that looks for multiple key placeholders and replaces them, after my first test (with one item), I should only replace a single item. Then, when a second test (with multiple keys) is added, I should alter the code to handle multiple items. I’ve always seen this as inefficient (adding a loop takes so little time) and have struggled to make peace with this when I’m writing code. After all, no-one particularly likes writing code multiple times for the sake of it and certainly not when you’re up against a tight deadline. I’ve always understood the reasoning for it, though.

Today was the first one where I’ve been quite strict. Not as strict as I perhaps should be, but far more than normal (e.g. this commit). I know it’s still something I need to work on (e.g. notice how I don’t only handle one key first time through), but I’m getting better at it.

Off the back of today’s challenge I fancied extending it slightly by passing in a custom expression extractor (i.e. a function that can be used to extract “firstName” from ${firstName}), rather than it being hard-coded. The resultant code change was actually quite clean.
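The whole exercise, including the custom extractor extension, fits in a few lines. Here’s a Python sketch (the repo’s code is C#, and these names are mine, not its):

```python
import re

# A sketch of the day-seven template engine: replace ${key} placeholders
# with supplied values, raising if a placeholder has no matching key.
# The extractor (which regex finds placeholders) is injectable rather
# than hard-coded - the extension described above.
def render(template, values, extractor=re.compile(r'\$\{(\w+)\}')):
    def replace(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"no value supplied for placeholder '{key}'")
        return str(values[key])
    return extractor.sub(replace, template)

print(render("Hello, ${firstName} ${lastName}!",
             {"firstName": "Ada", "lastName": "Lovelace"}))
# -> Hello, Ada Lovelace!
```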

I’m not there yet but I’m moving in the right direction.

Items in this series:

  1. 12 TDDs of Christmas (day one: Calc Stats)
  2. 12 TDDs of Christmas (day two: Number Names)
  3. 12 TDDs of Christmas (day three: Mine Field)
  4. 12 TDDs of Christmas (day four: Monty Hall, day five: FizzBuzz and day six: Recently-Used List)
  5. 12 TDDs of Christmas (day seven: Template Engine)
  6. 12 TDDs of Christmas (day eight: ranges)

12 TDDs of Christmas (day four: Monty Hall, day five: FizzBuzz and day six: Recently-Used List)

December 31st, 2012

I’ve just completed today’s 12 TDDs of Christmas and realised that I haven’t blogged about my experiences on days four through to today (day six). Perhaps the reasons for that will come out as I write this blog post. The code, as always, is at https://bitbucket.org/CraigHawker/12-tdds-of-christmas/.

It’s fair to say that I’m enjoying some elements of this and not others. I’m still finding it hard to approach something like Monty Hall in a way that makes sense to test from a “black box” perspective. For example, for Monty Hall, should which doors are chosen even be visible to the calling entity? If they shouldn’t then, arguably, all that should be public is a method marked public bool DidIWin(bool useSwitchStrategy). So we’re then down the road of creating proxies or objects that inherit from the Monty Hall object(s) just to expose bits to testing which I know is something people argue about a lot. Also, how do you write tests for something that’s supposed to be random (which door’s chosen by the user and which door’s opened by the host)? Suffice to say that I’m going to re-visit Monty Hall and re-write it, probably from the ground up, with a much better TDD approach. I’m chalking this down to the learning process and what I was trying to concentrate on (TDD) over what I wanted to achieve (the actual code).
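One common answer to the randomness question above is to inject the random source, so tests can seed it and simulations become repeatable. A rough Python sketch of that idea (none of this is the repo’s actual design):

```python
import random

# Monty Hall with an injected random source: tests can pass a seeded
# random.Random and get deterministic results.
def play(switch, rng):
    doors = [False, False, False]
    doors[rng.randrange(3)] = True          # car behind one door
    pick = rng.randrange(3)
    # host opens a goat door that isn't the player's pick
    opened = next(i for i in range(3) if i != pick and not doors[i])
    if switch:
        # switching means taking the remaining closed door
        pick = next(i for i in range(3) if i not in (pick, opened))
    return doors[pick]

rng = random.Random(42)
wins = sum(play(True, rng) for _ in range(10000))
print(wins)  # switching wins about two-thirds of the time
```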

Day five’s challenge (FizzBuzz) was much more testable but I found the challenge almost the complete opposite to the Monty Hall. This is code with a finite set of inputs (integers between 1 and 100) and an algorithm that borders on the obscenely simple. In fact, all I did was throw test data into an array and iterate over every single possible integer in the range and test it was correct.
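That table-driven approach can be sketched in a few lines of Python (the actual solution is C#; this is just the shape of it):

```python
# FizzBuzz plus the exhaustive test described above: a table of expected
# values, then a sweep over the entire finite input range.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

expected = {1: "1", 3: "Fizz", 5: "Buzz", 15: "FizzBuzz", 98: "98"}
for n, want in expected.items():
    assert fizzbuzz(n) == want

# every possible input is cheap to check when there are only 100 of them
for n in range(1, 101):
    out = fizzbuzz(n)
    assert out in ("Fizz", "Buzz", "FizzBuzz") or out == str(n)
```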

Day six – Recently-Used List – has been much more useful for me. It lacks the uncertainty and hidden complexity of the Monty Hall challenge but had a bit more to bite off than the FizzBuzz one. I found that this challenge was the one where I’ve started to actually approach the code from a “proper” TDD approach; by only implementing exactly what’s required in order to make each specific test go green, then adding layers of complexity as and when I write tests for each one. I actually quite enjoyed this one and extended the task slightly (only slightly) by making use of generics within .NET to allow the recently-used list to be used with any type rather than just strings. That did make me alter the specification slightly by treating nulls and empty strings separately but that’s because from a .NET perspective they are.
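For flavour, a recently-used list along the lines described might look like this in Python – most recent first, duplicates moved to the front rather than repeated, nulls rejected. The generic element type from the C# version simply falls away here, and the class is my sketch rather than the repo’s code:

```python
# A sketch of a recently-used list: adding an existing item moves it to
# the front; None is rejected (the null-handling case mentioned above).
class RecentlyUsedList:
    def __init__(self):
        self._items = []

    def add(self, item):
        if item is None:
            raise ValueError("null items are rejected")
        if item in self._items:
            self._items.remove(item)
        self._items.insert(0, item)

    def __getitem__(self, index):
        return self._items[index]

    def __len__(self):
        return len(self._items)

rul = RecentlyUsedList()
for name in ("one", "two", "one"):
    rul.add(name)
print(rul[0], rul[1], len(rul))  # -> one two 2
```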

What’s also nice is that I got some positive feedback from @martin_evans last night regarding my approach to the Number Names challenge. I’ve since realised there’s a few more tests that need to be added as there’re situations where commas and “and”s may not be 100% correct.

But that’s the point of this, right?

Items in this series:

  1. 12 TDDs of Christmas (day one: Calc Stats)
  2. 12 TDDs of Christmas (day two: Number Names)
  3. 12 TDDs of Christmas (day three: Mine Field)
  4. 12 TDDs of Christmas (day four: Monty Hall, day five: FizzBuzz and day six: Recently-Used List)
  5. 12 TDDs of Christmas (day seven: Template Engine)
  6. 12 TDDs of Christmas (day eight: ranges)

12 TDDs of Christmas (day three: Mine Field)

December 28th, 2012

I’m pretty much back on track for the 12 TDDs of Christmas as I’ve managed to find the time to complete day three’s task on the correct day (shock!). As with previous days, the code’s available at BitBucket: https://bitbucket.org/CraigHawker/12-tdds-of-christmas.

Today’s task was a nice little task: to create a “hint array” for a given input of mine fields (think the numbers that appear within Minesweeper). I altered the implementation slightly to make it a bit more .NET-y by passing in structured data rather than strings, but I don’t think this really affected the task much.

The task wasn’t complicated but it did allow me to approach the creation of tests, and therefore the creation of the actual methods, in a much more structured way. Unlike with the first task, this task allowed me to start with a zero-size array and move up through 1×1, through to 3×1 (with mines in different places), then 3×3, then further. At each step I tried to implement only the smallest amount of code to get the test passing. I didn’t quite accomplish this as my mind tends to go into overdrive and wants to implement a few more switch statements than are actually required at each stage, but I was much more forceful with myself than I normally would be.
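The hint computation itself can be sketched like so – Python rather than the repo’s C#, taking structured data (a grid of booleans) as described above; the mine marker and function name are my own choices:

```python
# For each non-mine cell, count the mines in the surrounding eight cells
# (the numbers Minesweeper shows); mines themselves are marked -1.
def hints(field):
    rows = len(field)
    cols = len(field[0]) if field else 0
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if field[r][c]:
                out[r][c] = -1  # mine marker
                continue
            # clamp the 3x3 neighbourhood to the grid edges
            out[r][c] = sum(
                field[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
            )
    return out

print(hints([[True, False, False],
             [False, False, False]]))
# -> [[-1, 1, 0], [1, 1, 0]]
```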

Thanks to @tjlytle who provided a few test cases at https://gist.github.com/4401086, too.

Items in this series:

  1. 12 TDDs of Christmas (day one: Calc Stats)
  2. 12 TDDs of Christmas (day two: Number Names)
  3. 12 TDDs of Christmas (day three: Mine Field)
  4. 12 TDDs of Christmas (day four: Monty Hall, day five: FizzBuzz and day six: Recently-Used List)
  5. 12 TDDs of Christmas (day seven: Template Engine)
  6. 12 TDDs of Christmas (day eight: ranges)

12 TDDs of Christmas (day two: Number Names)

December 28th, 2012

So, day two of The 12 TDDs of Christmas has actually spanned into day three. I managed to get started yesterday morning but didn’t manage to get back to the PC until this morning.

I’m starting to really get into the TDD approach now and will post a longer blog post along with day three’s challenge later today.

My code is available at https://bitbucket.org/CraigHawker/12-tdds-of-christmas.

Items in this series:

  1. 12 TDDs of Christmas (day one: Calc Stats)
  2. 12 TDDs of Christmas (day two: Number Names)
  3. 12 TDDs of Christmas (day three: Mine Field)
  4. 12 TDDs of Christmas (day four: Monty Hall, day five: FizzBuzz and day six: Recently-Used List)
  5. 12 TDDs of Christmas (day seven: Template Engine)
  6. 12 TDDs of Christmas (day eight: ranges)

12 TDDs of Christmas (day one: Calc Stats)

December 26th, 2012

I was sat downstairs in a post-Christmas haze this morning when @DavidGouge happened to tweet a link to an article entitled “The 12 TDDs of Christmas”, by @TheRealBifter.

For those not up on the lingo, TDD stands for Test Driven Development and is an approach to developing which focuses on writing tests for how code should function before you implement the code. The basic premise is that most developers (guilty) focus on writing functional code first and subsequently aim to add unit tests to ensure that the code runs as it should. Particularly where environments encourage quick results, this approach often leads to tests being unwritten, incomplete or lacking decent coverage.

Since I came across the approach a couple of years ago, I’ve been a major fan. However, I haven’t found it easy to modify my behaviour to write tests before writing the code. In fact, I’ve found it quite painful. This post, as a result, caught my eye. The author has set himself a challenge of writing a single piece of code a day for the next 12 days, forcing himself to use a TDD approach. I’ve decided that I’ll join in, provided the amount of time for each challenge doesn’t impact my family life over this (very much required) break.

All my code will be available within a BitBucket repo available at https://bitbucket.org/CraigHawker/12-tdds-of-christmas with a folder for each set of code.

Today’s challenge is a simple one:

Your task is to process a sequence of integer numbers
to determine the following statistics:
o) minimum value
o) maximum value
o) number of elements in the sequence
o) average value

For example: [6, 9, 15, -2, 92, 11]
o) minimum value = -2
o) maximum value = 92
o) number of elements in the sequence = 6
o) average value = 21.833333

You’ll find today’s set of code within a folder named “Dec26”. Note that you’ll find my actual NumberAnalyser.Analyse method rather sparse (in fact, everyone will say I cheated if they look at the code) but my focus is on the process of TDD, not the actual method I’m implementing. I couldn’t care less that the internals of my method use Linq to work out the min/max/count/average and, actually, neither should you.
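To give a sense of how sparse the implementation can be, here’s the whole task in Python – built-ins standing in for the Linq calls mentioned above. The function name and return shape are my own, not the repo’s:

```python
# Calc Stats: min, max, count and average of a sequence of numbers,
# raising on an empty sequence (the expected-exception test case).
def analyse(numbers):
    if not numbers:
        raise ValueError("sequence must not be empty")
    return {
        "minimum": min(numbers),
        "maximum": max(numbers),
        "count": len(numbers),
        "average": sum(numbers) / len(numbers),
    }

stats = analyse([6, 9, 15, -2, 92, 11])
print(stats)  # minimum -2, maximum 92, count 6, average ~21.833333
```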

So, what’ve I learnt? The main thing so far is that I often find with testing that it’s hard to know where to start and stop, particularly when you’re writing something that works with numbers. For example, I’ve got methods that test for an expected exception, methods that check for expected min/max/count/average values for arrays with a single item, and a second set that tests arrays with multiple items, but where do you stop? I found this hard with TDD as it wasn’t obvious which tests made sense to start with in order to build the method out in stages. I’m hoping that comes more naturally over time.

Anyone else fancy joining me?

Items in this series:

  1. 12 TDDs of Christmas (day one: Calc Stats)
  2. 12 TDDs of Christmas (day two: Number Names)
  3. 12 TDDs of Christmas (day three: Mine Field)
  4. 12 TDDs of Christmas (day four: Monty Hall, day five: FizzBuzz and day six: Recently-Used List)
  5. 12 TDDs of Christmas (day seven: Template Engine)
  6. 12 TDDs of Christmas (day eight: ranges)

Eurogamer Expo 2012

September 29th, 2012

Below is an album that’ll hopefully automatically update as I/we publish pictures during the day.

Testing, testing and more testing!

March 8th, 2012

This is the sixth in a series of blog posts aimed at documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visiting the validator blog post list. My aim is to post a new blog post every week until the development is complete, then at least once every month to document both the usage of the validator as well as any community-highlighted modifications or issues.

Last week’s blog post concentrated upon the feedback that I’ve received so far for the validator, how useful it’s been and how it’s been integrated into the validator code-base.  This week, though, I have been concentrating purely upon testing – both automated testing and testing by humans.

Automated testing allows the creation of individual sections of code that can be automatically run when changes are made to the code.  Each test puts the validator into a certain state, runs some code, then tests the validator did as expected.  Some of these tests check the underlying validation code itself, some of the tests check the expected vocabulary values (e.g. languages, studyMode, etc), and some of the tests check the rules which are held within the rule-base against XML fragments.  Every time code on the validator is changed, these tests are re-run.  By knowing whether these tests pass or fail we can be more confident in the quality of the validation that’s being done.  In last week’s blog post I stated that there were 212 automated tests.  As of the time of writing there are now 620.  The good news is that this automated testing has brought up a couple of issues with the rule-base, typically around the XPath selectors that were being used.  These issues have been resolved as the tests were written.

In addition to the automated testing, Alan and Jennifer Paull have also found some time in their hectic schedules to manually test the validator with test files and have identified additional rules from the Data Definitions Document which needed to also be run, as well as a number of really useful usability/readability issues.  All of the additional rules have been implemented (and automated tests written!), and most of the usability/readability items are also resolved.  This process has been exceptionally useful and the quality of the end result should be much improved as a direct result.

As a last comment, the more eagle-eyed of you may have noticed that the validator now has a new home.  This will be its permanent address now and I recommend that you update any links you may have.  Anyone going to the old address will be automatically redirected across, obviously.

Please feel free to highlight any issues you encounter with the validator either via twitter (@CraigHawker), as a comment to this post, or on the XCRI Forum.