The focus of Web 2.0 this year was the power of less. While there was definitely a feeling that nothing really new was happening in the industry, a couple of talks I attended focusing on optimisation of your business were probably the highlights of the event for me.
One of the speakers espoused the virtues of measurement, feedback loops and iterative, agile development. All music to my ears, and things we're doing and continue to improve on at Esendex.
What really got me thinking, though, was the notion of A/B testing application features and measuring how that translates into improving your business's KPIs (key performance indicators).
To date I had considered A/B testing to be the domain of web sites: try different graphics, messages, calls to action, processes and so on, and measure the goal completion percentages.
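That kind of comparison is simple to express. Here's a minimal sketch, with purely illustrative numbers and variant names, of what "measure the goal completion percentages" amounts to for two versions of a page:

```python
# Hypothetical sketch: comparing goal completion rates between two
# variants of a page. The visitor and completion counts are made up.

def completion_rate(completions, visitors):
    """Goal completion percentage for one variant."""
    return 100.0 * completions / visitors

# Variant A: original call to action; variant B: new graphic.
rate_a = completion_rate(120, 4000)
rate_b = completion_rate(150, 4000)

print(f"A: {rate_a:.2f}%  B: {rate_b:.2f}%")
```

The hard part isn't the arithmetic, of course; it's defining the goal and splitting traffic fairly between the variants.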
We're deep into building the new version of our application at Esendex, and we're making important decisions about the functionality and features we're going to make available.
The problem is those decisions are pretty much based on opinion rather than any objective measure. While we think they're a good idea, it remains to be seen whether our customers find them useful.
A certain amount of inspiration, and going with our gut instinct, is required; innovation generally involves a step change, after all. As Henry Ford famously said:
If I had asked people what they wanted, they would have said faster horses.
But if we take the leap and put the feature out there, wouldn't it be good to know whether it gave the desired results? We need to be able to measure a) whether people use it and b) whether or not it had the desired effect.
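To make that concrete, here is a rough sketch of what such feature measurement could look like: record an event when a customer uses a feature, record another when the hoped-for outcome follows, and summarise the two. The event names and in-memory store are my own illustration, not a real schema:

```python
from collections import defaultdict

# Hypothetical sketch of feature measurement. We record two kinds of
# event per feature: "used" (the customer tried it) and "outcome"
# (the desired effect followed), then summarise per feature.

events = defaultdict(list)

def record(feature, event, customer_id):
    """Log one event against a feature."""
    events[feature].append((event, customer_id))

def summarise(feature):
    """How many customers used the feature, and how many of those
    went on to the desired outcome?"""
    used = {c for e, c in events[feature] if e == "used"}
    outcome = {c for e, c in events[feature] if e == "outcome"}
    return {
        "customers_using": len(used),
        "customers_with_outcome": len(outcome & used),
    }

# Illustrative events for a made-up feature name.
record("delivery-reports", "used", 1)
record("delivery-reports", "used", 2)
record("delivery-reports", "outcome", 1)
print(summarise("delivery-reports"))
```

In practice the events would go to a database or analytics service rather than a dictionary, but the two questions being answered are the same: did they use it, and did it work?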
Outcomes in the web analytics world are generally fairly well defined. A site visitor bought something, registered on the site or, in our case, signed up for a trial.
The desired outcome of introducing a feature can be greyer. Outcomes are very likely not to contribute directly to one of our KPIs. The path to KPI improvement will probably be circuitous and require a degree of assumption, but at each step we should be testing the hypothesis.
Often we'll be introducing a feature because we believe it will improve one of our KPIs, but that could just be by offering something other services don't, encouraging people to sign up with us rather than someone else.
In this case we will need to measure an indirect outcome until such time as we have enough of a population to measure more directly against our KPIs.
Very much more art than science.
I'm very much working this through at the moment. We're adding feature measurement into the beta product we're launching in May and I'm looking forward to using this process to improve the product in the direction our customers want.
I'll report back.