A/Benefits: How Important Is It That You Know You're Right?

A small tool to see if your site would benefit from A/B testing.

The confused response to A/B testing might be the perfect example of organizations not handling uncertainty well. This is partly because, to make any meaningful progress at all, you’ll need to test out things that don’t work. But it’s also because the more helpful a change is, the less you need A/B testing to see it: an obvious improvement announces itself. If sales go up even 10%, that’s frequently enough for people to notice without advanced statistical techniques.

Not that the techniques are used with particular expertise. It’s well known that many public A/B testing tools will show a statistically significant improvement even when no change has been made at all, largely because checking for significance repeatedly as the data arrives inflates the false positive rate.
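
To make that concrete, here’s a minimal A/A simulation: both variants convert at an identical (assumed) 5%, yet a naive procedure that re-checks significance after every batch of visitors declares a “winner” far more often than the nominal 5% error rate suggests. All the numbers are illustrative:

```python
import math
import random

def z_stat(conv_a, n_a, conv_b, n_b):
    """Two-proportion pooled z statistic (0.0 if the standard error is undefined)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return 0.0 if se == 0 else (conv_a / n_a - conv_b / n_b) / se

def peeking_false_positive_rate(rate=0.05, batch=200, checks=15, trials=1000):
    """Fraction of A/A tests that ever look 'significant' when we peek after each batch."""
    hits = 0
    for _ in range(trials):
        conv_a = conv_b = n = 0
        for _ in range(checks):
            conv_a += sum(random.random() < rate for _ in range(batch))
            conv_b += sum(random.random() < rate for _ in range(batch))
            n += batch
            if abs(z_stat(conv_a, n, conv_b, n)) > 1.96:  # nominal 5% threshold
                hits += 1
                break
    return hits / trials

print(f"A/A 'significant' rate with peeking: {peeking_false_positive_rate():.0%}")
# Typically well above the nominal 5% (often 20% or more).
```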

Constant improvement is an easy sell in software. It’s baked into the process – it’s why you aren’t supposed to capitalize software development expenses. Having software inevitably means needing to improve it. 

But detecting very subtle changes requires patience, and trying a lot of subtle changes, many of which won’t do anything. If you could otherwise make changes that consistently improve the site, why would you need A/B testing? I like being sure, but if being sure means withholding an improvement from half your users for a few weeks, you’re just losing money.
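
The cost of that delay is easy to put a number on. A toy calculation, with assumed figures throughout (a $200/day improvement, a three-week test):

```python
# Assumed numbers: a change worth an extra $200/day if shipped to everyone,
# held back from half the traffic for a three-week test.
lift_per_day = 200.0
test_days = 21
foregone = lift_per_day * test_days / 2  # half the users still see the old version
print(f"Revenue foregone while waiting to be sure: ${foregone:,.0f}")  # $2,100
```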

It’s a complex trade-off: deciding to invest in a great many potentially valueless changes, and having the patience to find out whether they’re any good. How much data can you gather? How sure do you want to be?
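
Those two questions pin down the arithmetic. Here’s a minimal sketch using the standard two-proportion sample-size formula, with z values hard-coded for 95% confidence and 80% power; the baseline rate and lift are assumptions for illustration:

```python
import math

def visitors_per_arm(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Visitors needed in each arm to detect `relative_lift` over `baseline`
    at 95% confidence and 80% power (the default z values)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Assumed inputs: a 2% baseline conversion rate, hoping for a 5% relative lift.
print(f"{visitors_per_arm(0.02, 0.05):,} visitors per arm")  # roughly 315,000
```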

I felt the world needed an A/B testing estimator that could give people better insight into the core trade-off: if you need to collect a large data set to see an improvement, are you operating at a scale where the tests you can run make any sense? Smaller firms can’t detect subtle changes quickly, and a subtle change is worth far less to a small firm than to a large one.
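
I won’t claim this is exactly what A/Benefits computes, but here’s a sketch of the kind of estimate such a tool makes, with all inputs assumed. The same subtle lift takes a small site years to detect, and is worth far less to it in dollars:

```python
import math

def test_economics(daily_visitors, baseline, relative_lift, revenue_per_conversion):
    """Days to reach significance vs. the annual value of the lift if it's real."""
    p1, p2 = baseline, baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    per_arm = (1.96 + 0.84) ** 2 * variance / (p2 - p1) ** 2  # 95% conf., 80% power
    days = math.ceil(2 * per_arm / daily_visitors)  # the two arms split the traffic
    annual_value = daily_visitors * 365 * baseline * relative_lift * revenue_per_conversion
    return days, annual_value

# Assumed inputs: 2% baseline conversion, a 5% relative lift, $40 per conversion.
for visitors in (500, 50_000):  # a small site vs. a large one
    days, value = test_economics(visitors, 0.02, 0.05, 40)
    print(f"{visitors:>6}/day: ~{days:,} days to detect, worth ~${value:,.0f}/year")
```

Under those assumptions the small site needs roughly three and a half years of traffic to detect a change worth about $7,300 a year, while the large site needs under two weeks to detect one worth $730,000.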
