Trustworthy Online Controlled Experiments:
Five Puzzling Outcomes Explained
To appear in KDD 2012, Aug 12-16, 2012, Beijing, China. PDF. Talk PowerPoint.
Online controlled experiments are often used to make data-driven decisions at Amazon, Microsoft, eBay, Facebook, Google, Yahoo, Zynga, and at many other companies. While the theory of a controlled experiment is simple, and dates back to Sir Ronald A. Fisher’s experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, the deployment and mining of online controlled experiments at scale—thousands of experiments now—has taught us many lessons. These exemplify the proverb that the difference between theory and practice is greater in practice than in theory. We present our learnings as they happened: puzzling outcomes of controlled experiments that we analyzed deeply to understand and explain. Each of these took multiple person-weeks to months to properly analyze and get to the often surprising root cause. The root causes behind these puzzling results are not isolated incidents; these issues generalized to multiple experiments. The heightened awareness should help readers increase the trustworthiness of the results coming out of controlled experiments. At Microsoft’s Bing, it is not uncommon to see experiments that impact annual revenue by millions of dollars, so getting trustworthy results is critical, and investing in understanding anomalies has tremendous payoff: reversing a single incorrect decision based on the results of an experiment can fund a whole team of analysts. The topics we cover include: the OEC (Overall Evaluation Criterion), click tracking, effect trends, experiment length and power, and carryover effects.
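Among the topics the abstract lists is experiment length and power. As a hedged illustration of why this matters (my own sketch using the standard two-proportion sample-size formula, not code or numbers from the paper), detecting a small relative change in a conversion rate requires dramatically more users than detecting a large one:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p, rel_lift, alpha=0.05, power=0.8):
    """Approximate users needed per group to detect a relative lift in a
    conversion rate p with a two-sided z-test. Standard textbook formula;
    illustrative only, not taken from the paper."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_b = nd.inv_cdf(power)           # quantile for the desired power
    p2 = p * (1 + rel_lift)           # treatment rate under the lift
    delta = p2 - p
    var = p * (1 - p) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / delta ** 2)

# A 1% relative change in a 5% conversion rate needs millions of users
# per group; a 10% relative change needs only tens of thousands.
small_effect = sample_size_per_group(0.05, 0.01)
large_effect = sample_size_per_group(0.05, 0.10)
```

Since required sample size grows with the inverse square of the effect size, experiments targeting small but commercially meaningful changes must run long enough to accumulate very large populations.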
What people said
- Greg Linden: A fun upcoming KDD 2012 paper out of Microsoft, "Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained" (PDF), has a lot of great insights into A/B testing and real issues you hit with A/B testing. It's a light and easy read, definitely worthwhile.
- Thomas Crook, Online Experiments Done Right: There are a lot of people doing online A/B and multivariate testing these days, but few of them bring as much analytic rigor to the process as Ronny Kohavi and his colleagues. Ronny and his collaborators are back with a new paper that anyone who wants to get trustworthy results from online experimentation should read.
- Xavier Amatriain @ Netflix: Building Large-scale Real-world Recommender Systems, Netflix, slides 56-57
- Markus Breitenbach - AI, Data Mining, Machine Learning and other things: A really interesting paper on A/B testing and experiments in online environments just got accepted to KDD 2012.
- Panos Ipeirotis: Great read for anyone running online experiments
- Douglas Galbi: Kohavi et al. (2012) point to the importance of A/A testing. If you can’t understand and control the outcomes of A/A testing, don’t waste your time doing A/B testing.
- Andrew Gelman, statistics and political science Professor at Columbia: A must-read paper on statistical analysis of experimental data...many people could learn a lot from this article. I was impressed that this group of people, working for just a short period of time, came up with and recognized several problems that it took me many years to notice. Working on real problems, and trying to get real answers, that seems to make a real difference (or so I claim without any controlled study!). The motivations are much different in social science academia where the goal is to get statistical significance, publish papers, and establish a name for yourself via new and counterintuitive findings. All of that is pretty much a recipe for wild goose chases.
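Galbi's point about A/A testing can be made concrete with a small simulation (my own sketch, with names and parameters of my choosing, not code from the paper): when both groups receive the identical experience, every statistically significant difference is a false positive, so a sound experimentation system should flag roughly the significance level alpha of such runs, and no more.

```python
import random
from statistics import NormalDist

def aa_false_positive_rate(n_experiments=1000, n_users=1000,
                           alpha=0.05, seed=7):
    """Run simulated A/A tests where both groups draw from the same
    Bernoulli(0.1) metric, and return the fraction flagged significant
    by a two-proportion z-test. A healthy setup yields roughly alpha."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    false_positives = 0
    for _ in range(n_experiments):
        # Identical treatment and control: same underlying 10% rate.
        a = sum(rng.random() < 0.1 for _ in range(n_users)) / n_users
        b = sum(rng.random() < 0.1 for _ in range(n_users)) / n_users
        pooled = (a + b) / 2
        se = (2 * pooled * (1 - pooled) / n_users) ** 0.5
        if se > 0 and abs(a - b) / se > z_crit:
            false_positives += 1
    return false_positives / n_experiments
```

If a real system reports far more than alpha significant results on A/A data, something in the pipeline (randomization, logging, or the statistics) is broken, and A/B results from the same pipeline should not be trusted.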