Tuesday, March 12, 2019

The non-experimentation result anti-pattern

Many of us have gone through the scenario of putting new software functionality into production and seeing a tremendous improvement in business metrics. These results are often celebrated, attributed to the new feature, and cited in presentations and in the definition of corporate bonuses.

But can we attribute a causal relation between the new functionality and the improved business metrics?

Before answering this question, and however difficult it may be to set aside our emotional attachment to the functionality we created, let us try to imagine some hypotheses other than our change that might have caused this improvement:

  • An advertising campaign
  • An unexpected viral hit
  • A feature created by another squad
  • A competitor's product crashing
  • Normal seasonal variation

These are just a few hypotheses that could explain the improvement in the metrics. It is possible to mitigate each of them through some data analysis, for example as sketched below. But can you anticipate every other possible causal hypothesis so as to rule them all out? Hardly.
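To make one of these checks concrete, seasonality can be ruled out (or confirmed) by comparing the metric window after the launch with the same window a year earlier. A minimal sketch in Python, where the file name, column names, and dates are all hypothetical:

```python
import pandas as pd

# Hypothetical daily business metric, with "date" and "value" columns.
daily_metric = pd.read_csv("metric.csv", parse_dates=["date"], index_col="date")

# Compare the two weeks after the launch with the same window one year earlier.
launch = pd.Timestamp("2019-03-01")
window = pd.Timedelta(days=14)
year = pd.DateOffset(years=1)

current = daily_metric.loc[launch : launch + window, "value"].mean()
last_year = daily_metric.loc[launch - year : launch - year + window, "value"].mean()

print(f"Current window mean:   {current:.2f}")
print(f"Same window last year: {last_year:.2f}")
# A similar lift in the same period last year suggests seasonality,
# not the new feature.
```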

Since you cannot assign causality between the two events, the most you can say is that there is a correlation between them. And even then it may be a spurious correlation. Some examples of this kind of correlation can be seen on the Spurious Correlations website, which shows, for example, the strong correlation between the number of Nicolas Cage films per year and the number of drownings in the United States per year.
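To see how easily unrelated series can correlate, here is a toy computation with made-up numbers (not the site's actual data); two series that merely trend together yield a correlation near 1:

```python
import numpy as np

# Two made-up yearly series that both happen to trend upward.
cage_films = np.array([2, 2, 3, 3, 4, 4, 5])
drownings = np.array([98, 102, 110, 115, 121, 127, 133])

# Pearson correlation is close to 1, yet neither series causes the other.
r = np.corrcoef(cage_films, drownings)[0, 1]
print(f"correlation: {r:.2f}")
```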

So the answer to the initial question is no! It is not correct to attribute the improvement in the business metric to the new functionality alone. If you want to check whether a new feature causes a change in some metric, an AB experiment is required.

In an AB experiment we have a variant (or alternative) called the control (usually A) and a variant containing the modification to be validated (B). A random sample of users receives the version of the product with variant A, and another random sample sees variant B. Everything in the two variants must be the same except the modification made in variant B. With this you control the environment, all of the possible causes we listed above, and also the other hypotheses we could not predict beforehand.
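In practice, the random split is often implemented as deterministic hash-based bucketing, so the same user always lands in the same variant. A minimal sketch, where the experiment name and the 50/50 split are assumptions for illustration:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "new-feature-test") -> str:
    """Deterministically assign a user to A (control) or B (treatment)."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return "A" if bucket < 50 else "B"

print(assign_variant("user-42"))  # the same user always gets the same variant
```

Salting the hash with the experiment name keeps the buckets independent across experiments, so a user in B for one test is not systematically in B for all of them.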

At the end of the experiment we are able to attribute the incremental change in the metric to the new functionality, because it was the only thing different in the whole environment. This of course takes into account the statistical variance due to the use of a sample, which can lead to false positives in a small percentage of cases.
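That sampling variance is handled at analysis time with a significance test. A sketch using the two-proportion z-test from statsmodels, with purely illustrative counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative conversion counts and sample sizes for variants A and B.
conversions = [480, 532]     # A, B
samples = [10_000, 10_000]   # users exposed to each variant

stat, p_value = proportions_ztest(conversions, samples)
print(f"z = {stat:.2f}, p-value = {p_value:.4f}")
# At the usual 5% threshold, about 1 in 20 comparisons with no real
# difference would still come out "significant" by chance (a false positive).
```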

AB experiments are currently the most accepted technique for determining whether a change in your product causes an effect. They are not a new idea and are widely used as part of the scientific method.

However, there is currently a large discussion in the statistics community regarding causal inference, that is, models for inferring that one event causes another without necessarily running an experiment that controls all variables. This discussion became even stronger after the publication of The Book of Why.

However, if you do not master the techniques of causal inference, it is best not to claim cause and effect without being sure. I know it is very hard to set our emotions aside after having participated in the development of that piece of software. But it is important to stay cool. And, of course, always run an AB experiment before fully rolling something out to production.

This post is part of a series about Experimentation Anti-Patterns, a talk that I presented recently.
