What is refutation in A/B testing?

Standard cross-validation techniques from supervised machine learning don't apply when analyzing A/B test results. Because A/B testing involves causal analysis, we use a validation technique called refutation to determine whether our results are valid.

In causal analysis, refutation means testing the robustness of your estimated effects to ensure they are reliable and not driven by random chance or unmeasured confounders. Here are three common refutation methods (a short code sketch follows the list):

  • Placebo Treatment Method: Randomly reassign the treatment variable across the units in your dataset and re-estimate the causal effect. The re-estimated effect should drop to near zero; if a significant effect persists under the placebo assignment, it suggests that the original effect may not be causal. This is the most common method and the one currently implemented in the G2M Platform because of its robustness and broad applicability.

  • Random Common Cause Method: Introduce a randomly generated variable as an additional confounder in the model and re-estimate. Because the added variable is random, the estimate should barely move; if the causal effect changes significantly, it indicates that your original model may be sensitive to unmeasured confounders.

  • Unobserved Confounder Method: Simulate an unobserved confounder that affects both the treatment and outcome. Then, assess how much this hypothetical confounder could change the causal estimate. This helps evaluate how robust the results are to potential bias from unmeasured variables.

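As a rough illustration, here is a minimal sketch of how these three checks might be run with the DoWhy library mentioned below. The toy dataset, column names (treatment, converted, segment), estimation method, and effect-strength parameters are placeholder assumptions for the example, not a description of how the G2M Platform implements refutation.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Toy A/B-test dataset (hypothetical column names).
rng = np.random.default_rng(0)
n = 5_000
segment = rng.binomial(1, 0.5, n)      # an observed common cause
treatment = rng.binomial(1, 0.5, n)    # randomized assignment
converted = rng.binomial(1, 0.10 + 0.05 * treatment + 0.02 * segment)
df = pd.DataFrame({"treatment": treatment, "converted": converted, "segment": segment})

# Build the causal model and estimate the treatment effect.
model = CausalModel(data=df, treatment="treatment", outcome="converted",
                    common_causes=["segment"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")

# 1. Placebo treatment: permute the treatment; the new estimate should be ~0.
placebo = model.refute_estimate(estimand, estimate,
                                method_name="placebo_treatment_refuter",
                                placebo_type="permute")

# 2. Random common cause: add a random confounder; the estimate should barely move.
random_cc = model.refute_estimate(estimand, estimate,
                                  method_name="random_common_cause")

# 3. Unobserved confounder: simulate a hidden confounder and check sensitivity
#    (effect strengths here are illustrative assumptions).
unobserved = model.refute_estimate(estimand, estimate,
                                   method_name="add_unobserved_common_cause",
                                   confounders_effect_on_treatment="binary_flip",
                                   confounders_effect_on_outcome="linear",
                                   effect_strength_on_treatment=0.01,
                                   effect_strength_on_outcome=0.02)

print(placebo, random_cc, unobserved, sep="\n")
```

In each case you compare the re-estimated ("new") effect with the original one: the placebo effect should collapse toward zero, while the other two checks should leave the estimate roughly unchanged if the result is robust.
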
These methods are used to strengthen confidence in the causal relationships identified in your analysis, ensuring that the observed effects are likely to be genuine and not artifacts of the data or model. If you want to learn more about causal analysis and refutation, the DoWhy library is a good place to start.
