Episode 27

Matt and Iqbal discuss the optimal number of metrics to track for A/B tests and experiments. Too few metrics can result in action paralysis and wasted effort, because there aren't enough actionable insights to act on. Conversely, too many metrics can lead to analysis paralysis, slowing down both decision-making and test construction. They share past experiences to illustrate these points, such as using dual revenue goals to account for outlier orders and using an extensive set of metrics to identify and iterate on UI issues quickly.

Cons of Too Few Metrics

  • not learning anything substantial
  • missing critical insights
  • experiencing action paralysis

Cons of Too Many Metrics

  • analysis paralysis
  • overcomplicated data analysis
  • site/app performance issues
  • increased resource consumption

You should find your “Goldilocks zone” by:

  1. Conducting a pre-mortem to identify critical metrics.
  2. Ensuring metrics are statistically viable within the test timeframe.
  3. Balancing primary metrics with secondary and guardrail metrics.
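As an illustration of point 3, here is a minimal sketch of how a metric plan might be organized. The metric names, the `Metric` structure, and the guardrail thresholds are hypothetical examples, not something prescribed in the episode.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    role: str                                    # "primary", "secondary", or "guardrail"
    direction: str                               # "increase", "decrease", or "non-inferior"
    guardrail_threshold: Optional[float] = None  # max tolerated degradation, e.g. 0.02 = 2%

# Hypothetical plan: one decision-making metric, a few diagnostics, a couple of guardrails.
metric_plan = [
    Metric("conversion_rate",     role="primary",   direction="increase"),
    Metric("add_to_cart_rate",    role="secondary", direction="increase"),
    Metric("checkout_errors",     role="secondary", direction="decrease"),
    Metric("page_load_time_p95",  role="guardrail", direction="non-inferior", guardrail_threshold=0.05),
    Metric("revenue_per_visitor", role="guardrail", direction="non-inferior", guardrail_threshold=0.02),
]

# Keeping a single primary metric keeps the ship/no-ship decision unambiguous.
primary = [m for m in metric_plan if m.role == "primary"]
assert len(primary) == 1
```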

Once you’ve settled on an appropriate number of metrics for your context, apply statistical pre-planning methods to those metrics to evaluate test viability for your given sample size. You can always fall back to alternative research methods should a test not be statistically viable.
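As a rough sketch of that pre-planning step, here is a standard two-proportion power calculation (not a method specified in the episode); the baseline rate, detectable lift, and traffic figures are placeholders you would swap for your own.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(baseline_rate: float, min_detectable_lift: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant to detect an absolute lift in a
    conversion rate, using the normal approximation for two proportions."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_power = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_power) ** 2 * variance) / (p2 - p1) ** 2
    return ceil(n)

# Placeholder numbers: 4% baseline conversion, detect a 0.5-point absolute lift.
n_per_arm = sample_size_per_arm(0.04, 0.005)
weekly_visitors_per_arm = 20_000                    # placeholder traffic estimate
weeks_needed = ceil(n_per_arm / weekly_visitors_per_arm)
print(f"~{n_per_arm:,} visitors per arm, roughly {weeks_needed} week(s) of traffic")
```

If the estimated duration runs well past the window you can realistically hold the test open, that is the cue to trim the metric list or fall back to the alternative research methods mentioned above.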