Matt Beischel and Iqbal Ali discuss the nuances of testing at different scales in experimentation, referred to as “testing big” and “testing small.” The conversation, prompted by a topic suggestion from regular CRO Roundtable attendee Shirley Lee, explores how the complexity of a test relates to its impact. They emphasize that effort and outcome are not always proportional: a simple test can deliver significant results, while a complex test may not.

Key points include the importance of test prioritization, weighing return on investment (ROI), and understanding the relationship between the effort a test requires and its potential impact. They also discuss the need for a mix of iterative (small, incremental changes) and transformative (large, impactful changes) testing strategies. Both types contribute to learning and to optimizing processes, but they carry different risks and potential payoffs.
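As an illustration of the effort-versus-impact trade-off they describe, backlog prioritization is often reduced to a simple score such as ICE (impact, confidence, ease). The sketch below is hypothetical and not from the episode: the 1–10 scales, the `ease = 11 - effort` conversion, and the example ideas are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int      # expected impact if it wins, 1-10 (assumed scale)
    confidence: int  # confidence the hypothesis is right, 1-10
    effort: int      # build/analysis effort, 1-10 (higher = more work)

def ice_score(idea: TestIdea) -> int:
    # ICE-style score: impact x confidence x ease,
    # where ease is effort inverted on the same 1-10 scale.
    return idea.impact * idea.confidence * (11 - idea.effort)

# Hypothetical backlog: one small iterative test, one big transformative one.
backlog = [
    TestIdea("Reword CTA button", impact=4, confidence=7, effort=1),
    TestIdea("Redesign checkout flow", impact=9, confidence=4, effort=8),
]

for idea in sorted(backlog, key=ice_score, reverse=True):
    print(f"{idea.name}: {ice_score(idea)}")
```

Under this scoring, the cheap iterative test can outrank the ambitious redesign even though its expected impact is smaller, which is exactly the disproportionality the discussion highlights.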

The conversation digs into practical aspects of experiment sizing, such as identifying and managing variables and confounders, and the value of pre-planning or “pre-mortem” exercises to anticipate the range of possible outcomes. They also touch on the importance of solid analytics and integrations with other tools for measuring and analyzing test results effectively. Finally, they stress adapting testing strategies to the scale and resources of the organization, advocating a flexible, informed approach that balances learning with speed of implementation.
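One concrete input to that kind of pre-planning is a sample-size estimate, which makes the "big versus small" trade-off tangible: small iterative changes usually target small lifts, which take far more traffic to detect. A minimal sketch using the standard two-proportion normal approximation; the function name, parameters, and the 5% baseline in the usage comment are illustrative assumptions, not from the episode.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base: float, mde_rel: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant for a two-proportion z-test.

    p_base  -- baseline conversion rate (e.g. 0.05 for 5%)
    mde_rel -- minimum detectable effect as a relative lift (e.g. 0.10 for +10%)
    """
    p_var = p_base * (1 + mde_rel)          # expected rate in the variant
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_base - p_var) ** 2)

# Hypothetical example: at a 5% baseline, detecting a +10% relative lift
# takes roughly four times the traffic of detecting a +20% lift.
print(sample_size_per_arm(0.05, 0.10))
print(sample_size_per_arm(0.05, 0.20))
```

This is why a "small" test is not automatically a fast test: halving the expected lift roughly quadruples the required sample, which feeds directly into the ROI and prioritization questions above.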