Episode 28: Test Targeting Conditions

CRO Talks Podcast - Episode 28 - Test Targeting Conditions

Episode 28 https://youtu.be/oWEVZdPsBZE Matt and Iqbal dive into the technical details and importance of test targeting conditions in A/B testing with guest Jonas Alves. The episode highlights the complexities of segmenting users for experiments to ensure accurate and reliable results. They focus on effective test targeting conditions for A/B experiments, emphasizing the importance of precisely […]

Episode 27: How Many Metrics Should You Track?

CRO Talks Podcast - Episode 27 - How Many Metrics Should You Track?

Episode 27 https://youtu.be/GXZns_mEMuQ Matt and Iqbal discuss the optimal number of metrics to track for A/B tests and experiments. Too few metrics can result in action paralysis and wasted effort due to a lack of actionable insights. Conversely, too many metrics can lead to analysis paralysis, slowing down decision-making and test construction. We share past experiences […]

Episode 26: When to Decide What Metrics to Track

CRO Talks Podcast - Episode 26 - When to Decide What Metrics to Track

https://youtu.be/6JnPEgVLDbo In this episode, Matt Beischel and Iqbal Ali discuss the process for determining the right metrics to track when running tests and performing research. The topic originated from a LinkedIn post by Nils Koppelman. Key points discussed include: Importance of Metrics: Metrics should be in a balanced “Goldilocks zone”—neither too few (leading to directionless […]

Episode 25: What’s Your Primary Metric?

CRO Talks Podcast - Episode 25 - What's Your Primary Metric?

https://youtu.be/rPXGo9XgtjI Matt and Iqbal explain how to select primary metrics for experiments and A/B tests. They explore the definition of a primary metric as the key metric used to evaluate the success or impact of changes being tested. They emphasize that while conversion rate is often considered a primary metric, it is not always the […]

Episode 24: Testing Big and Testing Small

CRO Talks Podcast - Episode 24 - Testing Big and Testing Small

https://www.youtube.com/watch?v=_sU-Ldlbi78 Matt Beischel and Iqbal Ali discuss the nuances of testing at different scales in experimentation—referred to as “testing big” and “testing small.” The conversation, initiated by a topic suggestion from regular CRO Roundtable attendee Shirley Lee, explores the complexity and impact of various tests. They emphasize that not all tests yield proportional outcomes; a […]

Episode 22: Authenticity in Case Studies

CRO Talks Podcast - Episode 22 - Authenticity in Case Studies

https://www.youtube.com/watch?v=_sU-Ldlbi78 In this episode, Matt and Iqbal explore what constitutes a case study, emphasizing the need for transparency in reporting experiment results, including statistical data, sample sizes, and the analysis and recommendations derived from those statistics. Case studies should demonstrate competence without making misleading claims of exceptionalism. Authentic case studies should provide […]