CRO Talks Podcast - Episode 29
The TL;DL – A One-Sentence Takeaway
Novelty and familiarity effects are frequently blamed for test outcomes but rarely proven in practice.
The Full Episode Summary
Novelty effects are heightened interest or engagement resulting from new or unfamiliar stimuli. Familiarity effects relate to users’ comfort with known designs and features, which can impact their response to changes. These effects are often cited as reasons for test failures or successes, but do they really influence experiment results? Should you consider them when designing tests and analyzing outcomes?
In practice, they are less prevalent than many assume and should not be used as excuses for undesired experiment outcomes. Novelty produces an initial spike in interest that fades as users adapt to the change; familiarity works the other way, with users preferring the known design and resisting anything new. Either can be confused with regression to the mean, where extreme early results naturally drift back toward the average as the sample size grows.
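To see why that convergence alone can mimic a fading novelty effect, here is a minimal simulation sketch with entirely hypothetical numbers: control and treatment share the same true conversion rate, yet the observed difference in early batches can look dramatic before it melts away as visitors accumulate.

```typescript
// Hypothetical simulation: both arms have the SAME true conversion rate,
// yet small early samples can show a sizable difference that shrinks
// toward zero as the sample grows. No real effect exists here.
function conversions(visitors: number, rate: number): number {
  let converted = 0;
  for (let i = 0; i < visitors; i++) {
    if (Math.random() < rate) converted++;
  }
  return converted;
}

const trueRate = 0.05; // identical for both arms by construction
let convControl = 0;
let convTreatment = 0;
let perArm = 0;

for (const batch of [100, 900, 9_000, 90_000]) {
  convControl += conversions(batch, trueRate);
  convTreatment += conversions(batch, trueRate);
  perArm += batch;
  // Difference in conversion rate, in percentage points.
  const diffPts = ((convTreatment - convControl) / perArm) * 100;
  console.log(`${perArm} visitors/arm: observed difference ${diffPts.toFixed(2)} pp`);
}
```

Run it a few times: the 100-visitor readout swings wildly while the 100,000-visitor readout hugs zero, which is exactly the pattern that gets misread as a novelty effect "wearing off."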
Analysis of interaction patterns can clarify whether either effect is genuine or illusory. That requires a combination of segmentation, surveys, and technical tracking, all of which are further arguments for well-organized, standardized methods of data definition and collection. How exactly is a new user classified? Does the identification combine multiple data points, with fallbacks? Is it configured consistently across tools? Technical accuracy in user tracking is vital for collecting reliable behavioral data.
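As a concrete illustration of what "multiple data points with fallbacks" could look like, here is a hedged client-side sketch; the storage key name and the one-day threshold are assumptions for illustration, not any particular analytics tool's API.

```typescript
// Illustrative sketch only: FIRST_SEEN_KEY and the one-day cutoff are
// assumed values, not a real tool's configuration.
const FIRST_SEEN_KEY = "first_seen_at"; // hypothetical first-party key

type VisitorType = "new" | "returning" | "unknown";

function classifyVisitor(accountCreatedAt?: Date): VisitorType {
  // Primary signal: an authenticated account's creation date, if known.
  if (accountCreatedAt) {
    const ageDays = (Date.now() - accountCreatedAt.getTime()) / 86_400_000;
    return ageDays < 1 ? "new" : "returning";
  }
  // Fallback signal: a first-party timestamp persisted on the first visit.
  try {
    if (localStorage.getItem(FIRST_SEEN_KEY) === null) {
      localStorage.setItem(FIRST_SEEN_KEY, String(Date.now()));
      return "new";
    }
    return "returning";
  } catch {
    // Storage unavailable (private browsing, blocked consent): don't guess.
    return "unknown";
  }
}
```

The point is not the specific thresholds but consistency: every tool in the stack should apply the same definition, or "new user" segments will quietly disagree across reports.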
Recommendations for You to Consider
- Segment new versus returning users to measure familiarity and novelty effects accurately (see the sketch after this list).
- Run tests long enough to distinguish genuine effects from statistical anomalies.
- Conduct surveys and look at heat maps for deeper insights into user interactions.
- Ensure technical accuracy when assigning users to new versus returning segments.
- Replicate tests to confirm the consistency of observed effects.
- Avoid using familiarity and novelty as blanket excuses for experiment failures.
- Analyze super user segments for insights into familiarity effects.
- Build robust technical setups for collecting reliable behavioral analytics data.
- Watch for regression to the mean masquerading as a genuine effect.
- Continually refine user segmentation strategies for better experiment outcomes.
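To make the first two recommendations concrete, here is a small analysis sketch (the record shape and field names are hypothetical) that buckets experiment exposures by week, segment, and variant, so a lift that fades week over week (novelty) or a dip confined to returning users (familiarity) shows up directly in the numbers.

```typescript
// Hypothetical record shape: one row per exposed user.
interface ExposureRecord {
  week: number;                     // week of the test: 1, 2, 3, ...
  segment: "new" | "returning";
  variant: "control" | "treatment";
  converted: boolean;
}

// Conversion rate per (week, segment, variant) bucket.
function weeklyRates(records: ExposureRecord[]): Map<string, number> {
  const totals = new Map<string, { users: number; conversions: number }>();
  for (const r of records) {
    const key = `week ${r.week} | ${r.segment} | ${r.variant}`;
    const t = totals.get(key) ?? { users: 0, conversions: 0 };
    t.users += 1;
    if (r.converted) t.conversions += 1;
    totals.set(key, t);
  }
  const rates = new Map<string, number>();
  for (const [key, t] of totals) {
    rates.set(key, t.conversions / t.users);
  }
  return rates;
}
```

If the treatment's lift among returning users shrinks week over week while new users hold steady, that pattern supports a genuine novelty effect; if rates look flat across weeks, the early swing was more likely noise.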