# Variant performance history
Every variant in your snippet library has a performance history that accumulates across campaigns.
## What is tracked
For each variant, liftstack records:

- the number of exposures
- engagement metrics (open, click, conversion, revenue per exposure)
- the win/loss record across campaigns
- the overall probability of being the best variant, based on all accumulated data
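The "probability of being the best variant" can be estimated by Monte Carlo sampling. The sketch below assumes a Beta posterior per variant with a uniform prior — an illustrative approach, not liftstack's documented model — and the variant names are made up:

```python
import random

def prob_best(variants, draws=10_000):
    """Estimate each variant's probability of being best by sampling
    conversion rates from Beta(conversions + 1, misses + 1) posteriors
    (uniform prior) and counting how often each variant wins a draw."""
    wins = {name: 0 for name in variants}
    for _ in range(draws):
        samples = {
            name: random.betavariate(c + 1, n - c + 1)
            for name, (n, c) in variants.items()
        }
        wins[max(samples, key=samples.get)] += 1
    return {name: w / draws for name, w in wins.items()}

# maps variant name -> (exposures, conversions); names are illustrative
history = {"subject-a": (5000, 150), "subject-b": (5000, 110)}
probs = prob_best(history)
```

The probabilities across all variants sum to 1, so they can be read directly as "chance this is the best option given everything seen so far."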
## Performance verdicts
Each variant receives a verdict based on its track record across campaigns:
| Verdict | Criteria | What It Means |
|---|---|---|
| Strong performer | Won 60%+ of campaigns it appeared in, across 4+ campaigns | This variant reliably outperforms. Consider making it your default. |
| Consistent | Won 40%+ of campaigns with low variability | Reliable middle-of-the-road performer. |
| Variable | High variability in conversion rates across campaigns | Sensitive to audience or timing. Results are unpredictable. |
| Needs more data | Fewer than 3 campaigns | Too early to judge. Keep testing. |
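The table's criteria can be read as a simple decision rule. The sketch below assumes volatility is measured as the coefficient of variation of per-campaign conversion rates and that 0.5 is the "high variability" cutoff — both assumptions, since the table doesn't specify them:

```python
def verdict(campaigns_run, win_rate, volatility, high_vol=0.5):
    """Classify a variant's cross-campaign track record per the table.
    `volatility` is assumed to be the coefficient of variation of
    per-campaign conversion rates; `high_vol` is an illustrative cutoff."""
    if campaigns_run < 3:
        return "Needs more data"
    if volatility >= high_vol:
        return "Variable"
    if win_rate >= 0.60 and campaigns_run >= 4:
        return "Strong performer"
    if win_rate >= 0.40:
        return "Consistent"
    # the table leaves low win rate + low variability unspecified
    return "Unclassified"
```

Note the ordering: a variant that won 60%+ of only three campaigns still lands in "Consistent", because "Strong performer" requires four or more campaigns.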
## The conversion rate sparkline
Each variant’s detail page includes a sparkline chart showing its conversion rate across every campaign it has appeared in.
- Each dot represents one campaign
- The line connects campaigns in chronological order
- A grey band shows the credible interval at each point
A flat line means consistent performance. An upward trend can indicate recipients warming up to this content style (a familiarity effect). A downward trend can indicate a novelty effect: the initial excitement wore off. If performance is highly volatile (big swings up and down), this variant's results are sensitive to audience or timing, so be cautious about relying on it.
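These sparkline readings can be automated with a rough heuristic. The sketch below uses the coefficient of variation as a volatility proxy and a least-squares slope for trend direction; all thresholds are illustrative, not liftstack's actual values:

```python
def trend_summary(rates, swing_threshold=0.5):
    """Rough read of a per-campaign conversion-rate series (the
    sparkline's dots, in chronological order). Thresholds are
    illustrative assumptions."""
    if len(rates) < 3:
        return "needs more data"
    n = len(rates)
    mean = sum(rates) / n
    # coefficient of variation as a volatility proxy
    var = sum((r - mean) ** 2 for r in rates) / n
    cv = (var ** 0.5) / mean if mean else 0.0
    if cv >= swing_threshold:
        return "volatile"
    # simple least-squares slope gives the trend direction
    x_mean = (n - 1) / 2
    slope = sum((x - x_mean) * (r - mean) for x, r in enumerate(rates)) / sum(
        (x - x_mean) ** 2 for x in range(n)
    )
    if abs(slope) < 0.05 * mean:  # within 5% of the mean per campaign
        return "flat"
    return "rising" if slope > 0 else "falling"
```

Checking volatility before trend matters: a noisy series can produce a large slope by chance, so a volatile variant should be flagged as such rather than labeled "rising" or "falling".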
## Cross-campaign confidence
Individual campaign verdicts may carry only moderate confidence (e.g., 75%). But combining data from multiple campaigns increases confidence: a variant that shows a consistent 0.3% conversion rate improvement across five campaigns is a much stronger signal than a single campaign reporting 90% confidence.
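One way this pooling can work is to sum the per-campaign exposure and conversion counts into a single posterior per arm, then compare the arms. This is a sketch under that assumption, not liftstack's exact model, and the counts below are invented:

```python
import random

def prob_improvement(variant_campaigns, baseline_campaigns, draws=10_000):
    """Pool per-campaign (exposures, conversions) counts into one Beta
    posterior per arm (uniform prior), then estimate P(variant beats
    baseline) by Monte Carlo."""
    def pooled_beta(campaigns):
        n = sum(exposures for exposures, _ in campaigns)
        c = sum(conversions for _, conversions in campaigns)
        return c + 1, n - c + 1
    a1, b1 = pooled_beta(variant_campaigns)
    a2, b2 = pooled_beta(baseline_campaigns)
    wins = sum(
        random.betavariate(a1, b1) > random.betavariate(a2, b2)
        for _ in range(draws)
    )
    return wins / draws

# five illustrative campaigns, each with a modest but consistent lift
p = prob_improvement([(2000, 70)] * 5, [(2000, 60)] * 5)
```

No single campaign of 2,000 exposures here is decisive on its own, but the pooled 10,000 exposures per arm turn the same modest lift into a much higher probability of improvement.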