# Choosing your primary metric
Your primary metric determines how liftstack evaluates variant performance and produces verdicts. Choose the metric that best reflects your business objective for this campaign.
## Available primary metrics
| Metric | What it measures | Best for |
|---|---|---|
| Conversion rate (default) | Percentage of recipients who completed the desired action | Most campaigns |
| Click rate | Percentage of recipients who clicked any link | Quick-signal tests, especially with smaller audiences |
| Open rate | Percentage of recipients who opened the email | Subject line and preview text testing |
| Revenue per exposure | Average revenue generated per recipient | When variants might influence order size, not just conversion probability |
| Tap rate | Percentage of recipients who tapped the notification | Push notification testing |
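As a rough sketch, each rate metric divides an event count by the number of recipients in the variant, and revenue per exposure divides total revenue the same way (the function and field names here are illustrative, not liftstack's API):

```python
def rate(events: int, recipients: int) -> float:
    """Share of recipients with at least one event (open, click, tap, or conversion)."""
    return events / recipients if recipients else 0.0

def revenue_per_exposure(total_revenue: float, recipients: int) -> float:
    """Average revenue per recipient, counting non-buyers as zero."""
    return total_revenue / recipients if recipients else 0.0

# Example: 10,000 recipients, 420 conversions, $12,600 in revenue
print(rate(420, 10_000))                     # 0.042 -> 4.2% conversion rate
print(revenue_per_exposure(12_600, 10_000))  # 1.26  -> $1.26 per recipient
```

Note that revenue per exposure spreads revenue across *all* recipients, not just buyers, which is what lets it capture order-size effects alongside conversion effects.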
## Pre-registration: why your primary metric is locked after sending
The primary metric cannot be changed once the campaign starts sending. This is a deliberate safeguard called pre-registration. If you could change the metric after seeing results, you might (even unconsciously) switch to whichever metric makes a particular variant look best. This would inflate your false positive rate, causing you to “find” winners that aren’t real. Pre-registering the metric keeps the test honest.
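The inflation is easy to demonstrate with a simulation. The sketch below runs A/A tests (both variants have the same true rate, so any “winner” is a false positive) and compares a pre-registered metric against “metric shopping”, where you look at four metrics and report the most favourable one. Treating the metrics as independent is a simplification for illustration; real metrics on the same recipients are correlated, which softens but does not remove the effect.

```python
import random

random.seed(7)

def aa_test(n=2000, p=0.05, threshold=1.96):
    """One A/A comparison: both variants share the same true rate p.
    Returns the z-score of the observed difference in rates."""
    a = sum(random.random() < p for _ in range(n))
    b = sum(random.random() < p for _ in range(n))
    se = (2 * p * (1 - p) / n) ** 0.5  # std. error of the rate difference
    return abs(a - b) / n / se

trials = 2000
# Pre-registered: one metric, declared significant if z > 1.96 (~5% by design).
single = sum(aa_test() > 1.96 for _ in range(trials)) / trials
# Metric shopping: check four metrics, keep whichever looks most significant.
shopped = sum(max(aa_test() for _ in range(4)) > 1.96 for _ in range(trials)) / trials
print(f"false positive rate, pre-registered metric: {single:.1%}")
print(f"false positive rate, best of 4 metrics:     {shopped:.1%}")
```

With four chances to cross the threshold, the false positive rate climbs toward 1 − 0.95⁴ ≈ 19%, roughly four times the nominal 5%.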
## Revenue per exposure in detail
Revenue per exposure (RPE) measures the average revenue each recipient generates. It captures two effects a variant can have:
- Conversion probability. Does this variant make people more likely to buy?
- Order value. When people do buy, do they spend more?
A variant could win on RPE even if it doesn’t have the highest conversion rate, because it might encourage larger orders. liftstack uses a specialised compound model for RPE that analyses these two components separately and then combines them.
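The decomposition is simple arithmetic: RPE equals conversion rate times average order value. The sketch below (illustrative code, not liftstack's model) shows how a variant with the lower conversion rate can still win on RPE through larger orders:

```python
def rpe_components(orders: list[float], recipients: int) -> dict:
    """Split revenue per exposure into the two parts a compound
    model treats separately: conversion probability and order value."""
    conv_rate = len(orders) / recipients
    aov = sum(orders) / len(orders) if orders else 0.0
    return {
        "conversion_rate": conv_rate,
        "avg_order_value": aov,
        "rpe": conv_rate * aov,  # equals total revenue / recipients
    }

# Variant A: fewer buyers, bigger baskets (3% convert at $80)
a = rpe_components([80.0] * 30, recipients=1000)
# Variant B: more buyers, smaller baskets (4% convert at $45)
b = rpe_components([45.0] * 40, recipients=1000)
print(a["rpe"], b["rpe"])  # A wins on RPE despite the lower conversion rate
```

Here Variant A earns $2.40 per recipient against Variant B's $1.80, even though B converts more people.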
## A note on open rate
Open tracking is unreliable because of Apple Mail Privacy Protection (MPP) and email client pre-fetching. These technologies automatically trigger “opens” for every email, whether or not the recipient actually looked at it.
The good news: this noise affects all variants equally (since recipients are randomly assigned), so relative comparisons remain valid. If Variant A has a higher open rate than Variant B, that ranking is trustworthy. The bad news: absolute open rates are inflated, and the true difference between variants appears smaller than it really is. This means tests using open rate as the primary metric need more data to reach a conclusion.
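A small numeric model makes both effects concrete. Suppose some share of mailboxes sits behind auto-fetch proxies that always register an “open” (the 60% share below is illustrative, not a measured figure):

```python
def measured_open_rate(true_rate: float, auto_fetch_share: float) -> float:
    """Recipients behind auto-fetch proxies always register an 'open';
    everyone else opens at the true rate."""
    return auto_fetch_share + (1 - auto_fetch_share) * true_rate

auto = 0.6                      # illustrative share of auto-fetched mailboxes
a_true, b_true = 0.30, 0.20     # true open rates for the two variants
a_meas = measured_open_rate(a_true, auto)
b_meas = measured_open_rate(b_true, auto)
print(a_meas, b_meas)   # both measured rates are inflated well above the truth
print(a_meas - b_meas)  # the gap shrinks to roughly 0.04, down from a true 0.10
```

The ranking survives (A still beats B), but the measured gap is compressed by the factor of genuinely human opens, which is why open-rate tests need more data to separate the variants.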
## Non-primary metrics become diagnostics
All metrics not selected as primary become diagnostics. They are shown in the metrics table for context. For example, you might optimise for conversion rate but still want to see the click rate and revenue per variant. Diagnostic metrics are never used to determine the winner.