
Choosing your primary metric

Your primary metric determines how liftstack evaluates variant performance and produces verdicts. Choose the metric that best reflects your business objective for this campaign.

Available primary metrics

| Metric | What it measures | Best for |
|---|---|---|
| Conversion rate (default) | Percentage of recipients who completed the desired action | Most campaigns |
| Click rate | Percentage of recipients who clicked any link | Quick-signal tests, especially with smaller audiences |
| Open rate | Percentage of recipients who opened the email | Subject line and preview text testing |
| Revenue per exposure | Average revenue generated per recipient | When variants might influence order size, not just conversion probability |
| Tap rate | Percentage of recipients who tapped the notification | Push notification testing |

Pre-registration: why your primary metric is locked after sending

The primary metric cannot be changed once the campaign starts sending. This is a deliberate safeguard called pre-registration. If you could change the metric after seeing results, you might (even unconsciously) switch to whichever metric makes a particular variant look best. This would inflate your false positive rate, causing you to “find” winners that aren’t real winners. Pre-registering the metric keeps the test honest.
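The inflation effect is easy to demonstrate with a small A/A simulation (illustrative only, not how liftstack works internally; metrics are simulated as independent for simplicity). Both variants share the same true rate, so every "significant" result is a false positive. Fixing one metric in advance keeps false positives near the nominal 5%; picking whichever metric looks best after the fact roughly multiplies them:

```python
import random

def simulate(n_tests=400, n=400, n_metrics=4, z_crit=1.96, base_rate=0.10):
    """A/A simulation: compare a pre-registered metric against choosing
    whichever of n_metrics looks most significant after seeing results."""
    random.seed(0)
    pre_hits = post_hoc_hits = 0
    for _ in range(n_tests):
        z_scores = []
        for _ in range(n_metrics):
            # Both variants draw from the same true rate: any lift is noise.
            a = sum(random.random() < base_rate for _ in range(n))
            b = sum(random.random() < base_rate for _ in range(n))
            pooled = (a + b) / (2 * n)
            se = (2 * pooled * (1 - pooled) / n) ** 0.5
            z_scores.append(abs(a - b) / (n * se) if se else 0.0)
        if z_scores[0] > z_crit:        # metric fixed before looking
            pre_hits += 1
        if max(z_scores) > z_crit:      # metric chosen after looking
            post_hoc_hits += 1
    return pre_hits / n_tests, post_hoc_hits / n_tests

pre_rate, post_hoc_rate = simulate()
```

With four candidate metrics, the post-hoc false positive rate lands near 1 − 0.95⁴ ≈ 19% instead of 5%, which is exactly the failure mode pre-registration prevents.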

Revenue per exposure in detail

Revenue per exposure (RPE) measures the average revenue each recipient generates. It captures two effects a variant can have:

  1. Conversion probability. Does this variant make people more likely to buy?
  2. Order value. When people do buy, do they spend more?

A variant could win on RPE even if it doesn’t have the highest conversion rate, because it might encourage larger orders. liftstack uses a specialised compound model for RPE that analyses these two components separately and then combines them.
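The decomposition can be sketched in a few lines. This is an illustrative calculation, not liftstack's actual compound model, and the order values and counts below are made up:

```python
def revenue_per_exposure(orders, n_recipients):
    """Decompose RPE into conversion rate x average order value.

    orders: list of order values for the recipients who converted.
    """
    conversion_rate = len(orders) / n_recipients
    avg_order_value = sum(orders) / len(orders) if orders else 0.0
    return conversion_rate, avg_order_value, conversion_rate * avg_order_value

# Variant A: more buyers, smaller baskets.
cr_a, aov_a, rpe_a = revenue_per_exposure([20.0] * 120, 1000)  # RPE = 2.40
# Variant B: fewer buyers, larger baskets -- higher RPE despite a
# lower conversion rate.
cr_b, aov_b, rpe_b = revenue_per_exposure([35.0] * 90, 1000)   # RPE = 3.15
```

Here Variant B converts 9% of recipients versus A's 12%, yet wins on RPE because its average order is larger, which is precisely the case a conversion-rate-only test would miss.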

A note on open rate

Open tracking is unreliable because of Apple Mail Privacy Protection (MPP) and email client pre-fetching. These technologies automatically trigger “opens” for every email, whether or not the recipient actually looked at it.

The good news: this noise affects all variants equally (since recipients are randomly assigned), so relative comparisons remain valid. If Variant A has a higher open rate than Variant B, that ranking is trustworthy. The bad news: absolute open rates are inflated, and the true difference between variants appears smaller than it really is. This means tests using open rate as the primary metric need more data to reach a conclusion.
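A toy model makes the dilution concrete. Assume some share of the audience sits behind automatic open tracking (the 60% figure below is hypothetical, not a liftstack measurement): those recipients always register an open, and true opens are only observable among the rest:

```python
def observed_open_rate(true_open_rate, auto_open_share):
    """Automatic opens (MPP, pre-fetching) fire for everyone in the
    auto-open share; genuine opens are visible only in the remainder."""
    return auto_open_share + (1 - auto_open_share) * true_open_rate

# Hypothetical: 60% of recipients are behind automatic open tracking.
rate_a = observed_open_rate(0.30, 0.60)  # 0.72
rate_b = observed_open_rate(0.25, 0.60)  # 0.70
```

The ranking survives (A still beats B), but the true 5-point gap shrinks to 2 observed points, which is why open-rate tests need more data to separate variants.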

Non-primary metrics become diagnostics

All metrics not selected as primary become diagnostics. They are shown in the metrics table for context. For example, you might optimise for conversion rate but still want to see the click rate and revenue per variant. Diagnostic metrics are never used to determine the winner.