Content Playbook and Content Memory

The Content Playbook and Content Memory are two complementary dashboards that answer different questions. The Playbook says “here is what works.” Content Memory says “here is how confident you should be in that knowledge, and where you should focus next.”

Both are available on Growth and Scale plans under Analytics.

Content Playbook

The Content Playbook is an auto-generated reference document that captures what your testing has taught you. Find it under Analytics > Playbook.

As you complete campaigns, liftstack analyses the accumulated results and distils them into four categories.

Rules

Winning patterns that have proven themselves across multiple tests. For example: “Urgency-framed hero blocks outperform neutral framing in email campaigns” or “Question-style subject lines convert higher than statement-style for your audience.” Each rule shows the evidence behind it: how many tests support it, the average uplift, and the channels where it applies.
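To make that evidence concrete, here is a minimal sketch of what a rule entry might carry. The field names are illustrative assumptions, not liftstack's actual API:

```ts
// Hypothetical shape of a playbook rule entry (field names are ours, not liftstack's).
interface PlaybookRule {
  pattern: string;          // the winning pattern in plain language
  supportingTests: number;  // how many completed tests back this rule
  averageUpliftPct: number; // mean conversion-rate improvement over control, in percent
  channels: string[];       // channels where the rule has held, e.g. ["email"]
}

const exampleRule: PlaybookRule = {
  pattern: "Question-style subject lines convert higher than statement-style",
  supportingTests: 5,
  averageUpliftPct: 12.4,
  channels: ["email"],
};
```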

Anti-patterns

The inverse of rules: approaches that consistently underperform. If discount-amount CTAs have lost in 6 of your last 8 tests, the playbook calls that out so your team stops defaulting to them.

Trends

Patterns that show early promise but do not have enough data to be promoted to rules yet. A trend might say “Social proof in push notification titles has won 2 of 3 tests, but more data is needed before this becomes a reliable pattern.” Trends are classified as improving, declining, or stable based on how the effect size changes over rolling time periods.
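As an illustration, trend direction could be derived by comparing mean effect sizes across consecutive rolling windows. This is a minimal sketch of that idea, not liftstack's actual algorithm; the 10% stability band is an assumed value:

```ts
// Sketch: classify a trend by comparing the mean effect size of the most
// recent rolling window against the window before it. Thresholds are assumptions.
type TrendDirection = "improving" | "declining" | "stable";

function classifyTrend(olderEffects: number[], recentEffects: number[]): TrendDirection {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const older = mean(olderEffects);
  const recent = mean(recentEffects);
  const tolerance = 0.1 * Math.abs(older); // assumed: within 10% counts as "stable"
  if (recent > older + tolerance) return "improving";
  if (recent < older - tolerance) return "declining";
  return "stable";
}

// e.g. uplift effect sizes from two rolling periods:
console.log(classifyTrend([0.05, 0.06], [0.09, 0.1])); // "improving"
```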

Gaps

Areas where you have not tested enough to draw conclusions. If you have never tested image variants in your SMS campaigns, or you have not run a subject line test in three months, the playbook flags it. Gaps help you prioritise your next round of tests.

Placement and channel breakdowns

The playbook also generates per-placement summaries (e.g., top insights for hero blocks, for CTAs, for subject lines) and per-channel breakdowns when you have enough data across multiple channels.

The playbook updates automatically as campaigns complete. You do not need to maintain it; it builds and refines itself from your results. Think of it as a living style guide for what actually works with your audience, backed by data rather than opinions.


Content Memory

Content Memory gives you a high-level view of how your content knowledge has grown over time. Find it under Analytics > Content Memory.

Overview metrics

At the top, Content Memory shows your workspace’s testing summary:

  • Total variants tested: how many distinct variants have been assigned in campaigns
  • Total winners: how many unique variants have been declared winners
  • Knowledge coverage: what percentage of your variant library has been tested
  • Insights discovered: how many high-confidence patterns have been identified
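Knowledge coverage, for instance, is a straightforward ratio. A minimal sketch (the function name is ours, not liftstack's):

```ts
// Sketch: knowledge coverage as the share of the variant library that has
// been assigned in at least one campaign.
function knowledgeCoveragePct(testedVariants: number, totalVariants: number): number {
  if (totalVariants === 0) return 0;
  return (testedVariants / totalVariants) * 100;
}

console.log(knowledgeCoveragePct(84, 120)); // 70
```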

Growth timeline

A monthly view showing cumulative campaigns and winners over time. This visualises whether your testing programme is accelerating or stalling.

Pattern effectiveness map

A visual grouping of attribute-value pairs sorted by their effect on conversion rate. Each entry shows the direction (outperforms or underperforms), the magnitude of the effect, the confidence level, and how many campaigns support it. This is the most actionable view in Content Memory: scan it to see which content characteristics are reliably associated with better or worse performance.
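One way to picture this view: each entry is an attribute-value pair with an effect size, and the map orders entries by magnitude. A hedged sketch with hypothetical field names:

```ts
// Sketch: order attribute-value patterns by the absolute size of their
// effect on conversion rate. Field names are illustrative only.
interface PatternEntry {
  attribute: string;  // e.g. "tone"
  value: string;      // e.g. "urgent"
  effect: number;     // relative effect on conversion; >0 outperforms, <0 underperforms
  confidence: number; // 0..1
  campaigns: number;  // supporting campaign count
}

function sortByImpact(entries: PatternEntry[]): PatternEntry[] {
  return [...entries].sort((a, b) => Math.abs(b.effect) - Math.abs(a.effect));
}
```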

Channel coverage

A breakdown of where your testing efforts have been concentrated. If you have run 40 email tests but only 3 push notification tests, Content Memory makes that imbalance visible so you can decide whether to broaden your testing.

Stale knowledge

Patterns that were established months ago but have not been validated by recent tests (90+ days). Content preferences change over time, and a rule derived from tests you ran six months ago may no longer reflect what your audience responds to. Content Memory flags these so you can prioritise retesting them.
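The staleness rule itself is simple. A sketch of the 90-day check described above (the helper name is ours):

```ts
// Sketch: a pattern is stale if no supporting test has completed in 90+ days.
const STALE_AFTER_DAYS = 90;

function isStale(lastValidated: Date, now: Date = new Date()): boolean {
  const msPerDay = 24 * 60 * 60 * 1000;
  const ageDays = (now.getTime() - lastValidated.getTime()) / msPerDay;
  return ageDays >= STALE_AFTER_DAYS;
}

console.log(isStale(new Date("2025-01-10"), new Date("2025-06-01"))); // true
```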

Knowledge gaps

Channel and placement combinations with fewer than 3 completed campaigns. These are areas where you simply do not have enough data to draw any conclusions yet.


Snippet Performance Leaderboard

The leaderboard ranks every snippet variant in your workspace by win rate across completed campaigns. Think of it as a standings table for your content.

Each row shows:

  • Win rate: the percentage of completed campaigns where this variant was declared the winner
  • Campaign count: how many completed campaigns included this variant
  • Average uplift: the mean conversion rate improvement over the control
  • Conversion rate: the overall rate across all campaigns
  • Verdict labels: a summary of outcomes (e.g., “4 wins, 1 equivalent, 2 insufficient”)

The top three variants receive gold, silver, and bronze rank badges.

Filtering

Filter the leaderboard by channel, placement type, snippet tags, or minimum campaign threshold. The minimum threshold prevents a variant that won its only campaign from ranking above one that has won 8 out of 10.
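To see why the threshold matters, consider a sketch of the ranking: without a minimum, a 1-for-1 variant (100% win rate) would outrank an 8-for-10 variant (80%). Names here are illustrative, not liftstack's API:

```ts
// Sketch: rank variants by win rate, excluding those below a minimum
// number of completed campaigns.
interface VariantRecord {
  name: string;
  wins: number;
  campaigns: number; // completed campaigns that included this variant
}

function leaderboard(variants: VariantRecord[], minCampaigns: number): VariantRecord[] {
  return variants
    .filter(v => v.campaigns >= minCampaigns)
    .sort((a, b) => b.wins / b.campaigns - a.wins / a.campaigns);
}

const rows = [
  { name: "lucky-one-off", wins: 1, campaigns: 1 },  // 100% win rate, 1 campaign
  { name: "proven-hero", wins: 8, campaigns: 10 },   // 80% win rate, 10 campaigns
];
console.log(leaderboard(rows, 5).map(v => v.name)); // ["proven-hero"]
```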

Drill-down

Click any row to expand a campaign-by-campaign history showing the campaign name, date, audience size, conversion rate, uplift, and verdict for each appearance.


Testing Velocity

Velocity metrics track your operational throughput and testing momentum. Unlike the dashboards above, they are available on all plans.

  • Tests this month: campaigns created in the current calendar month
  • Tests last month: campaigns created in the previous month
  • Average time to verdict: mean days from campaign creation to the first winner or equivalent verdict
  • Total winners: campaigns with at least one winner result
  • Compounding score: unique snippet variants that have ever won (knowledge accumulation)
  • Monthly win rate: ratio of campaigns with winners to total completed campaigns, per month
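As a rough illustration of how two of these roll up, here is a sketch computing monthly win rate and average time to verdict from campaign records. The field names are assumptions, not liftstack's data model:

```ts
// Sketch: derive two velocity metrics from completed-campaign records.
interface CampaignRecord {
  createdAt: Date;
  verdictAt: Date | null; // first winner/equivalent verdict, if any
  hasWinner: boolean;
}

function monthlyWinRate(completed: CampaignRecord[]): number {
  if (completed.length === 0) return 0;
  return completed.filter(c => c.hasWinner).length / completed.length;
}

function avgDaysToVerdict(completed: CampaignRecord[]): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  const withVerdict = completed.filter(c => c.verdictAt !== null);
  if (withVerdict.length === 0) return 0;
  const totalDays = withVerdict.reduce(
    (sum, c) => sum + (c.verdictAt!.getTime() - c.createdAt.getTime()) / msPerDay,
    0
  );
  return totalDays / withVerdict.length;
}
```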

Use velocity metrics to set team goals and track whether your testing programme is improving over time.