Use case evaluation
Use case evaluation helps you measure the impact of your campaigns and make data-driven decisions. This article explains the key metrics, uplift calculations, and ecommerce benchmarks available in the evaluation dashboard.
Action-based use cases, such as weblayers or omnichannel orchestrations, include built-in A/B tests. Their success depends on customer actions, which vary by industry and website. Some use cases follow a best-practice format that applies across all businesses. For these, the evaluation provides generic performance reporting.
Why evaluate your use cases
Evaluation gives you a clear picture of what's working and what isn't. Use it to track campaign performance, understand customer behavior, and make adjustments based on real data rather than assumptions.
Start evaluating your use case early to measure impact from the beginning. Your delivery team can guide you through the prebuilt evaluation process.
Evaluation dictionary
Use this dictionary to understand the metrics in the evaluation dashboard and how they're calculated.
Benefit and revenue calculations
| Metric | Definition |
|---|---|
| Impressions | Total number of actions through which a customer was impacted by a marketing campaign, for example, a weblayer shown or clicked, an email opened or clicked, an SMS delivered, or a push notification delivered or clicked. |
| Visitors | Number of unique customers impacted by the marketing campaign (weblayers = show/click, emails = open/click). |
| Frequency | Average number of impressions per visitor — for example, opening an email or showing a banner. Formula: = Impressions / Visitors |
| Revenue | Total value of all customer purchases impacted by the campaign that occurred within the attribution window. Impacted means the customer opened or clicked an email, or was shown or clicked a weblayer. |
| Purchases | Total number of customer purchases impacted by the campaign that occurred within the attribution window. |
| Buyers | Number of unique customers impacted by the campaign who made a purchase within the attribution window. |
| Conversion rate (CR) | Percentage of impressions converted into purchases within the attribution window. Formula: = all purchases / all campaign impressions |
| Unique conversion rate (UCR) | Proportion of customers who saw the campaign and converted into a purchase within the attribution window. Formula: = all buyers / unique customers with impressions |
| Average order value (AOV) | Average revenue per purchase or order. Formula: = total revenue / total number of purchases |
| Revenue per visitor (RPV) | Average revenue per customer with an impression — for example, an opened email or a shown banner. Formula: = total revenue / all visitors |
| Revenue per recipient (RPR) | Average revenue per customer who received the email campaign with a tracked "delivered" status. Formula: = total revenue / all customers with the campaign delivered |
| Revenue per buyer (RPB) | Average revenue per customer with at least one purchase within the attribution window. Formula: = total revenue / all buyers |
| Attribution window for campaign performance (in hours) | The time between an email being opened or clicked and a purchase. We recommend setting this to 24, 48, or 72 hours. Formula: = (purchase timestamp - last time the campaign qualified) / 3,600, which converts seconds to hours |
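The metric definitions above can be sketched as a small calculation. This is an illustrative example with hypothetical input numbers, not real campaign data; the function name and example figures are not part of the product.

```python
def campaign_metrics(impressions, visitors, revenue, purchases, buyers):
    """Return the core evaluation metrics from the table above.

    impressions -- total impression events (non-unique)
    visitors    -- unique customers with at least one impression
    revenue     -- total revenue attributed within the attribution window
    purchases   -- total attributed purchases
    buyers      -- unique customers with at least one attributed purchase
    """
    return {
        "frequency": impressions / visitors,  # impressions per visitor
        "cr": purchases / impressions,        # conversion rate
        "ucr": buyers / visitors,             # unique conversion rate
        "aov": revenue / purchases,           # average order value
        "rpv": revenue / visitors,            # revenue per visitor
        "rpb": revenue / buyers,              # revenue per buyer
    }

# Example: 10,000 impressions by 4,000 visitors; 50,000 in revenue
# from 500 purchases made by 400 distinct buyers.
m = campaign_metrics(10_000, 4_000, 50_000, 500, 400)
print(round(m["frequency"], 2))  # 2.5 impressions per visitor
print(round(m["aov"], 2))        # 100.0 average order value
```

Note that RPV divides by unique visitors while CR divides by total impressions, so the unique/non-unique distinction described later in this article matters for which denominator you use.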
Uplift calculations
Uplift represents the difference in performance between Variant A and the control group. A positive uplift means Variant A outperforms the control group — maintain the use case. A negative uplift means the control group outperforms Variant A — adjust the use case hypothesis.
Consider uplift results alongside statistical significance. Results are statistically significant when confidence reaches 98% or above. Find the confidence value in the evaluation dashboard under Conversion funnel > Confidence.
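The article doesn't specify how the dashboard computes confidence; a common method for comparing the UCR of Variant A against the control group is a two-proportion z-test. The sketch below is written under that assumption and should not be read as the product's exact calculation.

```python
from math import erf, sqrt

def ucr_confidence(buyers_a, visitors_a, buyers_c, visitors_c):
    """Two-sided confidence (0-1) that Variant A's and the control
    group's unique conversion rates genuinely differ (pooled z-test)."""
    p_a = buyers_a / visitors_a          # UCR of Variant A
    p_c = buyers_c / visitors_c          # UCR of the control group
    p = (buyers_a + buyers_c) / (visitors_a + visitors_c)  # pooled rate
    se = sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_c))
    z = abs(p_a - p_c) / se
    return erf(z / sqrt(2))              # two-sided confidence level

# Hypothetical example: 480/4,000 buyers in Variant A vs. 400/4,000
# in the control group.
conf = ucr_confidence(480, 4_000, 400, 4_000)
print(conf > 0.98)  # True: the uplift clears the 98% threshold
```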
| Metric | Definition | Formula |
|---|---|---|
| Revenue uplift | Extra revenue brought by Variant A compared to the control group. Uplift determines the absolute financial outcome of your campaigns. | = [ RPV(Variant A) - RPV(Control Group) ] x Visitors(Variant A) |
| Revenue uplift potential | Theoretical financial outcome of your campaign if Variant A were deployed to all customers, both Variant A and the control group. This is an extrapolation of known data, not a guaranteed number. | = [ RPV(Variant A) - RPV(Control Group) ] x Visitors(Variant A + Control Group) |
| UCR uplift | Difference between UCR of Variant A and UCR of the control group, in percentage points. | = [ UCR(Variant A) - UCR(Control Group) ] x 100 |
| UCR uplift % | Percentage difference between the UCR of Variant A and the UCR of the control group. | = [ UCR(Variant A) - UCR(Control Group) ] / UCR(Control Group) x 100 |
| AOV uplift | Difference between AOV of Variant A and AOV of the control group, in raw numbers. | = AOV(Variant A) - AOV(Control Group) |
| AOV uplift % | Percentage difference between the AOV of Variant A and the AOV of the control group. | = [ AOV(Variant A) - AOV(Control Group) ] / AOV(Control Group) x 100 |
| RPV uplift | Difference between RPV of Variant A and RPV of the control group, in raw numbers. | = RPV(Variant A) - RPV(Control Group) |
| RPV uplift % | Percentage difference between RPV of Variant A and RPV of the control group. | = [ RPV(Variant A) - RPV(Control Group) ] / RPV(Control Group) x 100 |
| RPB uplift | Difference between RPB of Variant A and RPB of the control group, in raw numbers. | = RPB(Variant A) - RPB(Control Group) |
| RPB uplift % | Percentage difference between the RPB of Variant A and the RPB of the control group. | = [ RPB(Variant A) - RPB(Control Group) ] / RPB(Control Group) x 100 |
| Attribution window for uplift since A/B split (in hours) | Time between an A/B test split and a purchase. Used only in uplift calculations. We recommend setting this to 24, 48, or 72 hours. | = (timestamp - last time split qualified) / 3,600 |
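The two revenue uplift formulas above differ only in which visitor count they multiply by. A minimal sketch with hypothetical RPV and visitor numbers (the function names are illustrative, not product APIs):

```python
def revenue_uplift(rpv_a, rpv_control, visitors_a):
    """Extra revenue brought by Variant A compared to the control group."""
    return (rpv_a - rpv_control) * visitors_a

def revenue_uplift_potential(rpv_a, rpv_control, visitors_a, visitors_control):
    """Theoretical outcome if Variant A were deployed to all customers.
    An extrapolation of known data, not a guaranteed number."""
    return (rpv_a - rpv_control) * (visitors_a + visitors_control)

# Variant A earns 12.50 per visitor vs. 11.00 for the control group,
# with 9,000 visitors in Variant A and 1,000 in the control group.
print(revenue_uplift(12.50, 11.00, 9_000))                   # 13500.0
print(revenue_uplift_potential(12.50, 11.00, 9_000, 1_000))  # 15000.0
```

The potential figure is always at least as large as the realized uplift, because it extrapolates the same per-visitor difference over the control group's visitors as well.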
Campaign metrics
The evaluation includes two types of metrics: unique and non-unique. Unique metrics count the number of customers who took an action. Non-unique metrics count the total number of times an event occurred.
For example, if one customer opens an email three times, the non-unique open metric is 3, and the unique open metric is 1.
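The distinction can be shown over a hypothetical event log, where each entry records which customer triggered an email open:

```python
# One customer ("anna") opens an email three times; another ("ben") once.
opens = ["anna", "anna", "anna", "ben"]  # customer id per open event

non_unique_opens = len(opens)   # every event counts
unique_opens = len(set(opens))  # each customer counts at most once

print(non_unique_opens)  # 4
print(unique_opens)      # 2
```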
E-commerce benchmarks
These benchmarks are specific to each campaign. Achieving them consistently across campaigns typically leads to meeting your overall monthly benchmarks.
Email benchmarks
- Unique delivery rate: 99% and above.
- Unique open rate: 20% and above.
- Unique click rate from opened: 15% and above.
Negative email benchmarks
- Unique hard bounce rate: below 1% — aim for below 0.5% over time.
- Unique soft bounce rate: below 2%.
- Unique unsubscribe rate: below 0.5%.
- Unique complained "spam" rate: below 0.1%.
- Unique pre-blocked rate: below 0.01%.
- Unique clicked honeypot rate: below 1%.
Weblayer benchmarks
- Unique click rate from show: 1.5%–4% and above; expect the higher end of this range for banners with vouchers.
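A small helper (not part of the product) can check a campaign's rates against the positive email benchmarks listed above; the dictionary keys and thresholds below simply mirror the article's recommended values.

```python
# Minimum acceptable values from the email benchmarks above.
EMAIL_BENCHMARKS = {
    "unique_delivery_rate": 0.99,            # 99% and above
    "unique_open_rate": 0.20,                # 20% and above
    "unique_click_rate_from_opened": 0.15,   # 15% and above
}

def missed_email_benchmarks(rates):
    """Return the benchmarks a campaign misses (empty dict = all met)."""
    return {name: minimum for name, minimum in EMAIL_BENCHMARKS.items()
            if rates.get(name, 0.0) < minimum}

missed = missed_email_benchmarks({
    "unique_delivery_rate": 0.995,
    "unique_open_rate": 0.18,   # below the 20% benchmark
    "unique_click_rate_from_opened": 0.16,
})
print(missed)  # {'unique_open_rate': 0.2}
```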