Abandoned Cart Flow with Product Personalization

This guide will help you understand what this use case does, how to set it up and test it, and how to evaluate its results.

This use case is part of Engagement’s Plug&Play initiative. Plug&Play offers impactful use cases like this one, which are ready out of the box and require minimal setup on your side. Learn how you can get Plug&Play use cases or contact your Customer Success Manager for more information.

What this use case does and why we developed it

Problem

Many online sales are lost not because e-sellers are unable to interest customers in their products, but because customers fail to go through with the last step - the purchase. In fact, the average e-commerce business loses over 75% of its online sales to cart abandonment. Reducing cart abandonment can therefore be an effective way of capturing lost revenue.

Solution

An effective way to avoid such losses, and consequently increase conversion and RPV (revenue per visitor), is to remind customers of the forgotten items a few hours or days after they abandon the cart. This strategy proved successful for Oliver Bonas, for instance, where an abandoned cart campaign increased revenue by 268%, campaign click-through rate by 197%, and campaign email open rate by 155%.

This use case

This is what can be achieved with the ‘Abandoned Cart Flow with Product Personalization’ use case. This campaign automatically sends a personalized email to customers who have left the e-store without ordering the items added to their cart. Such reminders bring the customers’ attention back to the buying process and increase the chances of them finalizing the purchase.

What is included in this use case:

  • Working Automated Email Scenario
  • Prebuilt email template with personalization block
  • Custom Evaluation Dashboard

1. Setting up the use case

(1) Check the prerequisites

The following event tracking is required:

  • purchase
  • view_item with commonly named attributes
  • cart_update with commonly named attributes - tracked action “add”
  • updated product catalog
  • page_visit location contains Abandoned%20cart (optional)

Address any discrepancies if needed.
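
To make the check concrete, here is a minimal sketch (in Python, purely for illustration) of what the required events might carry. All attribute names below are assumptions, not your project’s actual schema; compare them with what Data Manager shows for your project.

```python
# Illustrative payloads only: attribute names such as "product_id" are
# assumptions, not your project's actual schema.
cart_update = {
    "event_type": "cart_update",
    "properties": {
        "action": "add",          # the scenario reacts to the tracked action "add"
        "product_id": "SKU-123",  # hypothetical attribute name
        "price": 59.90,
        "quantity": 1,
    },
}

purchase = {
    "event_type": "purchase",
    "properties": {
        "status": "successful",   # the scenario conditions check this status
        "total_price": 59.90,
    },
}
```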

📘

If your data is tracked differently than described above, make sure that Data Manager > Data Mapping is set up for your project. This enables you to map crucial data to your project’s naming conventions. For more information, see the article about cloning.

(2) Adjust the created assets and scenarios

1. Customize the scenario

Open the ‘Abandoned Cart Flow’ scenario and adjust the conditions:

  • Update the consents according to your project
  • Check that the events and properties used in the conditions match your project, e.g. that the purchase event has the status ‘successful’
  • (optional) Adjust the business logic by adding/removing conditions
  • (optional) Adjust the A/B test

Learn more about connecting with your customers via Scenarios in our guide.

2. Update the email design

There are two email nodes in the scenario: ‘Abandoned cart 1st email’ and ‘Abandoned cart 2nd email’. There are several changes that you need to make in both.


Design > Editor > Visual (parameters) editor:
Customize the email so it reflects your brand and communication style as well as your project settings. This entails:

  • Main fields - e.g. Subject line, Sender email
  • Visual aspect - adjust the colors, logo, header, footer
  • Content - make sure to differentiate the content (e.g. subject and wording) of the first and the second email
  • Jinja code showing products in the cart - adjust it if the product catalog in your project uses different naming conventions (a minimal sketch follows this list). Learn more about Jinja in this document.
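
A minimal, generic sketch of such a personalization block, rendered here with the Python jinja2 library. The `cart_items` variable and the product attributes (`title`, `price`, `url`) are assumptions - in Engagement, the products come from the customer’s cart data and must match your product catalog’s naming conventions.

```python
from jinja2 import Template

# Generic Jinja sketch: `cart_items` and the attribute names are assumptions,
# not the actual variables available in your Engagement project.
template = Template("""
<ul>
{% for item in cart_items %}
  <li><a href="{{ item.url }}">{{ item.title }}</a> - {{ item.price }}</li>
{% endfor %}
</ul>
""")

print(template.render(cart_items=[
    {"title": "Blue T-shirt", "price": "19.90 EUR", "url": "https://example.com/p/1"},
]))
```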

Learn more about visually editing your email templates in our guide.

Design > Settings
Select the appropriate values.

📘

To see the email preview, go to the TEST tab and, in ‘Preview for’, set the filter to pick only customers with a cart_update event in the last 24 hours.

(3) Test the scenario

It is important to test the use case before deploying it. Testing enables you to make sure that everything is set up correctly before the campaign reaches real customers.

A quick preview of the email is available directly in the email node:

  • TEST tab > Overview - go to ‘Preview for’ on the right and set the filter to pick only customers with cart_update in the last 24 hours
  • TEST tab > Email previews - to access previews for different devices
  • ‘Send test email or preview link’ - enables you to send the email directly to a chosen email address

To further test the scenario in “real” conditions, you can run it for your test customer profile:

  • Create a test customer profile (or use an existing one) with an email address and valid consents. Go through the conditions in the scenario and make sure that your test customer has all the events necessary to pass the conditions and receive the emails.
  • In the scenario, make some changes to set up the test environment:
    • In the ‘tester?’ condition, insert the email of your test customer
    • Connect the ‘tester?’ node between the trigger and the first condition to make sure that only the tester continues down the scenario
    • To make the testing faster, you can disconnect the wait nodes and connect the email and condition nodes directly. Otherwise the wait time should be taken into account.
    • To make sure that you do not fall into the Control Group, you can disconnect the A/B test and connect the email and condition nodes directly.
  • First Save the scenario, then Start it
  • Go to the website and perform a cart update action so that your test customer has a ‘cart_update’ event tracked.
  • Go to the Evaluate tab of your scenario and, by hovering over the nodes, check whether your tester has successfully completed the journey and, consequently, whether you have received the emails.
  • After a successful test:
    • Stop the scenario
    • Revert the scenario from the test version to the original version, e.g. remove the ‘tester?’ node and reconnect the wait nodes and the A/B test

(4) Run the scenario

Once the testing is over, click on the ‘Start’ button to launch the scenario.

(5) A/B test

An A/B test is necessary to evaluate whether the use case is performing well and, most importantly, whether it is bringing extra revenue. You can draw conclusions from the A/B test once it reaches an appropriate significance level (99% and higher); a sketch of such a significance check follows the list below.

To achieve the desired level of significance faster, preferably opt for a 50/50 distribution. Once the significance is reached and the use case is showing positive uplift, you can:

  • Minimize the Control Group to 10% and continue running the A/B test to be able to check at any given moment that the uplifts are still positive.
  • Turn off the A/B test but perform regular check-ups, e.g. turn the A/B test back on after 3 months for the period needed to achieve significance, to be sure that the use case is still bringing positive uplifts.
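
The confidence value reported in the dashboard is, in essence, a comparison of conversion rates between Variant A and the Control Group. Below is a minimal sketch of such a significance check using a standard two-proportion z-test; it illustrates the statistics involved and is not necessarily the exact method Engagement uses.

```python
from math import sqrt, erf

def confidence(conv_a, n_a, conv_cg, n_cg):
    """Two-sided confidence that the Variant A and Control Group conversion
    rates differ (standard two-proportion z-test; illustrative only)."""
    p_a, p_cg = conv_a / n_a, conv_cg / n_cg
    pooled = (conv_a + conv_cg) / (n_a + n_cg)               # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_cg))  # standard error
    z = abs(p_a - p_cg) / se
    return erf(z / sqrt(2))                                  # confidence, 0..1

# e.g. 300/10,000 conversions in Variant A vs 240/10,000 in the Control Group
print(f"{confidence(300, 10_000, 240, 10_000):.3f}")         # ~0.991 -> significant
```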

📘

Remember

The use case performance can change over time; therefore, we recommend regular check-ups instead of proving the value only at the beginning and then letting the use case run without further evaluation.

(6) Evaluate on a regular basis

The use case comes with a predefined evaluation dashboard. Some adjustments might be necessary to correctly display the data in your project.

Adjustments to consider:

  • campaign target 1st email - event segmentation, check if the campaign_name and action_id correspond to the 1st email
  • campaign target 2nd email - event segmentation, check if the campaign_name and action_id correspond to the 2nd email
  • Emailing metrics and Delivery Timeline - reports, check if the campaign_id corresponds to the scenario
  • purchase_campaign target 1st email, purchase_split target 1st email, purchase_campaign target 2nd email, purchase_split target 2nd email - event segmentations, by default the attribution is 48h, change it here if necessary. Also specify the purchase event, e.g. status ‘successful’, if relevant.
  • Emailing metrics Evolution 1st email (last 90 days), Emailing metrics Evolution 2nd email (last 90 days) - reports, adjust chart display to only show the unique rate metrics
  • Metrics displayed at the top of the dashboard (Revenue Uplift, Revenue, Unique Conversion Rate) - personalize these for the project’s needs, e.g. add the currency, set a target value, or compare with historical data

Check the evaluation dashboard regularly to spot any need for improvements as soon as possible.

🚧

If you decide to modify the scenario (e.g. use more Variants for A/B test), some reports and metrics in this initiative need to be adjusted to show correct data.

2. Suggestions for custom modifications

While this use case is preset for the above specification, you can modify it further to extract even more value from it. We suggest the following modifications, but feel free to be creative and think of your own improvements:

  • Try different A/B tests - run more variants at the same time and test different designs, the number of products displayed, etc.
  • Enhance the content with specific personalization - for example, if the customer is identified, you can address them by their first name in the subject line using Jinja (see the sketch after this list)
  • Redesign the email - by changing the subject, copywriting, emphasizing the CTA (call to action) more by placing it at both the top and the bottom of the email, etc.
  • Test different sending times - for example 1h after cart_update (Variant A) vs 4h after cart_update (Variant B)
  • Enhance the email content with recommended products - learn more about recommendations here
  • Offer follow-up discounts for customers - if a customer is not responding to your emails, you may add a third email with a voucher code in the scenario.
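
As an example of the first-name idea above, here is a minimal Jinja sketch rendered with the Python jinja2 library; `customer.first_name` is an assumed attribute name - use whatever attribute holds the first name in your project.

```python
from jinja2 import Template

# Sketch only: `customer.first_name` is an assumed attribute name.
subject = Template(
    "{% if customer.first_name %}"
    "{{ customer.first_name }}, you left something in your cart!"
    "{% else %}"
    "You left something in your cart!"
    "{% endif %}"
)

print(subject.render(customer={"first_name": "Anna"}))  # Anna, you left something...
```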

3. Evaluating and interpreting the dashboard

The dictionary below is helpful for understanding metrics in the evaluation dashboard and their calculation. The most important metrics are marked in bold.

Key metrics calculations
The attribution model used for revenue attribution takes into consideration all the purchases made within:

  • 48h since email open or click

This time frame is called the attribution window.
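
In other words, a purchase counts toward the campaign if it happens within 48 hours after the customer’s email open or click. A minimal sketch with made-up timestamps:

```python
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(hours=48)

def is_attributed(purchase_time, impression_times):
    """True if the purchase falls within 48h after any email open/click."""
    return any(timedelta(0) <= purchase_time - t <= ATTRIBUTION_WINDOW
               for t in impression_times)

email_opens = [datetime(2024, 1, 1, 9, 0)]                      # illustrative open
print(is_attributed(datetime(2024, 1, 2, 20, 0), email_opens))  # True: ~35h later
print(is_attributed(datetime(2024, 1, 4, 9, 1), email_opens))   # False: >48h later
```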

Benefit/Revenue calculations

  • Impressions - count of all actions that indicate a customer was impacted by a marketing campaign, e.g. web layer shows or clicks, emails opened or clicked, SMS delivered, push notifications delivered or clicked
  • Visitors - count of all customers that have been impacted by the marketing campaign (web layers = shown or clicked, emails = opened or clicked)

Revenue - total value of all purchases made by customers impacted by the campaign (e.g. opened or clicked on the email, were shown or clicked on the web layer, etc.) that occurred within the attribution window.

Purchases - all purchases made by customers impacted by the campaign (e.g. opened or clicked on the email, were shown or clicked on the web layer, etc.) that occurred within the attribution window.

Buyers - all customers impacted by the campaign (e.g. opened or clicked on the email, were shown or clicked on the web layer, etc.) who made a purchase within the attribution window.

Conversion rate (CR) - Percentage of impressions that were converted into purchase within the attribution window

  • Conversion rate = count of all purchases / count of all campaign impressions

Unique Conversion rate (UCR) - The proportion of customers who have seen the campaign and were converted into a purchase within the attribution window

  • Unique Conversion rate = count of all purchases / unique customers with impressions

Average Order Value (AOV) - Average revenue from one purchase/order

  • AOV = total revenue / total number of purchases

Revenue Per Visitor (RPV) - Average revenue per customer that has an impression (e.g. open email, show banner etc.)

  • RPV = total revenue / all visitors
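
Putting the four definitions together, a short sketch with purely illustrative numbers:

```python
# Illustrative numbers only.
impressions = 12_000    # all email opens/clicks
visitors    = 8_000     # unique customers with an impression
purchases   = 240       # purchases within the attribution window
revenue     = 14_400.0  # revenue within the attribution window

cr  = purchases / impressions  # Conversion rate (CR)         -> 0.02
ucr = purchases / visitors     # Unique conversion rate (UCR) -> 0.03
aov = revenue / purchases      # Average order value (AOV)    -> 60.0
rpv = revenue / visitors       # Revenue per visitor (RPV)    -> 1.8
```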

Uplift calculations
Uplift represents the difference in performance between Variant A and the Control Group. If the uplift value is positive, Variant A is the winner and the use case should be maintained. If the uplift is negative, the Control Group is performing better than Variant A and the use case hypothesis should be adjusted.

Uplift results should be considered together with their statistical significance. The results are significant if they reach more than 98%. The significance value can be found in the Evaluation dashboard, specifically under Conversion funnel > Confidence.

Revenue Uplift - determines the absolute financial outcome of your Exponea campaigns. It is defined as the extra revenue brought by Variant A compared to the Control Group.

  • Revenue Uplift = [ RPV(Variant A) - RPV(Control Group) ] x Visitors(Variant A)

Revenue Uplift Potential - determines the theoretical financial outcome of your Exponea campaign if Variant A were deployed to all customers (Variant A and Control Group). This outcome is an extrapolation of known data, not a guaranteed number.

  • Revenue Uplift Potential = [ RPV(Variant A) - RPV(Control Group) ] x Visitors(Variant A + Control Group)

Unique Conversion rate Uplift % - Percentage difference between UCR (Variant A) and UCR (Control Group).

  • UCR uplift = [ UCR(Variant A) - UCR(Control Group) ] / UCR(Control Group) x 100

AOV Uplift % - Percentage difference between AOV (Variant A) and AOV (Control Group).

  • AOV uplift = [ AOV(Variant A) - AOV(Control Group) ] / AOV(Control Group) x 100

RPV Uplift % - Percentage difference between RPV (Variant A) and RPV (Control Group).

  • RPV uplift = [ RPV(Variant A) - RPV(Control Group) ] / RPV(Control Group) x 100
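
The same uplift formulas as executable code, with purely illustrative inputs:

```python
# Illustrative inputs only (an 80/20 split).
rpv_a,  visitors_a  = 1.80, 8_000  # Variant A
rpv_cg, visitors_cg = 1.50, 2_000  # Control Group

revenue_uplift = (rpv_a - rpv_cg) * visitors_a                            # 2400.0
revenue_uplift_potential = (rpv_a - rpv_cg) * (visitors_a + visitors_cg)  # 3000.0

def pct_uplift(variant, control):
    """Percentage difference between Variant A and the Control Group."""
    return (variant - control) / control * 100

rpv_uplift_pct = pct_uplift(rpv_a, rpv_cg)  # 20.0 (%)
```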

Campaign metrics
There are two types of metrics in the evaluation: non-unique and unique. Non-unique metrics count the number of events that have occurred, while unique metrics count the number of customers who performed the action. Example: one customer opens the email three times - the non-unique open metric = 3, the unique open metric = 1.
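
The same distinction in code, with a made-up open-event log:

```python
# Made-up open events as (customer_id, action) pairs.
opens = [("c1", "open"), ("c1", "open"), ("c1", "open"), ("c2", "open")]

non_unique_opens = len(opens)                            # 4 events
unique_opens = len({customer for customer, _ in opens})  # 2 customers
```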

Ecommerce Benchmark for emailing metrics

  • Unique Delivery rate - 99% and above
  • Unique Open rate - 20% and above
  • Unique Click rate from opened - 15% and above

