# This guide will help you understand:

  • [What the use case does](🔗)

  • [How to set up the use case](🔗)

  • [How to further customize the use case](🔗)

  • [How to evaluate the dashboard](🔗)

This use case is part of Engagement’s **[Plug&Play initiative](🔗)**. Plug&Play offers impactful use cases like this one, which are ready out of the box and **require minimal setup on your side**. Learn how you can get Plug&Play use cases or contact your Customer Success Manager for more information.

# What this Use Case does and why we developed it

#### Problem

Many online sales are lost not because e-sellers fail to interest customers in their products, but because customers do not complete the last step - the purchase. In fact, the average e-commerce business loses over 75% of its online sales to cart abandonment. Thus, reducing cart abandonment can be an effective way of recapturing lost revenue.

#### Solution

An effective way to avoid such losses, and consequently increase conversion and RPV (revenue per visitor), is to remind customers of the forgotten items a few hours or days after they abandon the cart. Oliver Bonas, for instance, proved this strategy successful: an abandoned cart campaign increased revenue by 268%, campaign click-through rate by 197%, and campaign email open rate by 155%.

#### This use case

This is what can be achieved with the ‘Abandoned Cart Flow with Product Personalization’ use case. This campaign automatically sends a personalized email to customers who have left the e-store without ordering the items added to their cart. Such reminders bring the customers’ attention back to the buying process and increase the chances of them finalizing the purchase.

**What is included in this use case:**

  • Working Automated Email Scenario

  • Prebuilt email template with personalization block

  • Custom Evaluation Dashboard

# 1. Set up the use case

### (1) Check the requirements

The following event tracking and data are required:

  • **purchase**

  • **view_item** with commonly named attributes

  • **cart_update** with commonly named attributes - tracked action “add”

  • **updated product catalog**

  • **page_visit** with location containing _Abandoned%20cart_ (optional)

Address any discrepancies if needed.

If your data is tracked differently than described above, make sure that _[Data Manager > Data Mapping](🔗)_ is set up for your project. This enables you to map crucial data to your project's naming conventions. For more information, see the article about [cloning](🔗).
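For orientation, a _cart_update_ event with commonly named attributes might look like the sketch below. All attribute names and values here are illustrative assumptions, not your project's exact schema; compare them against your actual tracking and map any differences via Data Mapping.

```json
{
  "event_type": "cart_update",
  "properties": {
    "action": "add",
    "product_id": "SKU-12345",
    "title": "Linen shirt",
    "price": 39.9,
    "quantity": 1,
    "total_price": 39.9,
    "total_quantity": 1
  }
}
```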

### (2) Adjust the created assets and scenarios

#### 1. Customize the scenario

Open the **‘Abandoned Cart Flow’** scenario and adjust the conditions:

  • Update the [**consents**](🔗) according to your project

  • Check if the events and properties used in the conditions match your project, e.g. the purchase event has the **status ‘successful’**

  • (optional) Adjust the business logic by **adding/removing conditions**

  • (optional) Adjust the [**A/B test**](🔗)


Learn more about connecting with your customers via Scenarios in our [guide](🔗).

#### 2. Update the email design

There are **two email nodes** in the scenario: **‘Abandoned cart 1st email’** and **‘Abandoned cart 2nd email’**. There are several changes that you need to make in both.


**_Design > Editor > Visual (parameters) editor:_** Customize the email so it reflects your brand and communication style as well as your project settings. This entails:

  • **Main fields** - e.g. Subject line, Sender email

  • **Visual aspect** - adjust the colors, logo, header, footer

  • **Content** - make sure to differentiate the content (e.g. subject and wording) of the first and the second email

  • **Jinja code showing products in the cart** - adjust if the product catalog in the project uses different naming conventions. Learn more about Jinja in this document.
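As a sketch of what such a personalization block typically looks like, the snippet below loops over the products the customer added to the cart and renders them as table rows. The variable and attribute names (`cart_items`, `image_url`, `title`, `price`, `currency`) are assumptions for illustration; adjust them to your project's tracking and catalog naming conventions.

```jinja
{# Illustrative sketch only - names are assumptions, adjust to your project. #}
{# 'cart_items' stands for the list of products from the customer's last     #}
{# cart_update event, enriched from the product catalog.                     #}
{% for item in cart_items %}
<tr>
  <td><img src="{{ item.image_url }}" alt="{{ item.title }}" width="120"></td>
  <td>{{ item.title }}</td>
  <td>{{ item.price }} {{ currency | default('EUR') }}</td>
</tr>
{% endfor %}
```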

Learn more about visually editing your email templates in our [guide](🔗).

**_Design > Settings_**: Select appropriate values:

  • [**Consent category**](🔗) - select an appropriate consent category

  • [**Frequency policy**](🔗) (optional) - select corresponding frequency policy based on your project settings

To see the email preview, go to _TEST_ and, in ‘_Preview for_’, set the filter to pick only customers with a _cart_update_ in the last 24 hours.

### (3) Test the scenario

It is highly important to test the use case before deploying it. Testing lets you make sure that everything is set up correctly before the campaign reaches real customers.

A **quick preview of the email** is available directly in the **email node**:

  • _TEST tab > Overview_ - go to ‘_Preview for_’ on the right and set the filter to pick only customers with cart_update in the last 24 hours

  • _TEST tab > Email previews_ - to access previews for different devices

  • _Send test email or preview link_ - enables you to send the email directly to a chosen email address


**To further test the scenario in “real” conditions, you can run it for your test customer profile:**

  • Create your test customer profile (or use an existing one) with an email address and valid consents. Go through the conditions in the scenario and make sure that your test customer has all the events necessary to pass the conditions and receive the emails.

  • In the scenario, make some changes to set up the test environment:

    • In the **‘tester?’ condition**, insert the email of your test customer

    • Connect the ‘tester?’ node **between the trigger and the first condition** to make sure that only the tester continues down the scenario

    • To make testing faster, you can **disconnect the wait nodes** and connect the email and condition nodes directly. Otherwise, the wait time should be taken into account.

    • To make sure that you do not fall into the Control Group, you can **disconnect the A/B test** and connect the email and condition nodes directly.


  • First _Save_ the scenario, then _Start_ it

  • Go to the website and **perform a cart update action** so that your test customer has a ‘_cart_update_’ event tracked.

  • Go to the _Evaluate_ tab of your scenario and, by hovering over the nodes, check whether your tester has successfully completed the journey and, consequently, whether you have received the emails.

  • After a successful test:

    • _Stop_ the scenario

    • **Revert** the scenario from the test version back to the original version, e.g. remove the ‘tester?’ node, reconnect the wait nodes and the A/B test

### (4) Run the scenario

Once the testing is over, click the ‘Start’ button to launch the scenario.

### (5) A/B test

An [A/B test](🔗) is necessary to **evaluate whether the use case is performing** and, most importantly, whether it is bringing extra revenue. We can draw conclusions from the A/B test once it reaches appropriate significance (99% or higher).

To achieve the desired level of significance faster, preferably opt for a 50/50 distribution. Once the significance is reached and the use case is showing positive uplift, you can:

  • Minimize the Control Group to 10% and continue running the A/B test so that you can check at any given moment that the uplifts are still positive.

  • Turn off the A/B test but perform regular check-ups, e.g. turn the A/B test back on after 3 months for the period needed to achieve significance, to make sure the use case is still bringing positive uplifts.


The **use case performance can change over time**, therefore we recommend regular checks instead of proving the value only at the beginning and then letting the use case run without further evaluation.

### (6) Evaluate on a regular basis

The use case comes with a predefined evaluation dashboard. Some adjustments might be necessary to correctly display data in your project.

**Adjustments to consider:**

  • **campaign target 1st email** - event segmentation, check if the _campaign_name_ and _action_id_ correspond to the 1st email

  • **campaign target 2nd email** - event segmentation, check if the _campaign_name_ and _action_id_ correspond to the 2nd email

  • **Emailing metrics and Delivery Timeline** - reports; check if the _campaign_id_ corresponds to the scenario

  • _purchase_campaign target 1st email_, _purchase_split target 1st email_, _purchase_campaign target 2nd email_, _purchase_split target 2nd email_ - event segmentations; by default the **attribution is 48h**, change it here if necessary. Also specify the purchase event, e.g. status ‘successful’, if relevant.

  • **Emailing metrics Evolution 1st email** (last 90 days), **Emailing metrics Evolution 2nd email** (last 90 days) - reports, adjust chart display to only show the unique rate metrics

  • **Metrics** displayed at the top of the dashboard (**Revenue Uplift**, **Revenue**, **Unique Conversion Rate**) - personalise for the project's needs, e.g. add the currency, set a target value, or set a comparison with historical data

Check the evaluation dashboard regularly to spot any need for improvements as soon as possible.

If you decide to modify the scenario (e.g. use more variants in the A/B test), adjust the corresponding reports and metrics in this initiative so that they show correct data.

# 2. Suggestions for custom modifications

While this use case is preset to the above specification, you can modify it further to extract even more value from it. We suggest the following modifications, but feel free to be creative and think of your own improvements:

  • **Try different A/B tests** - run more variants at the same time and test different designs, numbers of products displayed, etc.

  • **Enhance content with specific personalization** - for example, if the customer is identified, you can address them in the subject line by their first name using Jinja

  • **Redesign the email** - by changing the subject, copywriting, emphasizing the CTA (call to action) more by placing it at both the top and the bottom of the email, etc.

  • **Test different sending times** - for example 1h after cart_update (Variant A) vs 4h after cart_update (Variant B)

  • **Enhance the email content with recommended products** - learn more about recommendations here

  • **Offer follow-up discounts for customers** - if a customer is not responding to your emails, you may add a third email with a voucher code in the scenario.
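For the first-name personalization suggested above, a minimal Jinja sketch for the subject line could look like the following. It assumes the attribute is stored as `first_name` on the customer profile; your project may use a different attribute name.

```jinja
{# Fall back to a generic subject when the first name is unknown. #}
{% if customer.first_name %}{{ customer.first_name }}, you left something in your cart!{% else %}You left something in your cart!{% endif %}
```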

# 3. Evaluate

**Key metrics calculations** The attribution model used for revenue attribution takes into account all purchases made within 48 hours of an email open or click. This time frame is called the _attribution window_. For example, a purchase made 30 hours after the customer clicks the email is attributed to the campaign, while a purchase made 60 hours after is not.

Go to our **[Evaluation Dictionary article](🔗)** to understand different metrics in your evaluation dashboard! The article covers [Benefit/Revenue calculations](🔗), [Uplift calculations](🔗) and [Ecommerce benchmarks for emailing metrics](🔗) relevant for this Use Case.