Abandoned Browse Flow

This guide will help you understand what this use case does, how to set it up and test it, and how to evaluate its performance.

This use case is part of Engagement’s Plug&Play initiative. Plug&Play offers impactful use cases like this one that work out of the box and require minimal setup on your side. Learn how you can get Plug&Play use cases, or contact your Customer Success Manager for more information.

What this use case does and why we developed it

Problem

When a customer views multiple products, they are likely interested in purchasing at least some of them. Yet many sales are lost when customers leave your site without making a purchase.

Solution

To tackle this behavior, when a visitor leaves the website after viewing more than two items without making a purchase, they receive an email reminding them of the products they viewed. By engaging with customers after they fail to make a purchase, we bring their attention back to the items they were initially interested in.

This use case

To help marketers engage with the customer at the right point in their shopping journey, we developed the Abandoned Browse Flow as a ready-to-use solution for this essential bottom-funnel marketing tactic.

This use case focuses on customers who looked at multiple products but haven’t made a purchase, which makes them likely to be interested in purchasing. Pointing out the viewed items is an effective way to remind customers of their previous browsing and subsequently get them to purchase.

Thanks to the ‘Abandoned Browse Flow’, the marketer can motivate customers to go through with their intended purchase. This leads to an increase in the conversion rate, overall RPV, and, ultimately, the client’s revenue.

What is included in this use case:

  • Working Automated Scenario
  • Custom Evaluation Dashboard

1. Setting up the use case

(1) Check the prerequisites

The following customer attribute tracking is required:

  • email

The following event tracking is required:

  • ‘purchase_item’ event with event attributes:
    • ‘purchase_status’ (success)
    • ‘total_price’
  • ‘view_item’ event with the event attribute:
    • ‘product_id’
  • ‘consent’ event with the attribute:
    • category = newsletter

Regular product catalog updates are required:

  • Items identified by a product_id that is the same as the product_id used in the view_item event
  • Item attributes [data type]:
    • item_id
    • image [string]
    • price [number] - current price of the product
    • title [string]
    • url [string]

Address any discrepancies if needed.
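
For reference, the required events and a catalog item could look like the following minimal sketch. All values are made up, and the attribute names must match your own tracking and catalog setup.

```python
# Illustrative shapes of the required events and catalog item.
# All values are made up; attribute names must match your own tracking.
customer = {"email": "test@example.com"}

purchase_item = {"purchase_status": "success", "total_price": 59.90}
view_item = {"product_id": "SKU-123"}  # same product_id as in the catalog
consent = {"category": "newsletter"}

catalog_item = {
    "item_id": "SKU-123",  # same identifier as view_item's product_id
    "image": "https://example.com/img/sku-123.png",
    "price": 59.90,        # current price of the product
    "title": "Example product",
    "url": "https://example.com/p/sku-123",
}
```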

(2) Adjust the assets and scenarios

1. Customizing the nodes within the scenario

Check the nodes within the scenario and ensure that all the events and customer attributes match your tracking. Adjust them if needed.

2. Customizing the email

Open the email node within the scenario and prepare it accordingly. Make sure that both the ‘Design’ and ‘Settings’ tabs fit your needs.

Ensure that the email includes the key element of this use case: the ‘last viewed items’ from the aggregate. Reference this aggregate so that the customer’s viewed items are displayed correctly.

Design > Editor > Visual (parameters) editor:
Customize the email to reflect your brand, communication style, and your project settings.

  • Main fields - subject line, sender email, etc.
  • Visual aspect - adjust colors, logo, header, footer, etc.
  • Content
  • Jinja code showing viewed_products - adjust it if the product catalog in the project uses different naming conventions (see the sketch below)
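
As a rough illustration of the kind of Jinja logic involved, the sketch below renders a minimal viewed-products block using the jinja2 library so it can be run standalone; in Engagement, the equivalent template lives inside the email editor. The variable name viewed_products and the item fields follow the catalog attributes from the prerequisites and are assumptions to adjust to your project's naming conventions.

```python
# Minimal standalone sketch of a viewed-products Jinja block, rendered with
# the jinja2 library. Names like 'viewed_products' are assumptions; adjust
# them to your project's catalog and aggregate naming.
from jinja2 import Template

template = Template("""
{% for item in viewed_products %}
  <a href="{{ item.url }}">
    <img src="{{ item.image }}" alt="{{ item.title }}">
    <p>{{ item.title }} - {{ item.price }}</p>
  </a>
{% endfor %}
""")

# Made-up item matching the catalog attributes from the prerequisites.
print(template.render(viewed_products=[{
    "title": "Example product",
    "price": 59.90,
    "image": "https://example.com/img/sku-123.png",
    "url": "https://example.com/p/sku-123",
}]))
```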

Learn how Bloomreach Engagement enables you to easily send personalized email campaigns in our guide.

Design > Settings
Select appropriate values:

  • Consent category - select an appropriate consent category
  • (optional) Frequency policy - select the corresponding frequency policy based on your project settings

📘

Note:

To see the email preview, go to the ‘TEST’ tab and, in ‘Preview for’, set the filter to pick only customers with more than two distinct view_item events in the last 24 hours.

(3) Test the email

It is crucial to test the use case before deploying it. Testing lets you make sure that everything is set up correctly before the campaign reaches actual customers.

A quick preview of the email is available directly in the email node:

  • ‘TEST’ tab > Overview - go to ‘Preview for’ on the right and set the filter to pick only customers with more than two distinct view_item events in the last 24 hours.
  • ‘TEST’ tab > Email previews - to access previews for different devices.
  • ‘Send test email or preview link’ - enables you to send the email directly to the chosen email address.

To further test the scenario in “real” conditions, you can run it for your test customer profile:

  1. Create your customer test profile (or use an existing one) with an email address and valid consents. Go through the conditions in the scenario and make sure that your test customer has all the events necessary to pass the conditions and receive the email.
  2. In the scenario, make some changes to set up the test environment:
    a. In the ‘tester?’ condition, insert the email of your test customer.
    b. Connect the ‘tester?’ node between the trigger and the first condition to ensure that only the tester will continue down the scenario.
    c. To make the testing faster, you can disconnect the wait nodes and just connect the email and condition nodes directly. Otherwise, the actions following the wait node are initiated only after the set time period passes.
    d. To make sure that you do not fall into the Control Group, you can disconnect the A/B test and connect the email and condition nodes directly.
  3. First, ‘Save’ the scenario, then ‘Start’ it.
  4. Go to the website and open more than two different product pages so that your test customer has the required ‘view_item’ events tracked. Close the website and wait until ‘session_end’ is tracked.
  5. Go to the ‘Design’ tab of your scenario. Hover over the nodes to see whether your tester has successfully completed the journey and, consequently, whether you have received the email.
  6. After a successful test:
    a. ‘Stop’ the scenario.
    b. Revert the scenario from the test version back to the original version, e.g. remove the ‘tester?’ node and reconnect the wait nodes and the A/B test.

We recommend testing on different browsers and devices.

(4) Run the scenario

Once the testing is over, make sure that all the settings are properly configured, then click the ‘Start’ button to launch the scenario.

(5) A/B test

An A/B test is necessary to evaluate whether the use case is performing and, most importantly, whether it is bringing extra revenue. You can draw conclusions from the A/B test once it reaches an appropriate significance level (95% or higher); a rough sketch of such a check follows the list below.
Read more about A/B tests and their significance in our public documentation. To achieve the desired significance level faster, opt for a 50/50 distribution.
Once the significance is reached and the use case is showing positive uplift, you can:

  • Minimize the Control Group to 10% and continue running the A/B test so that you can check at any given moment that the uplift is still positive.
  • Turn off the A/B test but perform regular check-ups. For example, turn the A/B test back on for a three-month period to ensure that the use case is still bringing a positive uplift.
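
Engagement computes the significance for you in the evaluation dashboard, but as a rough illustration of what such a check involves, here is a minimal sketch of a standard two-proportion z-test. The exact method used in the product may differ, and all numbers below are made up.

```python
# Generic two-proportion z-test sketch, for illustration only; the product
# may compute significance differently. All numbers are made up.
from math import sqrt, erf

def confidence(buyers_a: int, visitors_a: int, buyers_c: int, visitors_c: int) -> float:
    """Two-sided confidence that Variant A and Control convert differently."""
    p_a, p_c = buyers_a / visitors_a, buyers_c / visitors_c
    p = (buyers_a + buyers_c) / (visitors_a + visitors_c)        # pooled rate
    se = sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_c))   # standard error
    z = (p_a - p_c) / se                                         # z-statistic
    return erf(abs(z) / sqrt(2))                                 # normal approximation

# 50/50 split: 420 buyers of 10,000 vs 350 of 10,000 -> ~99%, above 95%.
print(f"{confidence(420, 10_000, 350, 10_000):.1%}")
```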

🚧

Remember:

The use case performance can change over time; therefore, we recommend checking it regularly instead of proving its value only initially and then letting the use case run without further evaluation.

(6) Evaluate on a regular basis

The use case comes with a predefined evaluation dashboard. Some adjustments might be necessary to display the data in your project correctly.

Adjustments to consider:

  • campaign target - an event segmentation; check that the campaign_id matches your scenario
  • purchase_campaign target, purchase_split target - event segmentations; the attribution window is 48 hours by default, change it here if necessary. Also, specify the purchase event (e.g. status ‘successful’) if relevant.

Check the evaluation dashboard regularly to spot any need for improvements as soon as possible.

📘

Note:

If you decide to modify the scenario (e.g. use the A/B test with Variants), some reports and metrics in this initiative need to be adjusted to show correct data.

2. Suggestions for custom modifications

  • Play with email send times to find the ideal time to send the reminder email. Different businesses may benefit from different wait times before reminding customers of their viewed items.
  • Offer a follow-up discount to customers who have never purchased to incentivize them to make the purchase. Set up an A/B test to measure whether the discount performs better.
  • Send a second reminder email only to customers who have not opened/clicked on the first email and have not returned to the website.
  • Test different email designs if you are not satisfied with the results of email metrics.
  • Send an SMS to increase the number of channels the customer engages with to increase the probability of purchasing.

3. Evaluation

The dictionary below is helpful for understanding metrics in the evaluation dashboard and their calculation. The most important metrics are marked in bold.

Key metrics calculations
The attribution model used for revenue attribution takes into consideration all purchases made within:

  • 48 hours of an email open or click

This time frame is called the attribution window.
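
As a minimal illustration of the attribution window, the helper below is hypothetical and not part of the product; it only shows which purchases would count.

```python
# Hypothetical helper illustrating the 48-hour attribution window.
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(hours=48)

def is_attributed(engagement: datetime, purchase: datetime) -> bool:
    """True if the purchase falls within 48 h of the email open or click."""
    return engagement <= purchase <= engagement + ATTRIBUTION_WINDOW

opened = datetime(2024, 5, 1, 10, 0)
print(is_attributed(opened, datetime(2024, 5, 2, 9, 0)))   # True: 23 h later
print(is_attributed(opened, datetime(2024, 5, 3, 12, 0)))  # False: 50 h later
```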

Benefit/Revenue calculations

Impressions - count of all web layer show or click actions
Visitors - count of all customers who have seen the web layer or clicked on it
Frequency - average number of impressions (e.g. email open, banner show, etc.) per visitor

  • Frequency = Impressions / Visitors

Revenue - total value of all purchases made by customers impacted by the campaign (e.g. opened or clicked on the email) that occurred within the attribution window.

Purchases - all purchases made by customers impacted by the campaign (e.g. opened or clicked on the email) that occurred within the attribution window.

Buyers - all customers impacted by the campaign (e.g. opened or clicked on the email) who made a purchase within the attribution window.

Conversion rate (CR) - Percentage of impressions that converted into a purchase within the attribution window

  • Conversion rate = count of all purchases / count of all campaign impressions

Unique Conversion rate (UCR) - The proportion of customers who have seen the campaign and were converted into a purchase within the attribution window

  • Unique Conversion rate = count of all purchases / unique customers with impressions

Average Order Value (AOV) - Average revenue from one purchase/order

  • AOV = total revenue / total number of purchases

Revenue Per Visitor (RPV) - Average revenue per customer that has an impression (e.g. open email, show banner etc.)

  • RPV = total revenue / all visitors
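
To make the formulas above concrete, here is a minimal sketch with made-up numbers:

```python
# Made-up campaign numbers to illustrate the metric formulas above.
impressions = 12_000   # all email opens/clicks counted as impressions
visitors = 8_000       # unique customers with an impression
purchases = 240        # purchases within the attribution window
revenue = 14_400.0     # total value of attributed purchases

frequency = impressions / visitors   # 1.5 impressions per visitor
cr = purchases / impressions         # 2.0% conversion rate
ucr = purchases / visitors           # 3.0% unique conversion rate
aov = revenue / purchases            # 60.0 average order value
rpv = revenue / visitors             # 1.8 revenue per visitor
print(f"Frequency={frequency:.2f}, CR={cr:.1%}, UCR={ucr:.1%}, AOV={aov:.2f}, RPV={rpv:.2f}")
```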

Uplift calculations
Uplift represents the difference in performance between Variant A and the Control Group. If the uplift value is positive, Variant A is the winner, and the use case should be maintained. If the uplift is negative, the Control Group performs better than the Variant, and the use case hypothesis should be adjusted.

Uplift results should be taken into consideration together with the statistical significance.
The results are significant if they reach more than 98%. The significance value can be found in the Evaluation dashboard, specifically under Conversion funnel > Confidence.

Revenue Uplift - determines the absolute financial outcome of your Exponea campaigns. It is defined as the extra revenue brought by the Variant compared to the Control Group.

  • Revenue Uplift = [ RPV(Variant A) - RPV(Control Group) ] x Visitors(Variant A)

Revenue Uplift Potential - determines the theoretical financial outcome of your Exponea campaign if Variant A were deployed to all customers (Variant A and Control Group). This outcome is an extrapolation of known data, not a guaranteed number.

  • Revenue Uplift Potential = [ RPV(Variant A) - RPV(Control Group) ] x Visitors(Variant A + Control Group)

Unique Conversion rate Uplift % - Percentage difference between UCR (Variant A) and UCR (Control Group).

  • UCR uplift = [ UCR(Variant A) - UCR(Control Group) ] / UCR(Control Group) x 100

AOV Uplift % - Percentage difference between AOV (Variant A) and AOV (Control Group).

  • AOV uplift = [ AOV(Variant A) - AOV(Control Group) ] / AOV(Control Group) x 100

RPV Uplift % - Percentage difference between RPV (Variant A) and RPV (Control Group).

  • RPV uplift = [ RPV(Variant A) - RPV(Control Group) ] / RPV(Control Group) x 100
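
A worked example of the uplift formulas, again with made-up numbers:

```python
# Made-up A/B test results to illustrate the uplift formulas above.
rpv_a, visitors_a = 1.80, 8_000   # Variant A
rpv_c, visitors_c = 1.50, 2_000   # Control Group

revenue_uplift = (rpv_a - rpv_c) * visitors_a                           # 2400.0
revenue_uplift_potential = (rpv_a - rpv_c) * (visitors_a + visitors_c)  # 3000.0
rpv_uplift_pct = (rpv_a - rpv_c) / rpv_c * 100                          # +20.0%
print(revenue_uplift, revenue_uplift_potential, rpv_uplift_pct)
```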

Emailing and Web layer metrics
There are two types of metrics in the evaluation: non-unique and unique. Non-unique metrics count the number of events that occurred, while unique metrics count the number of customers who performed the action. Example: one customer opens the email three times; the non-unique open metric = 3, the unique open metric = 1.

Ecommerce Benchmark for emailing metrics
Unique Delivery rate - 99% and above
Unique Open rate - 20% and above
Unique Click rate from opened - 15% and above

Ecommerce Benchmark for web layer metrics
Unique Click rate from show - 1.5% to 4% and above, the upper range mainly for banners with vouchers

