Abandoned Browse Flow

This use case is part of Engagement’s Plug&Play initiative. Plug&Play offers impactful use cases like this one, which are ready out of the box and require minimum setup on your side. Learn how you can get Plug&Play use cases or contact your Customer Success Manager for more information.

What this Use Case does and why we developed it


When a customer views multiple products, they are likely interested in purchasing at least some of them. Yet many sales are lost to customers leaving your site without making a purchase!


To tackle this behavior, the Abandoned Browse Flow sends an email to visitors who leave your website after viewing more than two items without making a purchase, reminding them of the products they viewed. By engaging with customers after they fail to make a purchase, we bring their attention back to the items they were initially interested in.

To help marketers engage with the customer at the right point in their shopping journey, we developed the Abandoned Browse Flow as a ready-to-use solution for this essential bottom-funnel marketing tactic.

The use case focuses on customers who have viewed multiple products but have not made a purchase, which makes them likely prospective buyers. Pointing out the viewed items is a perfect way to remind the customer of their previous browsing and subsequently nudge them toward a purchase.

Thanks to the ‘Abandoned Browse Flow’, the marketer can motivate customers to complete their intended purchase. This leads to an increase in conversion rates, overall RPV, and, ultimately, the client’s revenue.

What is included in this use case:

  • Working Automated Scenario
  • Custom Evaluation Dashboard

1. Set up the Use Case

(1) Check the requirements

The following customer attribute tracking is required:

  • email

The following event tracking is required:

  • ‘purchase_item’ event with event attributes:
    • ‘purchase_status’ (success)
    • ‘total_price’
  • ‘view_item’ with event attributes:
    • ‘product_id’
  • ‘consent’ with the attribute:
    • category = newsletter

Regular product catalog updates are required:

  • Items identified by ‘product_id’, which must match the ‘product_id’ used in the ‘view_item’ event
  • Item attributes [data type]:
    • ‘item_id’
    • ‘image’ [string]
    • ‘price’ [number] - the current price of the product
    • ‘title’ [string]
    • ‘url’ [string]

Address any discrepancies before proceeding.
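To make the required shapes concrete, the tracked events and the catalog item might look like the following illustrative payloads. Only the attribute names come from the requirements above; the wrapper structure and all values are hypothetical examples, not the SDK's actual payload format.

```python
# Illustrative shapes for the required tracking (attribute names from the
# requirements above; values and the wrapper structure are examples only).
purchase_item = {
    "event_type": "purchase_item",
    "properties": {"purchase_status": "success", "total_price": 59.90},
}
view_item = {
    "event_type": "view_item",
    "properties": {"product_id": "SKU-123"},
}
consent = {
    "event_type": "consent",
    "properties": {"category": "newsletter"},
}

# A catalog item keyed by the same product_id used in the view_item event:
catalog_item = {
    "item_id": "SKU-123",
    "image": "https://example.com/img/sku-123.jpg",
    "price": 59.90,
    "title": "Example product",
    "url": "https://example.com/p/sku-123",
}
```

The key constraint to verify in your project is the last one: the catalog’s item identifier must match the ‘product_id’ tracked on ‘view_item’, otherwise the email cannot look up the viewed products.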

(2) Adjust the assets and scenarios

1. Customizing the nodes within the scenario

Check the nodes within the scenario and ensure that all the events and customer attributes match your tracking. Adjust them if necessary.

2. Customizing the email

Open the email node within the scenario and prepare it for your project. Make sure both the ‘Design’ and ‘Settings’ tabs fit your needs.

In particular, ensure that the email includes the key element of this use case: the ‘last viewed items’ from the aggregate. Reference this aggregate so that the customer’s viewed items are displayed correctly.
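Conceptually, a ‘last viewed items’ aggregate collapses the customer’s ‘view_item’ history into the most recent distinct products. A minimal sketch of that logic in Python (the function name, the event shape as timestamp/product_id pairs, and the limit of 4 items are all assumptions for illustration, not the aggregate’s actual definition):

```python
def last_viewed_items(view_events, limit=4):
    """Return the most recently viewed distinct product_ids, newest first.

    view_events: list of (timestamp, product_id) tuples from view_item events.
    """
    seen = set()
    result = []
    # Walk events newest-first; keep each product only the first time we see it.
    for _, product_id in sorted(view_events, key=lambda e: e[0], reverse=True):
        if product_id not in seen:
            seen.add(product_id)
            result.append(product_id)
        if len(result) == limit:
            break
    return result

events = [(1, "A"), (2, "B"), (3, "A"), (4, "C")]
print(last_viewed_items(events))  # ['C', 'A', 'B']
```

Note that product "A" appears only once even though it was viewed twice; the Jinja code in the email template then renders each returned product_id against the catalog.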

Design > Editor > Visual (parameters) editor:
Customize the email to reflect your brand, communication style, and your project settings.

  • Main fields - Subject line, Sender email etc.
  • Visual aspect - adjust colors, logo, header, footer etc.
  • Content
  • Jinja code showing viewed_products - adjust if the product catalog in the project uses different naming conventions

Learn how Bloomreach Engagement enables you to easily send personalized email campaigns in our guide.

Design > Settings
Select appropriate values:

  • Consent category - select an appropriate consent category
  • (optional) Frequency policy - select the corresponding frequency policy based on your project settings



To see the email preview, go to ‘TEST’ and, in ‘Preview for’, set the filter to pick only customers with more than two distinct ‘view_item’ events in the last 24 hours.

(3) Test the email

It is crucial to test the use case before deploying it. Testing lets you make sure that everything is set up correctly before the campaign reaches actual customers.

A quick preview of the email is available directly in the email node:

  • ‘TEST’ tab > Overview - go to ‘Preview for’ on the right and set the filter to pick only customers with more than two ‘view_item’ events in the last 24 hours.
  • ‘TEST’ tab > Email previews - to access previews for different devices.
  • ‘Send test email or preview link’ - enables you to send the email directly to the chosen email address.
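The preview filter above can be expressed as a simple predicate: does the customer have at least three ‘view_item’ events inside a 24-hour window? A sketch of that check (the function name and the event representation as plain timestamps are assumptions for illustration):

```python
from datetime import datetime, timedelta

def qualifies_for_preview(view_event_times, now, window_hours=24, min_views=3):
    """True if the customer has more than two (i.e. at least three)
    view_item events within the last `window_hours`."""
    cutoff = now - timedelta(hours=window_hours)
    recent = [ts for ts in view_event_times if ts >= cutoff]
    return len(recent) >= min_views

now = datetime(2024, 1, 2, 12, 0)
views = [now - timedelta(hours=h) for h in (1, 3, 5)]   # three recent views
print(qualifies_for_preview(views, now))  # True
```

A customer with only one or two recent views would not pass this filter and so would not appear in the preview.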

To further test the scenario in “real” conditions, you can run it for your test customer profile:

  1. Create your customer test profile (or use an existing one) with email and valid consents. Go through the conditions in the scenario and make sure that your test customer has all the events necessary to pass the conditions and receive emails.
  2. In the scenario, make some changes to set up the test environment:
    a. In the ‘tester?’ condition, insert the email of your test customer.
    b. Connect the ‘tester?’ node between the trigger and the first condition to ensure that only the tester will continue down the scenario.
    c. To make the testing faster, you can disconnect the wait nodes and just connect the email and condition nodes directly. Otherwise, the actions following the wait node are initiated only after the set time period passes.
    d. To make sure that you do not fall into the Control Group, you can disconnect the A/B test node and connect the email and condition nodes directly.
  3. First, ‘Save’ the scenario, then ‘Start’ it.
  4. Go to the website and open the same product page twice so that a ‘view_item’ event is tracked for your test customer. Close the website and wait until ‘session_end’ is tracked.
  5. Go to the ‘Design’ tab of your scenario. Hover over the nodes to see whether your tester has successfully completed the journey and whether you have received the email.
  6. After a successful test:
    a. ‘Stop’ the scenario.
    b. Revert the scenario from the test version back to the original, i.e. remove the ‘tester?’ node and reconnect the wait nodes and the A/B test.

We recommend testing on different browsers and devices.

(4) Run the scenario

Once the testing is over and you have made sure that all the settings are properly configured, click the ‘Start’ button to launch the scenario.

(5) A/B test

An A/B test is necessary to evaluate whether the use case is performing and, most importantly, whether it is bringing extra revenue. We can draw conclusions from the A/B test once it reaches an appropriate significance level (95% or higher).
Read more about A/B tests and their significance in our public documentation. To reach the desired significance level faster, opt for a 50/50 distribution.
Once the significance is reached and the use case is showing positive uplift, you can:

  • Minimize the Control Group to 10% and continue running the A/B test to check at any given moment that the uplifts are still positive.
  • Turn off the A/B test but perform regular check-ups. For example, turn the A/B test back on for a 3-month period to verify that the use case is still bringing positive uplifts.
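The 95% threshold mentioned above is a standard two-sided significance test on the difference between two conversion rates. A minimal sketch of that calculation, assuming a simple two-proportion z-test (the function name and the illustrative numbers are hypothetical, and Engagement's built-in evaluation may use a different method):

```python
from math import erf, sqrt

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for conversions.

    Returns (uplift, confidence): relative uplift of variant B over A,
    and the two-sided confidence that the difference is real.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    confidence = erf(abs(z) / sqrt(2))  # two-sided confidence level
    uplift = (p_b - p_a) / p_a
    return uplift, confidence

# Hypothetical numbers: control 100/5000 conversions, variant 150/5000.
uplift, confidence = ab_significance(100, 5000, 150, 5000)
```

With these illustrative numbers the variant shows a 50% relative uplift at well above 95% confidence; with smaller samples or smaller differences, you would keep the test running until the threshold is reached.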



The use case performance can change over time; therefore, we recommend regular checking instead of proving the value only initially and then letting the use case run without further evaluation.

(6) Evaluate on a regular basis

The use case comes with a predefined evaluation dashboard. There might be some adjustments necessary to display data in your projects correctly.

Adjustments to consider:

  • campaign target - event segmentation; check that the ‘campaign_id’ matches your scenario
  • purchase_campaign target, purchase_split target - event segmentations; by default the attribution window is 48 h - change it here if necessary. Also, specify the purchase event (e.g. status ‘successful’) if relevant.

Check the evaluation dashboard regularly to spot any need for improvements as soon as possible.



If you decide to modify the scenario (e.g. use the A/B test with Variants), some reports and metrics in this initiative need to be adjusted to show correct data.

2. Suggestions for custom modifications

  • Play with email send times to find the ideal time to send the reminder email. Different businesses may benefit from different wait times before reminding customers of their viewed items.
  • Offer a follow-up discount to customers who have never purchased to incentivize them to make the purchase. Set up an A/B test to measure whether the discount performs better.
  • Send a second reminder email only to customers who have not opened/clicked on the first email and have not returned to the website.
  • Test different email designs if you are not satisfied with the results of email metrics.
  • Send an SMS to increase the number of channels the customer engages with to increase the probability of purchasing.

3. Evaluate

Key metrics calculations
The attribution model used for revenue attribution takes into consideration all the purchases made within 48 hours of an email open/click. This time frame is called the attribution window.
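The attribution-window rule above can be sketched as follows. This is a simplified illustration, assuming purchases are attributed when they fall within 48 hours after any open/click event; the function name and data shapes are hypothetical, and the dashboard's actual attribution logic may differ in detail:

```python
from datetime import datetime, timedelta

def attributed_revenue(engagement_times, purchases, window_hours=48):
    """Sum revenue from purchases made within `window_hours`
    after any email open/click (simplified attribution model)."""
    window = timedelta(hours=window_hours)
    total = 0.0
    for purchase_time, price in purchases:
        if any(e <= purchase_time <= e + window for e in engagement_times):
            total += price
    return total

opens = [datetime(2024, 1, 1, 10, 0)]
purchases = [
    (datetime(2024, 1, 2, 9, 0), 40.0),   # within 48 h -> attributed
    (datetime(2024, 1, 4, 9, 0), 25.0),   # outside the window -> ignored
]
print(attributed_revenue(opens, purchases))  # 40.0
```

Only the first purchase counts toward the campaign’s revenue here; the second happened after the 48-hour window closed.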

Go to our Evaluation Dictionary article to understand different metrics in your evaluation dashboard! The article covers Benefit/Revenue calculations, Uplift calculations and Ecommerce benchmarks relevant for this Use Case.

What’s next?