How do I run an A/B test?
How do I start?
While you can start in other areas of the Bloomreach Dashboard, we recommend starting at the Active Test Overview page (the Testing tab in the top navigation) for your first test. This page lists all the active tests your organization is running. If this is your organization's first test, the list is empty.
Start a new test
Start by clicking the Start New Test button in the upper right corner. The New Test Setup page opens where you can give your test a name and begin adding experiences.
Name your test
Enter a name for your test in the Test Name field.
Be specific when you name your test. The Active Test Overview page contains a list of all the tests running on your site. You're likely to have difficulty figuring out which test is for which set of experiences if you're faced with a long list of tests called test1, test2, my test, my other test, test1 for Bob, and so on.
After you name your test, determine the traffic you'd like to allocate to your test by using the slider under Test Traffic Allocation.
Next, click on Select Test Variants to find a rule you would like to create variants for. You can use ranking rules or redirect rules to test. To help you distinguish among the different types of rules available as variants, look for descriptors above the rule names, such as redirect or category page. You can also click the heading names on the rule variant table to sort variants.
When you finish selecting variants, you'll be taken to a menu where you can select which variant(s) to include in your test.
You must select at least 2 variants and can select up to 4 variants for a test. To include a variant, click the toggle switch next to it. You can also add notes for each variant.
Once you've selected the variants you want to include for your test, click the Done button in the lower left corner of the popup. The Bloomreach Dashboard returns you to the Test Setup page.
You can preview the changes to each variant individually by clicking the Preview button in its test bucket, or preview the variants side by side by clicking the Preview button at the top. Selecting the Show Changes checkbox in the preview annotates the products with information about how the changes affect their ranking. For example, in the screenshot below, the preview of Test Bucket A shows that one product's ranking improved by 4 places and several products' rankings worsened by 1 position each. For more information, see the Previewing changes section.
Start the test
To start the test, click the Activate Test button in the upper right corner of the New Test Setup page. Or, you can click Save As Draft and come back to edit the test later. As soon as you start the test, your site's traffic is automatically assigned to your test's experiences. Individual visitors are randomly assigned to the different experiences in the proportions you configured during test setup.
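To make the assignment behavior concrete, here is a minimal sketch of weighted random bucketing. This is an illustration only, not Bloomreach's actual implementation; the function name and the seeding-by-visitor approach are assumptions for the example.

```python
import random

def assign_variant(visitor_id, variants, weights):
    """Randomly pick a variant for a visitor in proportion to the configured weights.

    Seeding the generator with the visitor ID is one common way to make the
    assignment "sticky": the same visitor always lands in the same experience.
    """
    rng = random.Random(visitor_id)
    return rng.choices(variants, weights=weights, k=1)[0]

# Example: a 50/30/20 traffic split across three experiences.
variants = ["A", "B", "C"]
weights = [50, 30, 20]

counts = {v: 0 for v in variants}
for visitor in range(10_000):
    counts[assign_variant(visitor, variants, weights)] += 1
# counts now roughly reflects the 50/30/20 split.
```

Because each visitor's assignment is derived from their ID, repeat visits see a consistent experience, which keeps the test data clean.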
From this point, the test takes care of itself. You don't need to do anything to it as long as you're happy with how it's running. Theoretically, you can even let it run in perpetuity.
Retrieve your results
You can check the results of your test any time after you start it. Return to the Active Test Overview page and click the View Stats button for your test. Here's where it really pays off to give your test a good name! It's easy to find your test in the list if it has a clear name.
Here's an example of what test results can look like.
End your test
When you're satisfied with your test results or want to retest with changes to the experiences' variants, you can end the test. Find your test in the Active Test Overview page, open it, and click the End Test button in the upper right corner.
A decision box opens, asking you to decide what action to take when finishing the test. You can select an experience to become the new default experience for all of your site's visitors, or you can turn off all experiences in the test. Click the Save button after you make your decision. Your test ends, and all visitors see the experience you selected.
Restart your test
You can restart a test from two places: the Ranking Rule table or the Testing tab.
From the Ranking Rule table, click the Variant icon for the search term or category whose test you'd like to restart. Select the variants you would like to include in your test, then click Setup New Test.
From the Testing tab, click on the test you'd like to restart, then click on the Restart Test button.
How do I know if the test results are valid?
The validity of the results depends on several factors. Here are some major ones to consider:
- How much traffic has your site experienced since the start of your test?
- Has the traffic been representative of this time of day, week, or other general time period?
- What's the traffic split among your test's experiences?
There are almost certainly other factors for you to consider when assessing your test's results. Aside from alerting you when your test has received too little traffic, we don't interfere with your own best practices for determining test validity and interpreting results.
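One common way to check whether an observed difference between two experiences is likely to be real is a two-proportion z-test on conversion rates. The sketch below uses made-up numbers purely for illustration; it is not a Bloomreach feature.

```python
from math import sqrt, erf

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (z, p_value) for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both experiences convert equally.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 520 conversions out of 10,000 visitors for experience A
# vs. 480 out of 10,000 for experience B.
z, p = two_proportion_z_test(520, 10_000, 480, 10_000)
```

With these numbers the p-value is well above 0.05, which suggests the difference could easily be noise; more traffic would be needed before declaring a winner.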
How do I modify a test?
If the test is not currently running, then you simply find it in the Active Test Overview page, open it, and make your changes. You can click the Save as Draft button any time to save your test before you finish creating or modifying it.
If the test is currently running, then you need to stop the test and create a new one. We require this to protect the reliability of your test data: your results should reflect a single test rather than an aggregation of different tests' data.
How do I delete a test?
You can't delete a test after you start it. Tests remain useful after they conclude, especially for diagnosing changes in metrics, such as a shift in a product's performance.