Tests Tab: How to A/B Test your Product Finder
In this article we describe how to set up an A/B test for your Product Finder.
Often you want to see the impact of a change to your Product Finder after it is implemented. In the Tests tab, you can release changes as an A/B test (or a multivariate test) and later compare how the two or more versions perform.
How to Create a Test
To create a new variant to test against your current Finder, click Create variant in the Tests tab of the Product Finder.
To set up the test, follow these steps:
1. Name the new variant. This is how the Product Finder variant will show up in reporting, so it is good to give it a descriptive name. In the example shown in the screenshot below, we want to test the effect of adding a question about what weather the user will run in, so we entered “weather question” in the Variant Name field.
2. Choose which existing Product Finder variant to base the new variant on.
3. Set the testing weight, which is the percentage of Product Finder sessions that will see this variant. To reach statistical significance the quickest, we recommend a 50/50 split (or an even split across all variants if you are testing more than two). A sketch of how this weighted split might work appears below.
After saving, you will see both variants, along with the percentage of total sessions each of them receives.
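Under the hood, the testing weight determines how each incoming session is randomly assigned to a variant. The platform handles this for you; the TypeScript below is only a minimal sketch of how weighted assignment could work. The Variant shape, the pickVariant function, and the variant names are hypothetical illustrations, not part of the product's API.

```typescript
// Hypothetical sketch of weighted variant assignment; the platform
// performs this internally, so none of these names are real APIs.
interface Variant {
  name: string;
  weight: number; // percentage of sessions; weights should sum to 100
}

function pickVariant(variants: Variant[]): Variant {
  // Draw a random point in [0, total weight) and walk the cumulative ranges.
  const total = variants.reduce((sum, v) => sum + v.weight, 0);
  let draw = Math.random() * total;
  for (const v of variants) {
    if (draw < v.weight) return v;
    draw -= v.weight;
  }
  return variants[variants.length - 1]; // guard against rounding error
}

// Example: the 50/50 split recommended above.
const variants: Variant[] = [
  { name: "original", weight: 50 },
  { name: "weather question", weight: 50 },
];
console.log(pickVariant(variants).name);
```

With equal weights, each variant receives roughly half of the sessions over time, which is what makes the even split reach significance fastest.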
How to Edit a Test
Once you have created a variant, you can edit any element of that variant, the same way you would edit a Product Finder without a test. You will see a dropdown in the top right corner that tells you which variant you are editing, as shown in the screenshot below.
You can edit the flow from here, any translations from the Translations tab, or anything on the Setup page. Any changes you make will only affect the variant that you are currently editing.
It's important to note that if you have already created two variants and want to make a change that affects the overall Finder, you need to make that change in each variant individually.
If you want fast results, we recommend running an evenly split A/B test (a 50/50 session split) rather than a multivariate test. Multivariate tests take longer to reach significance because each variant receives fewer sessions.
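To see why fewer sessions per variant slows things down, here is a back-of-the-envelope sketch, not the platform's statistics engine. It uses the standard two-proportion z-test sample-size approximation; the sessionsPerVariant function, the 10% baseline conversion rate, and the 2-point lift are assumed values for illustration only.

```typescript
// Rough sessions needed per variant to detect an absolute lift in
// conversion rate, using the two-proportion z-test approximation.
// All inputs below are illustrative assumptions.
function sessionsPerVariant(
  baseRate: number, // e.g. 0.10 = 10% baseline conversion (assumed)
  lift: number,     // absolute lift to detect, e.g. 0.02 (assumed)
  zAlpha = 1.96,    // 95% confidence
  zBeta = 0.84,     // 80% power
): number {
  const p1 = baseRate;
  const p2 = baseRate + lift;
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator * numerator) / (lift * lift));
}

const perVariant = sessionsPerVariant(0.1, 0.02);
// Each variant needs roughly the same sample, so total traffic
// grows with the number of variants being tested.
for (const k of [2, 3, 4]) {
  console.log(`${k} variants: about ${k * perVariant} total sessions`);
}
```

Because each additional variant needs roughly the same number of sessions, the total traffic required grows linearly with the variant count, which is why a two-variant 50/50 test reaches significance soonest.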