A/B testing on websites with little traffic

This article covers the challenges of A/B testing on websites with low traffic, our favorite tools for calculating sample size and reliability, and practical tips for running successful experiments on low-traffic sites.

For conversion optimization, you need visitors (traffic). The more traffic you have, the more experiments, such as A/B tests, you can run and learn from.

Many start-ups, niche businesses, and small companies receive fewer than 100K visitors and 500 conversions per month. This is generally considered low traffic.

Why is low traffic a problem for A/B testing?

Reliable test results depend on two factors:

1) Sample size (traffic levels);

2) Uplift (the difference in conversion rate between A and B).

Let's say you want to test two calls to action. You run the test for a few weeks and get 5000 visitors per variation:

Version A: 100 goals achieved/5000 visitors = 2% conversion rate
Version B: 250 goals achieved/5000 visitors = 5% conversion rate

In this example, version B performs better than version A, with a relative improvement of 150%.
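As a quick illustration, here's a minimal Python sketch that checks whether a difference like the one above is statistically significant. Using statsmodels is our own choice here; any online A/B significance calculator will give the same answer.

```python
# Minimal sketch: a two-proportion z-test on the example numbers above.
# Assumes Python with statsmodels installed; an online significance
# calculator gives the same answer.
from statsmodels.stats.proportion import proportions_ztest

conversions = [100, 250]   # goals achieved for version A and version B
visitors = [5000, 5000]    # visitors per variation

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# With a gap this large (2% vs 5%), p ends up far below 0.05, so the
# uplift is very unlikely to be due to chance.
```

With a smaller sample or a smaller uplift, the same test will often come back inconclusive; that is exactly the low-traffic problem.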

Having a lot of traffic leads to quicker results. With 100K+ visitors per month, you can obtain results within 1 or 2 weeks. With low traffic, it takes more time to achieve a sufficient sample size.

With a large difference in conversion rate, it's also possible to get reliable results from a smaller sample.

If the sample size is too small and the uplift is low, it can be statistically challenging to draw conclusions from your tests.

How can you calculate the significance of a test?

A test result is significant when you can statistically determine that the difference in conversion rate wasn't due to chance.

With a test duration calculator, such as the ones from Abtestguide or VWO, you can calculate whether your planned test can reach a statistically significant result within a certain number of days, and how many visitors you'll need to get there.
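For readers who prefer code, here's a minimal sketch of the same calculation those calculators perform, using statsmodels' power analysis. The baseline conversion rate, expected uplift, and daily traffic below are made-up example values, not recommendations.

```python
# Minimal sketch of what a test duration calculator does: estimate the
# required sample size per variation, then convert it to days based on
# your traffic. All numbers below are assumptions for illustration.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.02        # current conversion rate (2%)
expected = 0.025       # conversion rate you hope to reach (+25% relative)
daily_visitors = 300   # visitors per day on the tested page
variations = 2         # A and B

effect = abs(proportion_effectsize(baseline, expected))
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

total = n_per_variation * variations
print(f"~{n_per_variation:.0f} visitors per variation, "
      f"~{total / daily_visitors:.0f} days at {daily_visitors} visitors/day")
```

With these made-up numbers the answer comes out at several weeks of runtime, which is exactly why the calculators ask for your traffic level first.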

How many visitors are enough?

There's no magic number here, just mathematics; we use duration calculators like the ones mentioned above. The number of required visitors varies for each A/B test. A good rule of thumb is to have approximately 1000 visitors per week on the page you want to test, or about 50 conversions per week.
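To see what that rule of thumb means in practice, here's a rough back-of-envelope sketch (our own illustration, with assumed numbers) based on the standard sample-size approximation for comparing two proportions at 80% power and 5% significance.

```python
# Back-of-envelope check of the rule of thumb, using the standard
# approximation n ~= 16 * p * (1 - p) / delta^2 visitors per variation
# (80% power, 5% significance). All numbers below are assumptions.
def visitors_per_variation(baseline: float, relative_uplift: float) -> int:
    delta = baseline * relative_uplift   # absolute difference to detect
    p_bar = baseline + delta / 2         # average conversion rate of A and B
    return round(16 * p_bar * (1 - p_bar) / delta**2)

for uplift in (0.10, 0.25, 0.50):
    n = visitors_per_variation(0.05, uplift)   # 5% baseline conversion rate
    weeks = n / 500                            # 1000 visitors/week split over A and B
    print(f"{uplift:.0%} uplift: ~{n} visitors per variation (~{weeks:.0f} weeks)")
```

With roughly 1000 visitors and 50 conversions per week, only fairly large uplifts become detectable within a few weeks; smaller improvements simply take longer.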

Tips for A/B testing on sites with few visitors

1. Test on pages with the most traffic

On websites with few visitors, it's better to focus on the most visited pages. Take the top 3 pages and analyze the click-through rates to the next step in the funnel.

Also consider sitewide changes to the navigation, for example highlighting menu items to attract more attention, or changing menu labels to see which wording resonates better with your visitors.

2. Focus on micro-conversions

Take it step by step. Don't immediately check if a single change leads to more revenue, but see if the adjustment encourages visitors to click through to the page you want.

Testing micro-conversions that lead toward your primary conversion goal will help you get results more quickly. Think of low-commitment interactions on your site, such as filling out a form or creating an account.

Micro-conversions could include clicking through to a product detail page or clicking 'add to cart.'

3. Limit the number of variations

The more variations tested, the more visitors and time it takes to get reliable results.

Stick to running just a few A/B tests at a time until you have enough traffic.

If you're doing SEO A/B tests, try to limit the number of changes. If you simultaneously adjust headlines, calls to action, descriptions, and images, you won't be able to track which change had the most impact on your results.
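The effect of extra variations is easy to see with a quick sketch (again with assumed numbers): every variation needs its own share of the traffic, so the total sample and the runtime grow with the number of variations.

```python
# Minimal sketch: how test duration grows with the number of variations.
# n_per_variation and daily_visitors are assumptions for illustration.
n_per_variation = 5000   # required sample per variation (from a calculator)
daily_visitors = 400     # traffic on the tested page

for variations in (2, 3, 4):
    total = n_per_variation * variations
    days = total / daily_visitors
    print(f"{variations} variations: {total} visitors in total, ~{days:.0f} days")
```

On a low-traffic site, a plain A/B test with one variation against the original is usually the fastest route to a reliable answer.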

4. Extend your test duration if needed

Don't stop your tests after a few days.

To get a reliable picture, let the test run for at least 1-2 weeks. You can calculate the optimal test duration with the duration calculator from VWO.
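One more reason not to stop early: if you check the results every day and stop as soon as something looks significant, chance alone will hand you a "winner" surprisingly often. The simulation below is our own rough illustration with assumed numbers; it runs an A/A test (no real difference) and peeks daily.

```python
# Rough simulation (illustration only): an A/A test with no real
# difference, peeked at daily. Stopping as soon as p < 0.05 on any day
# produces far more false "winners" than evaluating once at the end.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)
daily_visitors, days, conv_rate, runs = 150, 14, 0.03, 2000

false_winners = 0
for _ in range(runs):
    a = rng.binomial(daily_visitors, conv_rate, size=days).cumsum()
    b = rng.binomial(daily_visitors, conv_rate, size=days).cumsum()
    visitors = daily_visitors * np.arange(1, days + 1)
    for day in range(2, days):  # start peeking on day 3
        _, p = proportions_ztest([a[day], b[day]], [visitors[day], visitors[day]])
        if p < 0.05:
            false_winners += 1
            break

print(f"False 'winners' when peeking daily: {false_winners / runs:.0%}")
```

Running the test for its full planned duration and evaluating it once keeps the false-positive rate close to the intended 5%.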

5. Combine quantitative and qualitative testing

Quantitative data can help answer questions like "How many people use this feature?" or "How often does this pattern occur?"

Qualitative data is suitable for sites with both high and low traffic and is a great complement to quantitative data.

User tests, session recordings, and surveys will surface clear UX problems. This type of analysis tries to answer questions like "Why are our users exhibiting this behavior?"

Always focus on your objectives, and make sure they are central when choosing methods.

To conclude

Whether your hypothesis is confirmed or not, you can be sure that you've learned something new about your customers, and that's valuable information.

What are your test tips for A/B testing with low traffic?

Do you have any other questions? We'd love to hear from you! You can email us at cro@sping.nl or just visit us in Delft. We always enjoy sharing experiences and ideas!