Errors in A/B testing

Torresburriel Estudio
5 min read · Feb 1, 2023


A/B testing is a tool used in conversion rate optimization (CRO). It involves running an experiment in which two or more versions of the same page (or ad, or email, depending on what we are testing) are published to see which version generates a better conversion rate.

Photo by Myriam Jessier on Unsplash

After running the A/B test, a statistical analysis is performed to determine the effectiveness of each variation. This way we can see which version generates more clicks, subscriptions, sales, or whichever conversion action has been defined.
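To make that analysis concrete, the sketch below shows one common way to check whether the difference between two variations is statistically significant: a two-proportion z-test. The visitor and conversion numbers are made-up illustration values, not data from any real test.

```python
# Minimal sketch of a two-proportion z-test for an A/B test result.
# The visitor and conversion counts used below are made-up illustration values.
from math import sqrt, erfc

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate between A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se                                     # z statistic
    return erfc(abs(z) / sqrt(2))                            # two-sided p-value

# Example: version A converts 120 of 2,400 visitors, version B converts 150 of 2,400.
print(f"p-value = {ab_test_p_value(120, 2400, 150, 2400):.3f}")
# A p-value below the chosen threshold (commonly 0.05) is usually read as significant.
```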

Some examples of changes on a website that we can analyze with an A/B test are:

  • Content: We can analyze different copy, headlines, colors, typography, or images. Testing different web designs or templates can also be of interest, as can prices and promotions in eCommerce.
  • Calls to action: It is interesting to analyze different CTAs by modifying buttons, text, button size, typography, or colors.
  • Forms: Forms can be tedious to fill out, so we can test the number of fields, mandatory fields, or the data we request.
  • Web usability: We can also test the overall usability of the web by analyzing different landing pages, thank-you pages, menus, or the footer.

What to test in A/B tests

One of the keys to a successful A/B test is knowing what matters most to test at any given moment. In general, the most relevant elements are tested first, and other aspects such as design or CTA color are tested later.

The testing hierarchy is:

  • Who? This factor is the one that can generate the most differences in the results of an A/B test. Different audiences or segments are tested.
  • What? The next most important factor is to test two different value propositions or two different types of offer, such as a 2-for-1 deal or a 50% discount.
  • How? Finally, once the audiences and the value proposition are optimized, we can test the part related to the texts and design.

9 common mistakes in A/B tests

A/B tests are a very useful tool to obtain information that helps us improve conversions, but it is very important to carry them out correctly so that the information we collect is useful. These are some of the most common mistakes in A/B tests:

  • Not having a clear objective for the test. In general, one of the most important factors when we talk about UX is to have clear objectives so that we can act accordingly. In the case of A/B tests, it is also very important. If we know what aspect or variable we want to optimize, it will be easier to plan the test to meet that objective.
  • Conducting an A/B test without having a hypothesis. When we do A/B testing, we need a hypothesis about what may work better or worse to formulate a test. In the same way that we need to set a goal, we also need to formulate different hypotheses in order to validate them.
  • Not trusting the data. We sometimes say that designers become attached to their designs; analysts and researchers can likewise become attached to the hypotheses they pose. We have to be objective with the data: if we don't pay special attention to this, we can fall into confirmation bias and look only for data that validates our preconceived ideas.
  • Testing too many variables at once. A/B tests are only valid if we test a single variable at a time; if we change more than one variable and the A version achieves better results than B, we cannot tell which of the changes produced the improvement. In that case, we should run a multivariate test instead of an A/B test.
  • Comparing different periods of time. Linked to the previous error, if we are testing a specific variable such as a change in the checkout process, we need to make sure that it is carried out during the same period of time. Time is also a variable, so if it is not tested during the same period, we cannot be sure if the results are relevant.
  • Finishing the test too early or too late. A/B tests rely on statistical significance, so we need to make sure we have collected enough results to validate the outcome (a rough sample-size calculation is sketched after this list). On the other hand, we should not let the test run too long either: there comes a point when additional data no longer changes the conclusion, and we could be dedicating that time to more relevant A/B tests.
  • Relying on others’ tests. Just as it is risky to imitate other UX designs, because each project has its own requirements, we also cannot rely on tests run on other digital products. For example, a CTA button color that works for one product will not necessarily work for another.
  • Changing a running test. If we change a test while it is running, it loses its statistical validity and the results could be completely invalidated. The solution is to stop the current test and launch a new one, so that the results remain valid.
  • Conducting many tests simultaneously. If we try to analyze many changes at once, it will be difficult to interpret the results. When we do A/B tests, it is advisable to follow the KISS principle and study changes one at a time, without splitting the audience across multiple tests.
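Related to finishing a test too early or too late, the sketch below estimates how many visitors each variation roughly needs before the result can be considered reliable, using the standard sample-size formula for comparing two proportions. The baseline rate, expected uplift, confidence level, and power are assumptions chosen only for illustration.

```python
# Minimal sketch: rough sample size per variation before an A/B test can be stopped,
# based on the standard two-proportion sample-size formula.
from math import ceil

def sample_size_per_variant(p_base: float, p_target: float,
                            z_alpha: float = 1.96,       # 95% confidence, two-sided
                            z_beta: float = 0.84) -> int:  # 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = (p_target - p_base) ** 2                    # minimum difference to detect
    return ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Example (illustrative): baseline conversion of 5%, hoping to detect an uplift to 6%.
print(sample_size_per_variant(0.05, 0.06))  # roughly 8,000+ visitors per variation
```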

In conclusion, A/B tests are a very useful tool for increasing conversions in digital products, and they can greatly improve profitability. However, we need to avoid the mistakes above, which can undermine their effectiveness. A/B tests help us understand what works better, but we still need other usability tests to understand the underlying reasons.

If you are interested in learning UX, don’t miss our UX Learn trainings (in Spanish):

Programas de Especialización de UX Learn

