A/B testing keeps the user at the centre of design

A/B testing is a reliable way to obtain useful, comprehensive data about how a web service performs and how its visitors behave. It is an excellent tool for deciding between competing design choices and enables continuous, data-driven development of the service.


What is A/B testing?

A/B testing refers to a situation where two or more solutions are tested to determine which one leads best to the desired outcome. In practice, this means that multiple versions of a web service or page exist simultaneously and are shown to different users who access the service.

The prerequisite for successful A/B testing is that the site has enough traffic to provide a sufficient sample size. Clear objectives and metrics must also be set for the test. It is essential that what is being tested genuinely matters to how the web service works. Trivial and cosmetic changes can certainly be A/B tested, but the benefits achieved may be minor.
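
As a rough illustration of what "enough traffic" means, the sketch below estimates how many visitors each variant would need before a test can reliably detect a modest improvement. The baseline and target conversion rates in the example are illustrative assumptions, not figures from this article.

    from statistics import NormalDist

    def sample_size_per_variant(p_baseline, p_target, alpha=0.05, power=0.80):
        """Approximate visitors needed per variant for a two-proportion z-test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for a 5% significance level
        z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
        variance = p_baseline * (1 - p_baseline) + p_target * (1 - p_target)
        return (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_target) ** 2

    # Detecting a lift from a 10% to a 12% conversion rate needs roughly 3,800 visitors per variant.
    print(round(sample_size_per_variant(0.10, 0.12)))

Low-traffic sites can still run A/B tests, but collecting a large enough sample simply takes longer.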

“A/B testing is sometimes thought to be tinkering with whether a blue or red button works better. Truly useful A/B testing focuses on the language used, the spatial arrangement and positioning of interface elements, and the timely presentation of information,” contemplates Crasman's Design Director Teemu Korpilahti.

The typical stages in implementing an A/B test are as follows (a simplified sketch of stages 3–5 follows the list):

  1. Identify the problem

  2. Form a hypothesis for a solution

  3. Design and implement the alternatives

  4. Wait, i.e., collect data

  5. Choose the winner
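
A minimal sketch of stages 3–5, assuming a simple setup where visitors are split between two variants and the variant with the higher conversion rate wins. The variant names and numbers are hypothetical, not Crasman's actual implementation.

    import hashlib

    VARIANTS = ["variant_a", "variant_b"]  # hypothetical alternatives from stage 3

    def assign_variant(visitor_id: str) -> str:
        """Hash the visitor id so the same visitor always sees the same variant."""
        bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % len(VARIANTS)
        return VARIANTS[bucket]

    def pick_winner(results: dict) -> str:
        """results maps variant -> (conversions, visitors); stage 5 picks the higher rate."""
        return max(results, key=lambda name: results[name][0] / results[name][1])

    # Stage 4 produces data like this (made-up numbers); stage 5 picks the winner.
    results = {"variant_a": (108, 1000), "variant_b": (164, 1000)}
    print(assign_variant("visitor-42"), pick_winner(results))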

A/B tests are almost invariably unique cases, and a test developed for one company cannot necessarily be replicated elsewhere as such. In consumer e-commerce, however, there are some common areas of the user experience where tests are often conducted.

“A common testing target is whether to include a separate login step in the purchase funnel: does it impact sales positively or negatively, and does it result in a larger proportion of users making purchases while logged in? In mobile e-commerce, we test different ways of shortening views and simplifying forms with solutions like one-click checkout,” explains Teemu Korpilahti.

Case Intersport: Availability testing as an A/B test target

Intersport wanted to determine how their online store's click and collect functionality could attract more users. The starting hypothesis was that the call-to-action phrase “Reserve and collect” was not clear from the consumer's perspective.

Crasman conducted an A/B test in which the alternative to the “Reserve and collect” button was the more neutral “Shop availability”.


The results spoke for themselves: “Shop availability” worked considerably better, with a 16.4% conversion rate for opening the availability feature during the test period, compared to 10.8% for “Reserve and collect”. The test and the changes subsequently made to the service led to a direct increase in the number of actual in-store reservations.
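
To give a sense of how such a result is evaluated, the sketch below runs a two-proportion z-test on conversion rates like the ones above. The visitor counts are hypothetical, since the article does not report the sample sizes.

    from statistics import NormalDist

    def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_a - p_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # With e.g. 2,000 visitors per variant, 328 vs 216 conversions (~16.4% vs ~10.8%):
    print(two_proportion_p_value(328, 2000, 216, 2000))  # far below 0.05, i.e. significant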

“This particular Intersport A/B test is an excellent example of how a really simple change can produce significant results in the best-case scenario. If the text on the button had just been blindly changed, we wouldn’t have obtained as reliable results from the trial,” considers Korpilahti.

“In the continuous development of web services, more emphasis should generally be placed on data rather than solely on assumptions about how users might use the service. A/B testing is an easy way to ensure that the changes made actually improve the service rather than the opposite. Sometimes the test result is that the new version is worse than the old one, but that would not have been discovered without testing,” says Intersport Finland's Digital Marketing Manager Aki Kaipainen.

User data is the designer's safety line

Crasman systematically uses A/B testing as a tool for continuous development of web services and aims to design significant site changes with the help of A/B tests.

The client commissioning a web service and its designer easily slip into a mindset that only considers the service's power users, ignoring the vast number of users who need simplicity or hand-holding from the interface. In addition to the human factor, varying devices and connection speeds pose their own challenges for design. A change that improves the desktop experience may be negative for mobile users.

In such a situation, it is crucial to find tools that enable the service to speak the users’ language and meet their needs.

“Data from user tests is a safety line for the designer, allowing them to stay on the path of user-centricity even if there are radically different solutions and functionalities under design,” describes Teemu Korpilahti.

Learn more about Intersport's customer story.

 

Crasman Ltd

10 Jan 2019