In brief, A/B testing is a test between two versions of a specific item that indicates which one the majority of a target audience prefers. Decisions are then made on the results to improve that item, whether it is a product, website, landing page or something else.
Let us explain.
Mostly used by creators, marketers, software engineers and designers, among others, A/B testing, also known as split testing, split-run testing or bucket testing, measures user engagement and shows what the audience prefers, clicks on and connects with. Companies then adapt or amend in line with the feedback obtained to increase user commitment, customer satisfaction and revenue.
A/B testing is far more reliable than going on one's gut feeling, which can be misleading.
How to Conduct A/B Testing
Say a software engineer at a company predicts that changing a specific feature on the website will attract more traffic, from which the company can benefit.
A second, near-identical page is created so that the audience is presented with two variations of that feature; the feedback from each is then compared and analysed.
It goes without saying that reviewing results before, during and after data collection helps in making the best decisions.
Different audiences will of course give different feedback, and what works for one company won't necessarily work for another.
So How Does A/B Testing Work?
Let's take a couple of examples.
Suppose an A/B test is requested to check the difference in a website's performance when the call-to-action (CTA) button is tweaked.
Specifically, the test should check whether site visitors respond more to the CTA placed at the top of the page near the main title rather than in the sidebar.
Method: create a second, otherwise identical web page featuring the CTA button in its new position.
Objective: to check whether the CTA performs better, ideally generating more leads and revenue.
The original page with the usual CTA placement is known as the 'control' (Version A of the test), while the second page with the new CTA placement is known as the 'challenger' (Version B).
Ideally, the share of site visitors sent to each page is the same, or close to it, for the test to work as it should.
After the A/B test, results are compared, which should show where the CTA gets more responses. The website owner then either leaves the CTA button where it was or moves it in line with audience feedback.
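The comparison step can be sketched in a few lines of Python. This is only an illustration, not a production analytics setup; the visit and click counts below are invented figures.

```python
# Minimal sketch: compare conversion rates of the control (Version A)
# and the challenger (Version B). All counts here are made up.
def conversion_rate(clicks: int, visitors: int) -> float:
    """Fraction of visitors who clicked the CTA."""
    return clicks / visitors

control = conversion_rate(clicks=48, visitors=1000)     # CTA in sidebar
challenger = conversion_rate(clicks=63, visitors=1000)  # CTA near the title

winner = "challenger" if challenger > control else "control"
print(f"control: {control:.1%}, challenger: {challenger:.1%} -> keep {winner}")
```

With these sample numbers, the challenger's 6.3% beats the control's 4.8%, so the new placement would be kept.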
Another example could be design-related. Rather than changing the CTA button's placement this time, a new colour is applied to it and a test is run to check whether the new colour attracts more clicks.
The 'control' page might carry a red CTA, say, while the 'challenger' page has a green one.
Again, the results determine which CTA colour is more attractive to visitors.
If A/B testing is carried out correctly, it can attract further traffic to the website, possibly generating more leads and, ideally, an increase in revenue.
Benefits of A/B Testing, and Possible Drawbacks
The main benefit for a marketing team or business is that testing is low in cost but high in reward.
It depends, though; let's see why.
As an example, say a content writer is paid €100 per article. One week, the writer decides to skip an article, 'burning' €100 by not writing one, and instead runs an A/B test on article formats for a specific landing page. The responses indicate what readers are after and prefer, so the writer produces the remaining articles in the favoured format, potentially reaching a larger audience. The writer is then also in a better position to ask for higher pay, so the A/B test ends up generating more income.
On the other hand, if the test fails or is not carried out properly, the writer has simply 'burnt' €100 for the time spent testing instead of writing and publishing.
Ideally, risks are calculated beforehand, which is why good planning is recommended prior to testing. Testing that is conducted incorrectly, or that fails to yield the required data, wastes time, resources and money.
However, even after several failed A/B tests, an eventual success will often outweigh the cost of conducting them.
Another downside of A/B testing is that its application is somewhat limited: it only suits scenarios with measurable outcomes.
Split Testing Goals
A/B testing can be implemented for various reasons and targets, among which:
- to increase website traffic – testing and changing CTAs, blog posts or titles can help drive more traffic to the page.
- to obtain a higher conversion rate – visitors who like the page fill in forms and provide their details more willingly, which can convert into leads.
- to lower the bounce rate (visitors leaving quickly after arriving) – retain them longer with engaging images and text.
- to lower cart abandonment – on shopping websites, visitors sometimes place items in their cart but then leave abruptly. Attractive page design, especially at checkout, product photos with enticing descriptions, and even clearly listed shipping costs all lessen the likelihood of cart abandonment.
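Two of the goals above, bounce rate and cart abandonment, are simple ratios, so they are easy to track before and after a test. A quick sketch with invented session counts:

```python
# Simple metric definitions; all counts below are invented examples.
def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Share of sessions that left after viewing only one page."""
    return single_page_sessions / total_sessions

def cart_abandonment_rate(carts_created: int, purchases_completed: int) -> float:
    """Share of shopping carts that never made it through checkout."""
    return 1 - purchases_completed / carts_created

print(f"bounce rate: {bounce_rate(420, 1000):.0%}")               # 42%
print(f"cart abandonment: {cart_abandonment_rate(200, 60):.0%}")  # 70%
```

Comparing these figures between the control and challenger pages shows whether a change actually moved the metric.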
A/B Testing Checklist
Before starting see that you:
- pick one variable to test – otherwise you can't be sure which adjustment made the difference: the CTA's new placement, its colour or the new title. Test one change at a time, in separate tests.
- identify your goal – predict which item will do better if changed. Plan, test and check against results.
- create a 'control' and a 'challenger' – as we saw, you need a near-identical page, in our case to display the new CTA placement or colour. Another example would be adding a testimonial to your landing page, which could also have a positive impact: the 'control' page goes without the testimonial, while the 'challenger' features it.
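Once a control and a challenger exist, traffic has to be split between them. One common approach, sketched below as an assumption rather than any particular tool's method, is to hash a stable visitor ID into a bucket, so the split is roughly 50/50 and a returning visitor always sees the same version. The visitor IDs here are hypothetical.

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into 'control' or 'challenger'.

    Hashing a stable ID (e.g. a cookie value) means the same visitor
    always lands in the same bucket on every visit.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).digest()
    return "control" if digest[0] % 2 == 0 else "challenger"

# Hypothetical visitor IDs; roughly half should land in each bucket.
for vid in ["user-1001", "user-1002", "user-1003"]:
    print(vid, "->", assign_variant(vid))
```

Deterministic bucketing avoids the confusing experience of a visitor seeing a red CTA on one visit and a green one on the next.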
Determining how significant the results are boils down to the type of A/B test being run. If the test involves sending cold emails, checking responses is easier than, say, modifying a website, which needs more time to accumulate visitors and reveal the variance.
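A common way to judge significance for click or conversion counts is a two-proportion z-test. The sketch below uses only the Python standard library and invented counts; it is a minimal illustration, not a full statistics workflow.

```python
import math

def two_proportion_z_test(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Invented counts: 48/1000 conversions on the control vs 63/1000
# on the challenger.
z, p = two_proportion_z_test(48, 1000, 63, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below 0.05 is the conventional threshold for calling the difference real rather than noise; with these sample counts the p-value sits above it, meaning more visitors would be needed before declaring a winner.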
Testing is Over
Today, companies like Microsoft and Google each conduct over 10,000 A/B tests annually.
A/B testing tools can be obtained from professionals, or alternative options exist, Google Analytics being one.
As a user experience research methodology, A/B testing yields statistics indicating what the audience wants to see. Acting on them, by revising websites, blogs or designs, brings better results, leads and deals, ideally translating into a marked profit margin.