Small and simple
A/B tests should be small. The smaller, the better. One-dimensional is perfect.
More tests instead of complicated tests
Do more simple, quick tests rather than increasing the complexity of a single test. One piece of knowledge at a time, but often.
One metric to define success
The judgment of an A/B test's success should be as simple as the test itself. It's best to have one metric that determines which variant has won.
You can test just about anything
Just about any element of your website or app can be A/B tested, including text (product offering, CTA labels, page headers, and titles), forms, images, videos, emails, and even full web pages.
Stages of an A/B test:
Defining the objective
What do you want to learn from the A/B test? What question do you want to answer? The objective should be clear, simple, and one-dimensional. For example: which converts better (by number of clicks), a green checkout button or a red checkout button?
Qualification of users
You need to ensure equal distribution of traffic to each test variant. This means that if you want to run an A/B test on 1,000 users, you need to send 500 to one variant and 500 to the other. You'll also need to come up with the number of participants that must be qualified for the test in order to collect statistically confident data. To do this, you can reach for statistics theory or use one of the many available sample size calculators. Finally, traffic should be distributed randomly among the test variants.
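The random, even split described above can be sketched in Python. The helper name `assign_variant` and the hash-based bucketing are illustrative assumptions, not part of the original text:

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a test variant.

    Hashing the user ID gives a stable, effectively random split:
    the same user always lands in the same variant, and traffic
    is distributed roughly evenly across the variants.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the assignment is a pure function of the user ID, a returning user always sees the same variant without you having to store the earlier assignment anywhere.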
Running the test
Usually, A/B tests are either time based or traffic based. A time-based test runs for a specific period of time and then stops, regardless of the number of participants qualified to each test version. These kinds of tests are good for checking specific time-bound scenarios, like: "if we make the "Buy" button more visible on the homepage during lunch time, will it increase the number of orders?"
Traffic-based A/B tests are the most common. In this case, you run the test until you collect results from all planned participants, in other words, until you reach your sample size. You can estimate the needed sample size using the calculators listed below.
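As a rough alternative to the calculators, here is a minimal sketch of the normal-approximation formula for comparing two proportions, which is the kind of math such calculators typically implement. The function name and the default significance/power levels are assumptions for illustration:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per variant to detect a change
    in conversion rate from p1 to p2 (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)
```

For example, detecting a lift from a 10% to a 12% conversion rate needs a few thousand participants per variant; note how smaller expected differences drive the sample size up quickly, since the difference appears squared in the denominator.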
Analyzing the results
After the A/B test finishes, you'll need to analyze the results and prepare a test report. It may include the following key elements:
- Options that were tested
- Sample size
- Test results per version
- Summary - overall test outcome corresponding to the objective
- Suggestions or a list of actions that should/will be taken after the test
When you read about A/B tests, you'll often spot phrases like "statistical confidence", "statistical significance", or "confidence interval". While the scientific (statistical) explanation is quite tricky, here is the simplest one I've heard: "Significance is a statistical term that tells how sure you are that a difference or relationship exists." Your A/B test results should be statistically significant. You can either calculate the results yourself or use one of the many available calculators. Last, you need to describe the actions that you will take as a result of the A/B test.
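To make "statistically significant" concrete, here is a minimal sketch of a pooled two-proportion z-test, one common way these calculators compare two conversion rates. The function name is an illustrative assumption:

```python
from statistics import NormalDist

def ab_p_value(conversions_a: int, visitors_a: int,
               conversions_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates,
    using a pooled two-proportion z-test."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pool the two samples to estimate the shared conversion rate
    # under the null hypothesis that the variants perform the same.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

A p-value below your chosen threshold (conventionally 0.05) means the observed difference is unlikely to be due to chance alone. For example, 100/1000 conversions for variant A versus 130/1000 for variant B yields p ≈ 0.035, which would count as significant at the 0.05 level, while 100/1000 versus 105/1000 would not.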
Sample size calculator
- A/B Test Sample Size Calculator by Optimizely
- Sample Size Calculator by Evan's Awesome A/B Tools
- Sample Size Calculator by Cardinal Path
Statistical significance calculators
- A/B Split Test Significance Calculator by Wingify
- AB Test Calculator by HubSpot
- Split Test Calculator & Decision Tool by User Effect