5 Mobile AB Testing Tips to Increase Conversion

June 10, 2015

Britt Buckingham

As a marketer, you know that designing a marketing campaign takes careful planning, and that small changes to your campaign can result in big changes to your conversion rates. However, it can be difficult to know whether those changes are affecting your results, for better or worse. This is where AB testing becomes invaluable to your mobile marketing strategy.

The goal of AB testing is to take the guesswork out of your marketing efforts. By relying on data-driven results, you reduce the risk of continuing to serve poorly performing campaigns to your users.

We’ve already walked you through the basic steps to running effective AB tests, so now let’s dig a little deeper into your mobile marketing strategy and share some best practices for each of those steps.

Here are 5 Mobile AB Testing Tips to Increase Conversion:

    1. Start with a clear goal, then measure your AB tests against the key metrics your business is working to achieve. It is important to know what drives your key business metrics and to target the users who directly affect those metrics in your AB tests. There are countless stories of companies that have implemented dozens of AB tests, only to find that their key metrics have not budged. To see a difference in your metrics, you need to design your campaigns based on an analysis of the attributes of the users who affect your KPIs, for better or worse.

      As an example, suppose the metric you would like to improve is user retention. You have analyzed your user experience and determined that users who invite at least one friend to join the app are much more engaged with your app, and as a result they are retained longer. The next step would be to tailor your campaign to drive friend invites, but you should ultimately choose the variant that leads to the highest retention. Bottom line: design your campaigns based on an analysis of your users, and measure your tests against the specific metrics you want to move.
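      For illustration, here is a minimal, hypothetical sketch of that decision: the data, numbers, and names are invented, and the code is not tied to any particular AB testing tool. It picks the winner by the retention KPI rather than the proximal invite metric.

          # Hypothetical results: choose the winner by the KPI you actually
          # care about (day-30 retention), not the proximal metric (invites).
          variants = {
              "A": {"users": 5000, "invites_sent": 900, "retained_day_30": 1150},
              "B": {"users": 5000, "invites_sent": 1300, "retained_day_30": 1050},
          }

          for name, v in variants.items():
              invite_rate = v["invites_sent"] / v["users"]
              retention = v["retained_day_30"] / v["users"]
              print(f"Variant {name}: invites {invite_rate:.1%}, retention {retention:.1%}")

          # Variant B drives more invites, but A retains more users;
          # measured against the retention KPI, A is the winner.
          winner = max(variants,
                       key=lambda k: variants[k]["retained_day_30"] / variants[k]["users"])
          print(f"Winner by retention: {winner}")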

    2. Choose test variants wisely and test impactful elements first. After you have a clear idea of what you are testing and why, choose the elements you test in each variant with careful consideration. Although most modern AB testing solutions place no technical limit on how many variants you can test, it is a waste of time and resources to overdo it. Smaller, sequential AB tests may seem like they have less power, but they make data collection and evaluation much more straightforward, typically leading to fewer mistakes.

      You will want to be sure to test the elements of your campaign that will make the largest difference in conversions. While every little design element can affect your conversions, it is better to start by testing the larger elements that are most likely to influence the user's decision process. For example, instead of first running tests on every button's color, shape, and size, consider elements like the wording of a call-to-action or the size of the discount in a virtual goods promotion.

    3. Build segments of users, then target your campaigns and tests at each segment. Identify the attributes that are important to achieving your overall goals. Language and location are obvious attributes that can affect user behavior, but also consider segments based on attributes like new vs. veteran users, paying vs. non-paying users, or first/last session date. Be careful, however, not to segment so finely that each segment becomes too small to yield meaningful results.
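      Here is a small, hypothetical sketch of attribute-based segmentation. The attribute names, thresholds, and sample users are assumptions for illustration, not from any specific analytics SDK.

          from datetime import date, timedelta

          # Invented sample users; real attributes would come from your analytics.
          users = [
              {"id": 1, "language": "en", "country": "US",
               "first_session": date(2015, 6, 1), "total_spend": 0.0},
              {"id": 2, "language": "de", "country": "DE",
               "first_session": date(2015, 1, 15), "total_spend": 9.99},
          ]

          def is_new_user(u, days=7):
              return (date.today() - u["first_session"]) <= timedelta(days=days)

          # Segment: new, non-paying, English-speaking users.
          segment = [u for u in users
                     if is_new_user(u) and u["total_spend"] == 0 and u["language"] == "en"]

          # Before targeting a test at this segment, confirm it is large enough
          # to reach significance; over-segmenting shrinks your sample.
          print(f"Segment size: {len(segment)}")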

    4. Run your test for its full duration in order to draw accurate conclusions from your data. It can be tempting to set up an AB test and monitor its performance so closely that you start drawing conclusions from incomplete data. Drawing conclusions too soon in the testing process can cause you to end the test early. This usually happens when you see an early "clear winner" or decide on intuition that the test has collected enough data.

      Evaluating the variants' performance and choosing a winner before the planned test is complete drastically increases the chance of choosing the wrong variant. Not allowing your AB test to run for its entire pre-chosen duration can, and will, lead you to incorrect and incomplete conclusions.
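      One way to pre-choose that duration is a back-of-the-envelope sample-size estimate using the standard two-proportion formula. In the sketch below, the baseline rate, target lift, and traffic figures are assumptions for illustration.

          import math

          def sample_size_per_variant(baseline, mde, z_alpha=1.96, z_beta=0.84):
              """Users needed per variant to detect an absolute lift of `mde`
              over a `baseline` conversion rate (95% confidence, 80% power)."""
              p1, p2 = baseline, baseline + mde
              p_bar = (p1 + p2) / 2
              n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                    + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
              return math.ceil(n)

          n = sample_size_per_variant(baseline=0.05, mde=0.01)   # detect 5% -> 6%
          daily_users_per_variant = 500                          # assumed traffic
          print(f"{n} users per variant, about "
                f"{math.ceil(n / daily_users_per_variant)} days")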

    5. Give your users a consistent experience. While running AB tests, be sure you aren't running them at the expense of your users' experience. This advice is most common when testing user-experience elements, where exposing users to multiple variants can be jarring, but it is just as applicable to advertising and promotion campaigns. Consistently showing users the same campaigns keeps the data you collect clean: when users react to a variant, they are really reacting to the variant shown, rather than one they might have seen five minutes before. This will provide you with a clean data set of impressions and conversions to analyze and draw the right conclusions from.
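      A common way to achieve this consistency is deterministic ("sticky") variant assignment: hashing a stable user ID means the same user always sees the same variant. A minimal sketch follows; the function and experiment names are illustrative, not from any particular SDK.

          import hashlib

          def assign_variant(user_id: str, experiment: str, variants=("A", "B")):
              """Deterministically map a user to a variant for an experiment."""
              key = f"{experiment}:{user_id}".encode("utf-8")
              bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
              return variants[bucket]

          # The same user gets the same variant on every session:
          print(assign_variant("user-42", "summer_promo"))
          print(assign_variant("user-42", "summer_promo"))  # identical result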

http://resources.upsight.com/upsight-ab-testing-whitepaper
