5 Common Referral Marketing Mistakes (and how to avoid them)
Marketing is all about testing, testing and then testing some more to really optimise results. And referral marketing should be no different. Without A/B testing, retailers might be lucky to see referral drive 3-5% of new customer acquisition; ongoing experiments can increase this to as much as 30%. Here we look at 5 common mistakes that will hold your campaign back.
Mistake 1: Not giving your experiment enough time
How long should a referral experiment take? This is the product of a few factors. First, keep in mind your traffic and order volume: the more orders you process, the more quickly your A/B tests will reach statistical significance.
The length of your purchase cycle matters too. A short purchase cycle and a high likelihood of impulse purchases mean newly referred customers shop at your site relatively quickly, and the friends who referred them soon buy again too. That lets you understand the impact of refer-a-friend across the full extent of your funnel more quickly.
For many businesses, the combination of traffic and order volume with their purchase cycle dynamics can mean letting some tests run for a month or more. This might come as a surprise to internet marketers who are accustomed to running simple, short-duration A/B tests on single-page traffic that reach significance in as little as a few days.
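To get a feel for why tests can take this long, here's a rough sketch of the standard two-proportion sample-size calculation and what it implies for duration. All the figures below are illustrative assumptions, not Mention Me benchmarks:

```python
import math

def required_sample_size(base_rate, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate per-variant sample size for a two-proportion z-test
    (defaults: two-sided 5% significance, 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical figures: a 2% referral conversion rate, hoping to detect
# a 25% relative lift, with each variant seeing 500 orders per week.
n = required_sample_size(0.02, 0.25)      # orders needed per variant
weeks = n / 500                           # implied test duration
```

With low base rates and modest lifts, the required sample per variant quickly runs into the tens of thousands of orders, which is why even healthy stores may need to let a referral test run for many weeks.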
Mistake 2: Stopping after the first test
This is a mistake we hear time and time again: “We tried an offer but it didn’t work, so referral isn’t appealing to our customers”. In our experience, most clients who launch a referral campaign do not start with the right offer. “For no client in Mention Me history has the first referral offer been the best offer. In fact it is quite the opposite,” says Simon D., our Head of Client Success. “All clients have drawn significant benefits from a programme of testing.”
Most clients find the best results come from a programme of testing. Often the second test yields a huge jump in performance - and on average our clients see a four- or five-fold increase in performance by month 6, having run between 2 and 8 tests during that period. So, don’t give up at the first hurdle!
Mistake 3: Holding back from testing discounts
We find that many retailers (especially at the luxury end of the spectrum) feel very strongly about avoiding discounts. If this is true of your brand, you are in very good company.
Deciding not to test discounts doesn’t mean no testing - in our guide to experiments we’ve detailed 4 A/B tests you can try which focus solely on messaging and positioning. There are also a variety of non-discount incentives you can test (e.g. free gifts or delivery). But testing is testing, and it’s worth being open-minded about what might be most effective.
The nice thing about A/B testing with Mention Me is that you can dip your toe in the water and test a small discount with just a subset of your customer base. A/B testing by cohort means exposure to the discount is limited to an exclusive group, so you can make the decision to discount or not backed by real data.
It’s also worth pointing out that whilst over-discounting can be very dangerous, there is a big difference between sending customers discounts and offering customers incentives for helping you spread the word. We’ve written about that previously in our blog post “to discount or not to discount”.
Mistake 4: Jumping right in and testing a deep incentive
This mistake is the opposite of the one above. Some marketers, who are keen to give their new referral programme its best possible start in life, jump right in and over-incentivise their referrers or referees. The result varies.
Sometimes there is a great referrer response and the incentive budget runs out quickly, but a few weeks later the finance team challenges the value and incrementality of the campaign - making it hard to justify turning a test campaign into a permanent channel.
Another result of over-incentivising can be poor quality referees. This impact can turn up later when it becomes clear that the newly acquired customers don’t return after their deeply discounted first purchase.
Neither problem is the end of the road for your referral channel. Stick with it and test a range of incentives and offers at different levels. You will eventually find the right level for your brand - the point where attracting new customers balances against a cost per acquisition that makes sense.
Mistake 5: Making decisions with only partial data
Like all online processes, refer-a-friend can be visualised as a funnel. It's complicated because the full funnel spans your original customer, their referral behaviour, their friend, their friend’s purchase and finally back to your customer’s redemption of their referrer offer (yikes!).
This process starts in your shopping cart but can take you far and wide: email, mobile, social media, even offline word of mouth down at the pub can be stopping-off points on the journey! Why is this important? Because to make a good test decision you need to consider the impact at every step of the funnel, not just the step your change targets.
There are some tests which impact the sharing rate negatively (the process where your customer shares the offer with a friend - at the top of the funnel) but the positive impact further along the process at the referee stage more than counter-balances. Quite a few of the experiments we’ve outlined in our guide followed this pattern. It makes interesting reading.
Test, test and test again
Launching your refer-a-friend campaign is only the start. By avoiding these 5 mistakes and creating your own series of experiments, you can see a four- or five-fold improvement in the performance of your referral programme over time. It takes a little time and effort for the channel to reach its full potential, but getting referral working well should yield a channel that boosts acquisition by 10% to 30%.
A/B testing doesn’t have to be complicated, as long as you keep the experience consistent for each test cohort. Testing by cohort is critical to unlocking referral. Look for a referral platform that enables this, so you can be sure that customers who have been shown an offer can continue to share that offer even when you want to test something else.
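As a rough illustration of how that consistency can be kept (a hypothetical sketch, not how any particular platform implements it), a deterministic hash of the customer and experiment keeps each customer locked to one variant, no matter how often they come back:

```python
import hashlib

def cohort_for(customer_id, experiment, cohorts):
    """Deterministically assign a customer to a cohort: the same
    customer_id and experiment always map to the same variant."""
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    return cohorts[int(digest, 16) % len(cohorts)]

# A returning customer always sees the offer they were first shown.
offer = cohort_for("customer-42", "incentive-test-1",
                   ["free delivery", "10% off"])
```

Because the assignment is derived from stable identifiers rather than stored at random per visit, a customer who shared “free delivery” with a friend will still see “free delivery” on their next visit, even while the experiment runs.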
So, what should you look for when setting up a referral programme to make sure you’ll be able to test and optimise?
- A/B Testing by Cohort – test different offers against each other whilst allowing customers to receive a consistent experience.
- Segmentation – serve different offers to different segments.
- Easy Configuration of Offers – change and test variables (e.g. incentives, share options) without development support.
- Detailed Reporting – data at each step of the referral funnel.
- Content Management System – amend any customer-facing text.
- Easily Editable Designs – the design of the referral programme should fit your brand and be easily editable.
This article was previously published on the Mention Me Referral Blog.