Look for Significant Results in Data Driven Marketing
It’s important to look for significant results in data driven marketing. For example, every election year we are inundated with poll results. It seems like every day there is a new poll out. Each poll is followed by a debate about how to interpret the results. Part of this debate is spin doctoring. But part of it is rooted in statistics.
The results of each poll are accompanied by an estimate of the margin of error associated with that poll. Essentially, this margin of error measures how significant the results really are. Fifty-one percent of respondents might say they will vote for a particular candidate. But this doesn’t really mean much if the margin of error is 3 percent.
Such a result is not statistically significant. What these results are actually saying is that somewhere between 48 and 54 percent of respondents will vote for that candidate. Not conclusive.
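The poll math above can be sketched with the standard formula for a proportion's margin of error. The sample size of 1,000 respondents is a hypothetical, not from any real poll:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p measured from a
    random sample of n, at roughly 95 percent confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents showing 51 percent support:
moe = margin_of_error(0.51, 1000)
print(f"{moe:.3f}")  # about 0.031, i.e. roughly 3 percentage points
```

Notice that the margin depends on the sample size, not on the size of the overall population, which is exactly the point made above about political polls.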
Be confident in your data driven marketing measurements
The error margins that are reported along with political polls are due to the fact that the polls are based on random samples. There’s certainly room to question the way these polls define an eligible respondent. But the error margins are related only to the size of those samples.
These samples are quite small compared with the overall population. But large or small, whenever you estimate based on a random sample, you introduce the possibility of errors.
If you flip a fair coin ten times in a row, you don’t expect it to come up heads all ten times. This almost never happens. The key phrase here is almost never. If you flip a coin enough times, it’s eventually going to come up heads ten times in a row. That’s just the nature of random variation.
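The odds behind "almost never" are easy to check:

```python
# Probability that a fair coin comes up heads ten times in a row:
p_ten_heads = 0.5 ** 10
print(p_ten_heads)  # 0.0009765625, about 1 in 1,024
```

Rare, but not impossible: run enough ten-flip trials and a streak of ten heads will eventually show up.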
What does this have to do with marketing, you ask? When you run a campaign, you randomly hold out a control group. This allows you to measure how many campaign responses were directly due to your communication. You compare your response rate to the number of responses in the control group.
Because the control group was selected randomly, it is possible that by pure chance it isn’t really representative of the overall audience. Luckily, you can, or rather your geek can, calculate exactly how likely it is for this to happen.
That calculation results in a confidence level for your response results. This is a measure of how unlikely it is that your results are due purely to chance. In the coin flip example, it reflects how rarely you would expect to get ten heads in a row. Results that have sufficiently high confidence levels are considered statistically significant.
In the worlds of statistics and science, results need to have a confidence level in excess of 95 percent to be considered significant. This means that there is only a 5 percent chance, or 1 in 20, that the results are due purely to chance.
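Here is a minimal sketch of how your geek might run that calculation, using a one-sided two-proportion z-test to compare the mailed group against the control group. All the campaign numbers below are hypothetical:

```python
import math

def confidence_level(resp_test, n_test, resp_ctrl, n_ctrl):
    """One-sided two-proportion z-test: how confident can we be that
    the mailed group's response rate truly exceeds the control group's?"""
    p1, p2 = resp_test / n_test, resp_ctrl / n_ctrl
    p_pool = (resp_test + resp_ctrl) / (n_test + n_ctrl)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_ctrl))
    z = (p1 - p2) / se
    # Standard normal CDF, computed via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical campaign: 50,000 mailed with 2,750 responses (5.5%),
# against a 5,000-person control group with 250 responses (5.0%):
print(f"{confidence_level(2750, 50000, 250, 5000):.3f}")  # roughly 0.93
```

A result around 93 percent would clear a 90 percent marketing threshold but fall short of the 95 percent scientific standard.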
But because you're doing marketing, not medical research, it's reasonable for you to treat 90 percent confidence as significant. Anything lower than 90 percent, though, should be treated as inconclusive.
Paying attention to confidence levels keeps you focused on what actually is working. It also makes your financial calculations extremely credible. You can literally say with 95 percent confidence that your campaign made money for your company.
How to size your control group in data driven marketing
Getting significant results is not a crap shoot. You can stack the deck in your favor from the beginning. The size of your control group is, in a sense, the determining factor in whether you can report high confidence in your response rates.
Essentially, larger control groups lead to higher confidence levels.
There is a trade-off here, though. Control groups also represent lost opportunities. Sometimes control groups need to be quite large. To the extent that your campaign is successful, not mailing to the control group costs you responses.
Your geek can help you to determine the appropriate number of customers to hold out in the control group. You will need to provide two pieces of information:
The response rate you expect.
How much you think your campaign will increase responses.
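As a rough illustration of how those two inputs drive the size of the control group, here is a back-of-the-envelope power calculation. The 5 percent base response rate and one-point lift are made-up inputs, and your geek's actual method may differ:

```python
import math

def control_group_size(p_base, lift, z_alpha=1.28, z_beta=0.84):
    """Rough sample size per group needed to detect an expected lift
    over a base response rate, at about 90 percent confidence
    (z_alpha = 1.28, one-sided) with about 80 percent power
    (z_beta = 0.84). A sketch, not a substitute for your analyst."""
    p_test = p_base + lift
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# Expect a 5 percent base response rate and a one-point lift:
print(control_group_size(0.05, 0.01))  # a few thousand customers
```

Note how the required size explodes as the expected lift shrinks: halving the lift roughly quadruples the control group, which is why small expected effects make significant results so expensive to measure.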
Clearly, both these estimates are guesses on your part. Campaign history is a good place to get a sense of what to expect. Experience, whether yours or that of someone who has executed campaigns in your industry, is really your only guide to estimating campaign response rates before the fact. Over the years, campaign response rates have ranged from a fraction of a percent to above 50 percent.
If you’re reasonably close in your estimates, a control group can be sized that will greatly increase your chances of seeing significant results.
Developing measurement plans for your campaigns is a core part of your job as a database marketer. Good measurement leads to learning, which in turn leads to better results.