There’s a lot of bad data out there. And most of it wasn’t produced by some mad marketer cooking up scandalous numbers in a dingy computer lab. The reality is that even the best marketers are susceptible to bad data practices. Here is a list of common shoddy data practices, and how good marketers can avoid them.
Don’t Mistake Profit for Revenue
It seems obvious, but confusing revenue with profit can drastically skew your data. Revenue and profit are two very different things. Revenue is the total flow of monetary funds your campaign brings home. However, that doesn’t mean you’ll be pocketing all of it: some of that money goes back into your company to pay the campaign’s expenses. Profit is what’s left over after those costs. That’s the figure you need to plug into your math to get a proper ROI. When you treat revenue as if it were profit, your campaign looks deceivingly more profitable than it is. We don’t have to spell out what happens to your bottom line when you keep running what could be a faulty campaign disguised as a winner.
Quick Fix: ROI = (Revenue Earned / Campaign Cost) – 1. This is equivalent to profit divided by campaign cost, so the cost is already accounted for.
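The arithmetic is simple enough to sanity-check in a few lines. Here’s a minimal sketch in Python, with made-up campaign numbers, showing how the same revenue figure paints very different pictures depending on whether cost is subtracted out:

```python
def roi(revenue, cost):
    """Return on investment: (revenue / cost) - 1, i.e. profit per dollar spent."""
    return revenue / cost - 1

# Hypothetical campaign: $12,000 in revenue against $8,000 in spend.
revenue, cost = 12_000, 8_000

profit = revenue - cost      # what's actually left over: $4,000
actual = roi(revenue, cost)  # (12000 / 8000) - 1 = 0.5, a 50% return
naive = revenue / cost       # 1.5 -- treating revenue as if it were profit overstates the return

print(f"profit: ${profit:,}, ROI: {actual:.0%}")
```

Starting from raw revenue instead of the cost-adjusted figure would report a 150% "return" on a campaign that actually earned 50%.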
Look at Metrics Holistically
Let’s say you’re running a nurture stream and you want to find out which emails are your lowest performers. You know your open rates tend to relate to subject lines, as opposed to click-through rates that relate to the body of the email. You strive to deliver relevant content to your leads (bravo!), so you choose to look at which emails are garnering the most unsubscribes. Perhaps you find two emails with unusually high unsubscribe rates. You don’t want your emails getting sucked into a spam trap, so you decide to remove these emails from your nurture stream.
The problem? Only four people actually opened those emails. So even with a high unsubscribe rate, you don’t have enough data to draw any kind of final conclusion about the relevancy of your emails.
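One way to guard against acting on thin data is to require a minimum sample size before an unsubscribe rate counts at all. A sketch with hypothetical thresholds (the 100-open minimum and 2% rate below are invented for illustration, not industry standards):

```python
# Ignore an email's unsubscribe rate until enough people have actually opened it.
MIN_OPENS = 100  # assumed threshold; pick one that fits your send volumes

def flag_for_review(opens, unsubscribes, max_unsub_rate=0.02):
    """Flag an email only when the unsubscribe rate is high AND the sample is big enough."""
    if opens < MIN_OPENS:
        return False  # too little data to conclude anything
    return unsubscribes / opens > max_unsub_rate

print(flag_for_review(opens=4, unsubscribes=2))     # a 50% rate, but only 4 opens
print(flag_for_review(opens=500, unsubscribes=15))  # a 3% rate on a real sample
```

The first email never gets flagged, despite its scary-looking rate, because four opens can’t support any conclusion.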
Quick Fix: Make sure you’re looking into multiple aspects of performance, not just the first pretty metric you see. You also need to make sure your sample size is large enough to make your data worth recording. If you’ve got a good model set up, a very small sample may offer a hint about performance down the road, but it is in no way a figure to broadcast. With that said, avoid…
Don’t Read Data Too Soon
Let your campaigns run long enough to procure some real data. It’s tempting to stick a fork in a campaign too early, especially if it’s new and you’re eager for results. While enthusiasm is great, be sure to let your campaign simmer long enough to gain some real numbers. Pulling data too soon could mean that, as aforementioned, not enough people have interacted with your campaign. Or perhaps you pull metrics before an important date or event within your industry that would have greatly affected your results. Pulling metrics too soon can paint a deceiving picture of how your marketing is doing (both positively and negatively).
Quick Fix: Let campaigns run their course. Know ahead of launching when you’ll want to look at data as an indicator of performance.
Avoid Skewed Timelines
Let’s say that at the beginning of May, you started a new initiative that invites people to fill out a form on your website in exchange for a $5 Starbucks gift card. At the end of May, you noticed that you had 235% more MQLs for the month than you had in April. Conclusion? The initiative worked!
But wait just a minute. Unfortunately, you’ve forgotten to track the sources of engagement for those leads. As it turns out, May is also your biggest month of the year for MQLs because of the number of industry events you attend that drive qualified leads.
Quick Fix: Remember to compare your timelines on a large scale. Comparing this month to last month won’t cut it; you’ve got to compare one year (and sometimes more) to another. With that said, you’ve also got to have a good data-tracking system in place in order to make educated decisions about campaign performance comparisons.
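The month-over-month versus year-over-year comparison can be sketched in a few lines. All MQL counts below are invented for illustration:

```python
# Hypothetical MQL counts by (year, month); every number is made up.
mqls = {
    ("2023", "Apr"): 120, ("2023", "May"): 390,
    ("2024", "Apr"): 130, ("2024", "May"): 402,
}

def pct_change(new, old):
    """Fractional change from old to new."""
    return (new - old) / old

# Month-over-month looks like a huge win...
mom = pct_change(mqls[("2024", "May")], mqls[("2024", "Apr")])
# ...but year-over-year shows May is always a big month, and the real lift is modest.
yoy = pct_change(mqls[("2024", "May")], mqls[("2023", "May")])

print(f"May vs. Apr 2024: {mom:+.0%}")       # +209%
print(f"May 2024 vs. May 2023: {yoy:+.0%}")  # +3%
```

The same campaign looks like a blockbuster or a rounding error depending entirely on which baseline you choose.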
Don’t Celebrate too Early
It just makes sense that you got a 7,000% increase in engagement…right? Wrong. Before you turn on your heel to skip off and report the numbers to the rest of the team, take a moment to do some double checking.
Keep your eyes peeled for these errors, because they are rarely obvious. Perhaps you’ve got a 75% conversion rate on prospects who interact with your video blog at the MQL stage. But how big is your sample size? Maybe only four people have actually engaged, and your sample size isn’t large enough to draw any real conclusions.
Quick Fix: Always dig deeper to vet your data no matter how positive the results may be. Look at multiple aspects of each campaign to make sure you’re getting an accurate portrayal of your results.
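One concrete way to vet a suspiciously good rate is to put a confidence interval around it. The Wilson score interval is one standard choice; here’s a sketch applying it to a 3-out-of-4 "conversion rate":

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion; it stays honest when n is small."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Three conversions out of four engaged prospects: a "75% conversion rate".
lo, hi = wilson_interval(3, 4)
print(f"plausible true rate: {lo:.0%} to {hi:.0%}")
```

With only four data points, the 95% interval spans roughly 30% to 95%: the data is consistent with almost any underlying rate, so there’s nothing to celebrate yet.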
Understand Which Metrics Matter
Anyone with access to something as simple as Google AdWords is subject to this easy-to-make mistake. AdWords offers loads of different metrics: clicks, conversions per click, cost per click, and so on. It’s easy to look at an ad you’ve been running with a high cost per click and conclude that the ad isn’t working because its click volume is lower. However, it’s very possible that those clicks are coming from higher-value customers. To see how that ad is really performing, compare its cost per click to the average lifetime value of the customers who click it.
In the same way, it’s easy to look at an ad that has very few conversions, draw the natural conclusion that it isn’t targeting a qualified audience, and start tweaking your search terms. However, it may simply be that the webpage the ad links to isn’t the most relevant content for the lead.
It’s also easy to misinterpret your conversion rate if you aren’t equating your MQLs to closed deals correctly. For example, let’s say you’ve got 10 MQLs that all exist within one company that you end up closing. If you don’t investigate where those 10 MQLs originated, your conversion rate will make it look like you closed one deal out of all 10 MQLs. In reality, all of those MQLs have technically converted, and they should be counted as 10 closed deals so as not to skew your conversion metric.
In the real world, building this attribution model is work that is never done. You can enlist an array of tools to help you build such a model, and there will always be gaps, errors, and leaks. But stay on guard and don’t fall victim to misreading metrics.
Quick fix: Be sure you know where your data is coming from, and attribute accordingly.
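The 10-MQL example above can be sketched directly. Everything here (the company name, the field layout) is invented for illustration:

```python
# Ten hypothetical MQLs, all from one account that eventually closed.
mqls = [{"id": i, "company": "Acme Corp"} for i in range(10)]
closed_companies = {"Acme Corp"}

# Naive view: one closed deal against ten MQLs looks like a 10% conversion rate.
naive_rate = len(closed_companies) / len(mqls)

# Attributed view: every MQL belonging to a closed account actually converted.
converted = [m for m in mqls if m["company"] in closed_companies]
attributed_rate = len(converted) / len(mqls)

print(f"naive: {naive_rate:.0%}, attributed: {attributed_rate:.0%}")
```

Both numbers come from the same raw data; only the attribution step separates a "10% conversion rate" from the true picture.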
These are just a few of the common pitfalls that come from shoddy data practices. What other marketing-science flubs have you seen in the industry (or perhaps made yourself)? Tell us below!