The Numbers Don’t Lie (Or Do They?)
I’ve been on a data-gathering mission since I started my position, and now grades are up for my MBA mid-terms. In short, I’ve got numbers running out my ears! I’m probably the least numbers-conscious person you’ll ever meet (seriously, ask my husband about the calculus study incident where the book, pencil, and calculator somehow managed to fly from the table to the other side of the room), but I’ve been particularly interested in the numbers lately. Most people who like numbers say their affinity stems from the single answer numbers provide, and the “truth” shown in them. We marketers know better, which is why we tend to hate the numbers: they don’t actually provide the data we’re looking for. So, the numbers don’t lie… or do they?
Let’s start with grades and GPAs for the MBA. In theory, everyone answers a certain number of questions correctly on the exam and receives the percentage associated with those correct answers. Except there’s a curve, so you’re not technically graded against yourself or the exam, but against the rest of the class. Thus, if everyone flubs the mid-term, you could end up with a decent grade just by being better than average. That was my strategy in Economics: make sure I wasn’t the dumbest person in the class. In reality, I managed to do really well on that mid-term, missing only a few questions, even without the comparison to the rest of the class. This example generally backs up what many people seem to think: grades and GPA are really not a strong measure of a person’s intelligence or work ethic. By “not being the dumbest,” someone could end up with at least a 3.5 GPA. This is probably somewhat offset by the fact that a high GPA means they weren’t the dumbest person in every class they took, which might give a semi-accurate measure of their intelligence and/or work ethic.
My latest struggle deals with sample size and statistical significance (I know, marketers everywhere are recoiling in horror at those words, as am I!). I’m trying to determine our referral lead sources for the business, so I’ve asked the sales reps to survey their customers when they go on a sales call. On one hand, I’ve got a really small sample size, so my results aren’t statistically significant, meaning I can’t draw worthwhile conclusions. On the other hand, it’s a survey directly targeted to and answered by our customers, meaning that if what they say is true, it’s a good representation of how our customer base actually behaves. So now I’m back to the marketer’s dilemma: WHY? Why do people read this magazine or that magazine? Why does this ad appeal to one segment but not the other, and how influential is segment A over segment B? Should I start re-allocating my advertising dollars if a publication suddenly skyrockets in the survey results? I’m much more leery of changing the spending, since one or two responses can “significantly” change the data.
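To make the small-sample worry concrete, here’s a rough back-of-the-envelope sketch in Python. The numbers are hypothetical (20 surveyed customers, 4 naming a particular magazine), and it uses the crude normal-approximation margin of error, which is itself shaky at sample sizes this small (a Wilson score interval would behave better) — but it shows just how wide the uncertainty band is, and how much one extra response moves the result:

```python
import math

def margin_of_error(successes: int, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a survey proportion
    (normal approximation; unreliable for very small n)."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: 4 of 20 customers name "Magazine A" as their referral source.
share = 4 / 20                     # 20% of respondents
moe = margin_of_error(4, 20)       # roughly +/- 17.5 percentage points

# One additional respondent naming Magazine A shifts the observed share:
share_plus_one = 5 / 21            # jumps to about 23.8%

print(f"share = {share:.1%} +/- {moe:.1%}")
print(f"one more 'yes' response moves it to {share_plus_one:.1%}")
```

With a 20-person sample, a 20% share carries an uncertainty band wider than the share itself, so the honest reading is “somewhere between almost nobody and well over a third” — which is exactly why re-allocating ad dollars off one survey wave is risky.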
Lastly, I think survey numbers in general are a little fuzzy. Did the study control for different factors, like lifestyle, age, product type, etc.? I’ve seen a lot of studies that quote statistics, but statistics are easy to skew. I’m currently trying to aggregate data to determine the “real” response to our ads. What is the best way to change them to improve our numbers in the survey? Is the survey sample really indicative of our customers’ thoughts and behaviors? I think the aggregate data is very telling, and the moderators also give you real comments from real participants, which helps immensely. I’ve found the comments to be much more helpful than the numbers in determining why our ads did not score as expected.
So, while I’m currently chasing the numbers, I still think the numbers need to supplement comments, conversation, and human observation. The numbers may tell the truth and nothing but the truth, but rarely the whole truth: they’re gathered by people, so they’re going to have some slant from someone!