Everywhere I look, there are numbers, and pressure to provide numbers. Fill out this survey. Fill out another for a chance to win $1000 worth of groceries. Tell us how you liked this book. Tell us how you liked your flight. Tell us how the service was at the bank. Rate your purchase.
And that’s just the beginning. The President’s popularity is down – or up. This television program will return next season because its ratings are up; that other one… so long. Advertising rates are tied to ratings as well, and because the attention spans of Americans are shrinking, sensational negative news and quick-laugh or quick-action entertainment draw higher numbers, and higher numbers mean higher profits.
All the stock-tracking systems show continuous numbers on public companies: the stock price by the minute, the latest P/E ratio, ratings from up to a dozen or so different services. The state of the economy is measured by GDP numbers, by inflation via the CPI [or some variant thereof], or by the unemployment rate… always the numbers.
Why numbers? Because for data to be effectively aggregated and analyzed, it first has to be quantified numerically.
All these numbers convey a sense of accuracy and authenticity, but how accurate are they? And even when they are “accurate” in their own terms, do they really convey a “true” picture?
I have grave doubts. As an author, I have access to Bookscan numbers on my sales, and Bookscan claims its data are 75-80% accurate. Yet Bookscan shows only about 25-30% of the sales my publisher is paying me for. Now, my publisher is a good publisher, with good people, but Macmillan isn’t going to pay me for books it doesn’t sell – that, I can guarantee, and a number of other authors have made the same point. If the data were really 75-80% accurate, Bookscan would be capturing three-quarters of my sales, not a quarter. The reason is coverage: Bookscan counts print sales in bookstores and other point-of-sale outlets, which Walmart and Costco aren’t, nor are F&SF convention booksellers, and ebook sales aren’t factored in at all. So those “authoritative” numbers aren’t nearly as accurate as Bookscan would have one believe.
Similar problems arise in education. My wife the professor also feels inundated by numbers. There’s pressure to retain students, because retention and graduation numbers are “solid,” but there’s no real way to measure in numbers the expertise of a singer or the ability of a music teacher to teach. And the numbers from student evaluations [as shown by more than a few studies] track more closely with a professor’s likeability and easy grading than with the professor’s ability to teach singing, pedagogy, and actual thinking. A student who switches majors because the field doesn’t suit them still counts against the department where they began, even if they graduate in another field – which in effect penalizes the most demanding fields, especially demanding fields that don’t reward graduates with high-paying jobs.
Yet the more I look around, the more people seem to rely on numbers, often without understanding what those numbers represent – or don’t represent. And there’s a real problem when decisions are made by executives, administrators, or politicians who don’t understand the numbers, and from what I’ve seen, all too many of them don’t. We see this in the environmental field, where politicians bring snowballs into Congress and claim that there can’t be global warming, or suggest that a mere one-degree rise in overall world ambient temperature is insignificant [it’s anything but insignificant, but the data and the math are too detailed for a blog post].
The unemployment numbers are another good example. The latest U.S. unemployment rate is listed at 4.5%, down from 10% in October of 2009, and supposedly a rate around five percent signifies full employment. Except… that headline number doesn’t include the 20% of white males aged 25-54 who’ve dropped out of the labor force. Why not? Because they’re not looking for work. Count them, and the unemployment rate would be around 17%.
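To make the arithmetic concrete, here is a minimal sketch of how the headline rate moves once labor-force dropouts are counted. The population figures are hypothetical round numbers chosen only to reproduce the percentages above – they are not official BLS statistics.

```python
# Minimal sketch: how the headline unemployment rate shifts when
# labor-force dropouts are counted. All figures are hypothetical
# round numbers chosen to illustrate the percentages, not BLS data.

labor_force = 160_000_000   # people working or actively looking for work
unemployed = 7_200_000      # actively looking but not working

headline_rate = unemployed / labor_force
print(f"Headline rate: {headline_rate:.1%}")    # 4.5%

# Now count adults who have stopped looking entirely (hypothetical
# figure) as both unemployed and part of the labor force.
dropouts = 24_000_000

adjusted_rate = (unemployed + dropouts) / (labor_force + dropouts)
print(f"Adjusted rate: {adjusted_rate:.1%}")    # ~17.0%
```

The point of the sketch is simply that the definition of the denominator does most of the work: the same population yields a 4.5% rate or a 17% rate depending on who counts as “in the labor force.”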
Yet, as a nation, in all fields, we’re relying more and more on numbers that all too many decision-makers don’t understand… and people wonder why things don’t turn out the way they thought.
Numbers are wonderful… until they’re not.
Great point about the need to understand the full story the numbers represent. As a business person, I and my colleagues have endless conversations around our metrics: What is really being measured? What metrics need to be included to tell the complete story (without becoming so complex that senior managers’ eyes glaze over when they’re presented)? Numbers, metrics, and statistics are powerful tools when used properly. But like any powerful tool, they can do you and others a lot of harm if misused. As the saying goes, “Figures don’t lie… but liars can figure.”
On unemployment figures: in the UK, if you retire early you are not labelled as unemployed but as economically inactive. When you reach 65 (and this age is now rising), the term used is ‘retired’.
So people who are able to retire early are implicitly assumed to be not doing their bit for the economy.
I’ve been officially ‘retired’ for over a decade, but I still get paid for writing and I still get royalties on sales of one book – including foreign sales – so I’m still doing my bit for the balance of payments.
My data show that you are 100.1% correct…
Seriously though, this is – perhaps inextricably – tied to the overwhelming amount of information readily available, and the response to that flood is something I find petrifying. Rather than encouraging deeper analytical engagement with complex issues, the prevailing reaction – across academia, media, and naturally politics – has been to embrace oversimplification and operate accordingly.
In court, I’ve had judges tell me flat out that they were not going to rule on certain legal issues because they preferred the issues to be addressed either unnecessarily at trial (i.e., more people talking, rather than anyone reading precedents and even undisputed evidence) or, worse yet, by an appellate court. Everything is a volume industry, and the perceived panacea against being overwhelmed seems to be simply to skim ever more shallowly.
I don’t mean to attack numbers per se – I’m more likely to read articles with numbered sections than potentially rambling pieces dealing with the same subject matter.
But it feels like when everyone simply wants a simple answer to every question, there is a litany of sources eager to provide simplistic answers.
Thoughtful as always 🙂 I’d like to add a troubling trend that I have seen recently as a computer industry analyst. Business surveying online is, as far as I know, dominated by a tool called SurveyMonkey. It is cheap and easy to use, but, among other things, it fits everything into a linear “1-5” grading format. As a result, you may wish to complain about a poorly designed customer experience, but the form only allows you to complain about the poor schmo who must handle the interaction with you over the phone. Or you think that climate change should be a key topic of discussion in a political poll, but SurveyMonkey won’t let you say why – or, often, even mention the topic.
While surveying in my industry has always been problematic, SurveyMonkey seems to have brought “data blinders” to a new level. I know, for example, that much of the problem with help desks is (a) with phones, poorly designed “decision trees” that may not allow a question that crosses multiple systems, and (b) with online help, canned responses that answer the wrong question. Yet SurveyMonkey will inevitably not allow a cross-system “rating” and will not allow you to critique the design.
I will add that when I had to run surveys, I found it most useful to add an “other” category to questions, to give the respondent a chance to write in detail, and especially to ask, “Is there anything I should have asked but didn’t?” It was amazing how the meaning of the data changed when you listened to what the respondent really wanted to tell you.
To be fair to SurveyMonkey, which I use a fair amount, it can support any question structure you can have in a paper survey – but many users are too lazy to do anything other than take the easy preset formats.
I myself have lost count of the number of surveys I’ve abandoned halfway through because a question is compulsory to answer but doesn’t have my answer, or anything like it, in the list of options! As someone who sends out surveys, I try to be good about answering ones from suppliers I deal with, but I refuse to pick something I don’t agree with just to get to the end.
All surveys should give options for ‘none of the above’ and ‘refused to answer’, with a box at the end for comments. Those that don’t are not worth the bother of taking, because they are badly designed and so won’t give a true picture.
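To make that design rule concrete, here is a minimal sketch of a question structure that always carries the escape options plus a free-text box. The class and field names are hypothetical illustrations, not any real survey tool’s API.

```python
# Minimal sketch of a survey question that always offers escape
# options plus a free-text comment box. Names are hypothetical,
# not any real survey tool's API.

from dataclasses import dataclass

ESCAPE_OPTIONS = ["None of the above", "Refused to answer"]

@dataclass
class Question:
    text: str
    options: list[str]
    comment_box: bool = True    # free-text "anything else?" field

    def all_options(self) -> list[str]:
        # Every question gets the escape options appended, so a
        # respondent is never forced to pick an answer they reject.
        return self.options + ESCAPE_OPTIONS

q = Question(
    "How should the village green be used?",
    ["Playground", "Allotments", "Leave as is"],
)
print(q.all_options())
```

The design choice is that the escape options live in one place and are appended automatically, so no individual question can quietly omit them.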
I am in the process of creating a survey for our village on neighbourhood planning, and I predict that the majority of the effort will go into processing the responses to the “Any other comment” section attached to each question, i.e., the analogue responses that won’t fit into any of the digital options.
People like to give their views, not be taken down pre-prepared paths that can be quickly analysed by computer. I am with John Prigent on this one, although it is going to be a pain to consolidate the responses and get at the real feelings of the village.
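For what it’s worth, the first pass over those free-text answers can be partly automated. A minimal sketch, assuming the comments are already collected as strings; the theme words and sample comments are hypothetical examples, and someone still has to read the comments themselves:

```python
# Minimal sketch: a crude first pass at consolidating free-text
# survey comments by tallying recurring theme words. The themes
# and comments below are hypothetical examples; reading the
# comments still has to be done by a person.

from collections import Counter

comments = [
    "More housing, but keep the green spaces.",
    "Traffic through the high street is the real problem.",
    "Please protect the green belt; traffic is bad too.",
]

themes = {"housing", "traffic", "green", "parking"}

tally = Counter()
for comment in comments:
    # Normalise each comment to a set of lowercase words.
    words = {w.strip(".,;!?").lower() for w in comment.split()}
    tally.update(words & themes)   # count each theme once per comment

for theme, count in tally.most_common():
    print(f"{theme}: {count}")
```

A tally like this only flags which topics recur most often; the “real feelings” still come from the full text, which is exactly the point both commenters are making.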