Mark Twain is reputed to have said that, on average, a man with his head in the oven and his feet in a bucket of ice water is comfortable. Today, that aphorism is more worth heeding than ever. Everyone seems to be obsessed with numbers, but most people really fail to understand all the numbers they so blithely cite or follow.
For example, in Cedar City, in January the relative humidity is often over 70%. Sounds really humid, doesn’t it? It’s not. Not in the slightest. The average high temperature is 42 degrees Fahrenheit, the average low 17F, and the altitude is close to 6,000 feet. At those temperatures, the maximum amount of water the air can hold [at 100% relative humidity] is between 2 and 4 grams per kilogram of air, and at the higher altitude, a kilogram of air occupies a larger volume than at sea level, which means the water vapor is even more diffuse. By comparison, on a mild spring day at sea level, with the temperature at 70F and a relative humidity of 50%, each kilogram of air would hold 8 grams. So 50% relative humidity at 70F means twice as much water vapor as 100% relative humidity at 42F. Of course, that’s why it’s called relative humidity, and why it doesn’t mean nearly as much in the winter as in the summer.
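The comparison can be sketched numerically. Below is a rough Python calculation using the Magnus approximation for saturation vapor pressure; different psychrometric formulas and pressure assumptions shift the exact gram figures somewhat from those quoted above, but the winter/summer contrast comes through either way.

```python
import math

def grams_water_per_kg_air(temp_f, rh_percent, pressure_hpa=1013.25):
    """Approximate water-vapor content (grams of water per kilogram of air),
    using the Magnus formula for saturation vapor pressure."""
    temp_c = (temp_f - 32) * 5 / 9
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
    e = e_sat * rh_percent / 100  # actual vapor pressure
    # 0.622 = ratio of the molar mass of water to that of dry air
    return 1000 * 0.622 * e / (pressure_hpa - e)

print(grams_water_per_kg_air(17, 100))  # saturated air at the winter low
print(grams_water_per_kg_air(42, 100))  # saturated air at the winter high
print(grams_water_per_kg_air(70, 50))   # the mild spring day at 50% RH
```

Even fully saturated, the cold winter air carries only a few grams of water per kilogram, while the spring day at half saturation carries noticeably more.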
In terms of income, averages can be extremely deceptive. In 2014, the mean [or average] U.S. family income was $72,641. That doesn’t sound so bad, but the median [the midpoint income, with half the incomes above and half below] family income was $59,939. And neither the median nor the mean indicates that 15% of American families, or roughly forty-five million people, have incomes below the poverty level of $23,500 for a family of four or $11,770 for a single individual [before federal and state benefits], that 66% of all Americans earn less than $41,000, or that half of all income was earned by the 20% of families earning over $100,000.
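The mean/median gap is easy to reproduce in miniature. Here is a toy sketch with ten invented household incomes [made-up numbers, not the census figures above]: a single high earner drags the mean well above the median.

```python
from statistics import mean, median

# Ten hypothetical household incomes; the top earner skews the average.
incomes = [18_000, 25_000, 32_000, 41_000, 55_000,
           60_000, 75_000, 95_000, 140_000, 400_000]

print(mean(incomes))    # 94100.0 -- the "average" income
print(median(incomes))  # 57500.0 -- what the typical household makes
```

Drop the one outlier and the mean falls to roughly $60,000, while the median barely moves, which is why the median is usually the better guide to the “typical” case.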
EPA estimated mileage numbers are another case where it helps to know what’s behind the numbers. The EPA test protocol is based on the car model in question being driven at legal highway speeds 45% of the time and in city traffic 55% of the time. Virtually all cars get better mileage at highway speeds than in local traffic; so if you drive exclusively in the city and suburbs, your vehicle is unlikely ever to reach the EPA estimated mileage figures. Nor will it reach them if you’re one of those drivers who cruise at speeds in excess of 80 mph.
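The weighting itself is worth seeing. The EPA’s combined figure is a harmonic weighted mean, since fuel burned per mile [not miles per gallon] is what accumulates over a trip. The 22 and 32 mpg city/highway ratings below are made up for illustration:

```python
def combined_mpg(city_mpg, highway_mpg, city_share=0.55):
    """Harmonic weighted mean of two fuel-economy ratings.
    Gallons per mile averages linearly, so mpg figures must be
    combined through their reciprocals."""
    highway_share = 1 - city_share
    return 1 / (city_share / city_mpg + highway_share / highway_mpg)

print(combined_mpg(22, 32))                  # sticker-style combined figure
print(combined_mpg(22, 32, city_share=1.0))  # what an all-city driver sees
```

With these made-up ratings the combined figure works out to 25.6 mpg, while the all-city driver sees only the 22 mpg city rating: drive only in town and you will never match the combined number.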
Another problem with numbers is that far too many organizations are so obsessed with quantifying performance that they insist on quantifying the unquantifiable. My wife the voice and opera professor faces this every year, and each year the quantification demands get stronger and the insistence on a wider range of objective performance data gets louder… and the accompanying paperwork gets more involved and more time-consuming. One of the basic problems with rating voice performance is that, to begin with, unless a singer can match pitch, sing on key, and keep the proper tempo and rhythm, they fail. Above that basic level of performance, objective quantification becomes close to impossible, because there are no objective standards that apply across the board. Some professional singers are limited to two octaves or so; some few can sing a range of four. How does one quantify the richness or timbre of a voice, or the phrasing, or the breathing? What about the occasional voices that are unique, that go beyond mere technique? But the educational mavens want numbers! The same is true of writing. I’ve seen a great deal of writing over the years that is grammatically correct… and terrible. I’ve seen great storytellers with terrible grammar. Objectively weighing writing through a set of rubrics or “objective” parameters is close to useless – except for weeding out those who can’t write at all.
So why are we so obsessed with numbers when it’s very clear, at least to me, that there are places for numbers and places where relying on numbers makes no sense?
One reason is that, as a society, we fear what we think is the “tyranny of subjectivity,” of relying on personal and professional judgment that can be warped by factors unrelated to the quality [or lack thereof] of what is being measured or judged. Numbers seem so much more “impartial.” The problem is that they can be just as biased in their own way… and very few people seem to realize that. Except Mark Twain, who also said, “There are lies, damned lies, and statistics.” Yet we are swamped in a sea of statistics demanded by more and more institutions, organizations, and government bureaucracies, all of which seem to think that the numbers, and only the numbers, hold all the answers.
I’m a bit of a fence-sitter on this issue because I can see it from both sides.
For those in positions of leadership or administration, solid information on which to base their policies and decisions is essential. If they cannot measure the effects of their decisions, they cannot know whether those decisions and policies are worthwhile.
On the other hand, I agree that there are many things that are difficult, if not impossible, to quantify objectively. So I’m not really sure what the solution is.
One thing I think you could quantify with performers such as singers is consistency. Humans aren’t perfect, so even the best performers must occasionally fail to hit a note or lose time. Even elite athletes don’t give their best performance in every competition, so I imagine singers would be no different. A singer who consistently performs at their best, even if that best is not the greatest, may be preferable to a singer whose best is better but who reaches it far less often.
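Consistency, at least, does reduce to numbers. A minimal sketch, using invented 0–100 judges’ scores for two hypothetical singers: the mean captures how good the performances are on average, and the standard deviation captures how steady they are.

```python
from statistics import mean, stdev

# Invented judges' scores (0-100) for ten performances apiece.
steady    = [88, 87, 89, 88, 86, 88, 87, 89, 88, 87]
brilliant = [95, 70, 96, 68, 94, 72, 93, 69, 95, 71]

for name, scores in (("steady", steady), ("brilliant", brilliant)):
    print(f"{name}: mean={mean(scores):.1f}, stdev={stdev(scores):.1f}")
```

The “brilliant” singer’s peaks are higher, but the spread is an order of magnitude wider; which profile is preferable depends on the engagement, which is exactly the kind of judgment the raw numbers don’t make for you.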
If one could identify all the dimensions of measurement (of which consistency might be one), one could in principle assign numbers to each. I don’t know how likely it is that for anything nontrivial, one could identify all the truly independent and significant dimensions, let alone both repeatably and efficiently quantify values on each.
In some cases though, the exercise might be interesting, even if it is never actually applied. As you suggest, different profiles of strengths might be better suited to different problems; and if nothing else, an understanding of the complexity of meaningful objective measurements might be used to enlighten much more concise subjective measurements (preferably by a small odd-numbered group rather than one person).
The notion of reducing evaluation of complex human performance issues to something nearly automatic is probably unrealistic, much as it offers repeatability and avoids appearance of bias. Carrying it to the point of a single number verges on the absurd.
And yet…when we make decisions, are they really rational at all? Or are they simply pattern-matching based on PREVIOUS after-the-fact rationalizations, for which we then again provide an after-the-fact rationalization? There’s some reason to believe that might be the case, esp. if rapid decisions are required. An understanding of the difficulties might at least improve the quality of the rationalizations, and thus, of subsequent decisions. 🙂
Ok, that’s the human part. The programmer in me says we’re all just machines, even if (as per a recent article) some of our thought processes are better modeled by quantum theory than by classical theory (which is not to imply anything about the underlying mechanisms). Perhaps there’s an analogy to uncertainty; an inherent limit to measurement, or quantities that are indeterminate until measured. Yet within those limits, one can manage quite a bit of repeatability with a sufficiently accurate model; and for a relatively limited performance measurement, that model should be able to be considerably simpler than the people in question, if still beyond present understanding.
So perhaps embracing the complexity, and for now bypassing the impractical parts while still being aware of them, might lead to future machine-aided evaluation techniques that would eventually have increased relevance, lower overhead, and greater repeatability. They would of course be somewhat intrusive…
I would contend that it’s at heart an issue of trust. If I am a federal bureaucrat in charge of overseeing voting rights, I want some serious hard numbers confirming that people are voting in the proper proportions. If I am a manager with lackluster employees, I want numbers to confirm that those employees are performing correctly.
Which all sounds reasonable, but comes down to me not trusting someone or someones to behave the way I want them to, and not being in a position to effectively understand what’s happening.
There have been a few attempts to require educational administrators to actually enter the classroom or lecture theatre as an instructor for at least a semester, so that they have to experience the effect that decisions requiring statistical data collection have on their workload. When the time required for data-collection administrivia equals or exceeds the time required to thoroughly plan and actually present lessons, lectures, and guided projects, something is wrong with the expectations. I suspect many data collection projects would disappear if the person requiring the information actually had to gather it along with the others assigned to the task.
I do understand the need for specific data collection to justify certain programs, but all too often the effective use of the data is very limited or even nonexistent. Perhaps the two guiding principles for brainstorming possible solutions, “is it feasible?” and “is it desirable?”, should be used more frequently.
I run into this issue myself at work. How do you quantify a police officer’s performance without creating quotas of some sort? Is the officer who writes 25 tickets performing better than the one who only writes 10? What if those 25 are written in the first couple of days of an evaluation period – and the officer does nothing but hide for the rest, because he knows he’s got his numbers – while the guy writing 10 is out every day and writes those over the entire period? Is a traffic ticket equal to a misdemeanor arrest? How many tickets does a DWI equal? What about a felony arrest? Is the officer doing a better job who knows every business and the staff on the street – but seldom writes a ticket? Is a patrol officer better who follows up a lot of cases on their own – but is often busy and unavailable for new calls because of that – or one who takes a very thorough initial report and moves on to the next call promptly, leaving follow-up to the detective squad?
The simple truth is that some things can’t be quantified that easily. Even some of the more obvious measurable things aren’t always good indicators on their own. To steal a comparison… 2 people can be given ingredients of equal quality, and take the same time to do the job… but a master pastry chef is going to turn out a delicious apple pie, while a toddler only turns out a pile of inedible mess.
It would be worth your while to read Malcolm Gladwell’s book “Outliers”, particularly if you are inclined to ponder the meaning behind the saying “lies, damn lies, and statistics”. Note that this is the same Malcolm Gladwell who recently wrote an article condemning the large elite universities for their disproportionate focus on their enormous endowments to the detriment of their students.
I’ve read Outliers. Gladwell effectively makes the point that not all numbers are the same; their meaning depends on context.