Archive for the ‘General’ Category

Numbers… and Meaning

Everywhere I look, there are numbers, and pressure to provide numbers. Fill out this survey. Fill out another for a chance to win $1000 worth of groceries. Tell us how you liked this book. Tell us how you liked your flight. Tell us how the service was at the bank. Rate your purchase.

And that’s just the beginning. The President’s popularity is down – or up. This television program will return next season because the numerical ratings are up; that other one… so long. Advertising rates are tied to ratings as well, and because the attention spans of Americans are down, negative sensational news and quick-laugh or quick-action entertainment get higher numbers, and higher numbers mean higher profits.

All the stock-tracking systems show continuous numbers on public companies: the stock price by the minute, the latest P/E ratio, ratings by up to a dozen or so different services. The state of the economy is measured by GDP numbers, by inflation as tracked in the CPI [or some variant thereof], or by the unemployment rate… always the numbers.

Why numbers? Because for the data to be effectively aggregated and analyzed, it has to first be quantified numerically.

All these numbers convey a sense of accuracy and authenticity, but how accurate are they? And even when they are “accurate” in their own terms, do they really convey a “true” picture?

I have grave doubts. As an author, I have access to Bookscan numbers about my sales, and, according to Bookscan, their data are 75-80% accurate. Yet Bookscan’s numbers account for only about 25-30% of the sales my publisher is paying me for. Now, my publisher is a good publisher, with good people, but Macmillan isn’t going to pay me for books it doesn’t sell. That, I can guarantee, and a number of other authors have made the same point. For one thing, Bookscan data represents print sales in bookstores and other venues that report point-of-sale data, which Walmart and Costco don’t. Nor do F&SF convention booksellers, and ebook sales aren’t factored in at all. So those “authoritative” numbers aren’t nearly as accurate as Bookscan would have one believe.

Similar problems arise in education. My wife the professor also feels inundated by numbers. There’s constant pressure to retain students, because retention and graduation numbers are “solid,” but there’s no real way to measure numerically the expertise of a singer or the ability of a music teacher to teach. And the numbers from student evaluations [as shown by more than a few studies] track more closely to a professor’s likeability and easy grading than to the professor’s ability to teach singing, pedagogy, and actual thinking. When a student switches majors because they’re not suited to the field, the major/department in which the student began is penalized with lower “retention” numbers even if that student graduates in another field – which, in effect, penalizes the most demanding fields, especially demanding fields that don’t reward graduates with high-paying jobs.

Yet, the more I look around, the more people seem to be relying on numbers, often without understanding what those numbers represent, or don’t represent. And there’s a real problem when decisions are made by executives or administrators or politicians who don’t understand the numbers, and from what I’ve seen, all too many of them don’t understand those numbers. We see this in the environmental field, where politicians bring snowballs into Congress and claim that there can’t be global warming, or suggest that a mere one degree rise in overall world ambient temperature is insignificant [it’s anything but insignificant, but the data and the math are too detailed for a blog post].

The unemployment numbers are another good example. The latest U.S. unemployment rate is listed at 4.5%, down from 10% in October of 2009. Supposedly, a five percent unemployment rate signifies full employment. Except… this number doesn’t include people like the 20% of white males aged 25-54 who’ve dropped out of the labor force. Why not? Because they’re not looking for work. If you counted those dropouts as unemployed, the rate would be around 17%.
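For those who want the arithmetic spelled out, here’s a minimal sketch of how that recount works. The figures below are assumed, round numbers chosen only to match the percentages above; they’re illustrative, not official BLS data.

```python
# How the headline unemployment rate shifts when labor-force dropouts
# are counted. All figures are assumed, illustrative round numbers,
# not official BLS data.

labor_force = 160_000_000   # assumed: people working or actively looking
unemployed = 7_200_000      # assumed: actively looking, not working

# The headline (U-3 style) rate counts only those actively looking.
headline_rate = unemployed / labor_force
print(f"headline rate: {headline_rate:.1%}")         # 4.5%

# Recount, treating those who've stopped looking as unemployed members
# of the labor force. The dropout total is likewise an assumed figure.
dropouts = 24_000_000
broad_rate = (unemployed + dropouts) / (labor_force + dropouts)
print(f"rate counting dropouts: {broad_rate:.1%}")   # 17.0%
```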

Yet, as a nation, in all fields, we’re relying more and more on numbers that all too many decision-makers don’t understand… and people wonder why things don’t turn out the way they thought.

Numbers are wonderful… until they’re not.

Your Pain Doesn’t Count

Right now, statistics show that, in the United States, working-class men without college educations in the 25-54 age group have the lowest workforce participation levels ever, with one in five not even in the workforce. This cohort is the only segment of the U.S. population that has shown an actual decrease in life expectancy, a marked increase in illness and suicide, and a declining earnings level – bringing it close to the statistical levels of less-educated minority males, who have not shown any declines (but no significant improvement in recent years, either). In fact, the mortality level for middle-aged, non-college-educated white males is now thirty percent higher than for blacks, the first time in history that any cohort of white males has had a higher death rate than blacks of the same age.

This hollowing-out of the middle class – workers who used to be paid far better wages for semi-skilled work than they are at present – has led to a great upwelling of anger among them, anger that was focused against Hillary Clinton and the Democrats in the last election and largely in support of Donald Trump. These men, and their families, are angry; they’re hurting and lashing out at pretty much anyone and anything they think is getting a “better deal” from government and industry. They’re essentially claiming that no one is hearing their pain.

I understand that. What I don’t understand is why this group is so angry at women and minorities.

Here in the United States we’ve had two groups that have been marginalized and denied rights ever since the U.S. was founded. While one group technically and legally received the right to vote over a century and a half ago, in practice that right was denied in one way or another in most of the country until little more than fifty years ago. The other group didn’t even get the legal right to vote until the twentieth century, and for much of U.S. history, in many parts of the country, did not even have the effective right to hold property.

African-Americans and women remain comparatively disadvantaged to this day, no matter what stories individual white males can come up with anecdotally. I can recall stories about black men who lived in shacks but owned Cadillacs, but I didn’t learn until I was older that this was because all too often they were denied decent real property or, sometimes, any at all. There was an ancestor in my wife’s family who had a hard time in North Carolina, possibly because he was mulatto, part-black. Then he moved to Kentucky and passed as white. He became a very successful farmer and was one of the first to own a car, largely, I suspect, because he passed as white. He didn’t change; the community’s acceptance did.

After watching three wives and six daughters – and they are privileged compared to many women – battle gender discrimination in a wide range of occupations and fronts over the past fifty years, after watching how men gamed the federal government civil service system to benefit males, after seeing how much easier it was for me to raise four children for several years as a single father than it was for single women, and after living and working for more than twenty years in the extremely patriarchal culture of Utah, otherwise known as the semi-sovereign theocracy of Deseret, I tend to lose patience with people who complain about “reverse discrimination.”

But regardless of my impatience or what all too many people seem to believe, the plain fact is that all three of these groups are hurting and that the current political system is pitting them against each other. What’s worse is that each of these groups is pretty much ignoring the others’ pain. Is this really going to improve the situation or help any of them… or the United States?

The Violence Addiction

If one compares movies or television shows of the 1960s to those of today, it’s fairly obvious that the level of action, especially violence, and the frequency of violence have increased dramatically. So has the graphic depiction of that violence. I’m far from the first to have noticed this; it’s become almost a cliché.

Nor am I the first to have pointed out that exposure to so much violence tends to inure those who watch it to violence, both real and staged. What’s paradoxical about all this is that, as Steven Pinker and a number of other scholars have pointed out, life, particularly in the U.S., is on average far less violent today than it has ever been, yet on screen it’s more violent than it’s ever been. Interestingly enough, U.S. parents are also far more worried about such violence befalling their children than ever before, even though violence against children in the U.S., especially children of the middle and upper classes, is markedly lower, not higher.

Yet, as a society, we seem to be becoming more and more addicted to violence in entertainment. Some scholars have pointed out as well that the continuing increase in violent public entertainment was a hallmark of the declining Western Roman Empire. Today, violent acts caught and broadcast via the internet seem to spark copy-cat actions.

All of this would seem to suggest that the emphasis on violence in public media and entertainment is anything but a welcome trend, yet it continues to increase, with each television season dripping with more blood and action-packed violence than the previous one – certainly suggestive of a societal addiction to voyeur-violence.

At the same time, there’s an aspect to this that is generally overlooked, and one that, to me, is equally worrisome, if not more troubling. As many of my readers know, for the most part I tend to keep graphic violence to a comparative minimum. I certainly acknowledge that violence exists and is a part of any society or culture in some form, but what’s usually missed is that the vast majority of violence is a symptom of other factors or a reaction to another’s violence.

Thus, concentrating attention on the violence itself, or on the application of violent force to stop otherwise unchecked violence (necessary though that may sometimes be), tends to overshadow or minimize the causes of the initial violence.

But then, trying to solve problems that lead to violence just doesn’t play well on the screen, and it doesn’t have the satisfying crunch of seeing the so-called villain pulverized at the end of a great action sequence.

Can’t… Don’t… or Won’t

My wife just received an email from a student seeking to be a music major at the university. The student wanted to accept the scholarship that the department had offered, but wanted to know how to do so. My wife doesn’t know whether to be frustrated, amused, appalled, or enraged, if not all four. Why?

Because the letter offering the scholarship and setting forth the terms is sent in duplicate. All the student has to do is sign one copy, accepting the scholarship and its terms, and return it. Or, if the terms aren’t acceptable or the student decides to go elsewhere, rejecting the scholarship. The letter states all that precisely. This is not exactly complicated. Neither are the simple written scholarship requirements.

One of the terms that is spelled out in the scholarship letter is that to receive a Music Department scholarship, a student has to major in music. And every year there are several students who fail to follow the written requirements for their scholarship, even after being explicitly told both verbally and in writing what music department courses to take and in what order… and they lose their scholarships because they didn’t read the requirements or bother to follow directions. And there are those who register to major in other disciplines and then are shocked to learn that they don’t get a scholarship unless they major in music.

The department offers several levels of scholarships. The ones that cover all tuition for four years essentially have two major conditions: major in music, taking the requisite courses, and maintain a 3.5 grade average. Despite having high ACT/SAT scores, and good high school grades, there are always a few students who don’t seem to have read or understood those two requirements… and lose their scholarships.

Then there are the ones who try to register for courses that have pre-requisites, without having taken the earlier courses, or the ones who wait until their senior year for a course that’s only given every other year, despite the fact that this is noted in print in more than a few places. And then, of course, some administrators pressure the professors to make special accommodations. My wife doesn’t, but a few do.

All this conveys a strong impression that a great number of high school graduates don’t read, or don’t comprehend what they read… or don’t bother to. Pretty much every member of the Music Department, and any other department, has noticed this trend. Students will ask questions, such as, “What’s required for my jury [or gateway or recital]?” Seemingly not a bad question, except the requirements are listed in the syllabus and in the Voice Handbook. Students are also told the requirements verbally, and repeatedly. Did I mention that a great number of them don’t listen, either?

All of which brings up some questions: Just what aren’t these students being taught by their parents and/or their high schools about responsibility and consequences? How do so many of them get to the point of nearly legal majority without being held accountable? And why do so many colleges and universities make it even harder for professors to hold students accountable?

Over-Modeling

There’s an old saying along the lines of, if the only tool you have is a hammer, everything looks like a nail. The newest version of this seems to be, if a “model” works in one setting, it works everywhere.

I’ve already railed about the inapplicability of “the business model” to education and the arts, but the “model” problem goes far beyond that. There’s also the “peer review” model, which is a mainstay of the scientific community, and I don’t have a problem with its proper use in the pursuit of better science, especially in areas where there’s hard factual evidence. But the peer review concept is also creeping into education and elsewhere… and it’s incredibly easy to abuse in situations where the background conditions and the environment differ markedly. Peer review in such areas as music, art, theatre, and dance becomes essentially a race for awards of some sort. It also distorts education, because one “sterling” piece of artwork, one concert, a high ranking in vocal or instrumental competition by one or two students – none of these reflect accurately the value or depth of the education received (or not received) by all students in a given program. Neither do tests, but legislators and administrators keep searching for “hard benchmarks,” regardless of how flawed or inapplicable they may be.

The same problem applies to cost-effectiveness models. One of the big problems with the F-35 is that it can do everything “pretty well” and very little really well, the result of attempting to develop a single cost-effective aircraft that could be used by all four service branches. So we have a very expensive aircraft that does nothing in a superior fashion.

In education, cost-effectiveness gets translated into “the most graduates for the lowest cost” or “the best education for X dollars,” neither of which is particularly effective at producing high school or college graduates who can write, think, and calculate without extensive electronic aids [which usually don’t help].

Even in business, cost-effectiveness can be over-applied. Cost-effective production of existing products, undertaken to avoid more expensive product improvement or the introduction of newer products, can be the road to bankruptcy. And sometimes, the opposite is also true; it depends on the situation.

Models and methods are tools, and just like physical tools, knowing when to use them, and when not to, is critical. But then, that requires critical thought, and not just hopping on the bandwagon of the latest and greatest new technique, model, paradigm, or the like.

Harassment and Scandal

Bill O’Reilly is now out at Fox News, following by a few months the ouster of Roger Ailes, each removal the result of the revelation of a long and continuing pattern of inappropriate sexual behavior including sexual harassment. Does such behavior, as well as the long-term retention of two such individuals by the highly conservative Fox organization, have anything to do with the political outlook of Republicans and conservatives?

Certainly, a great number of liberals think so, especially some in my own family, but are conservatives really more likely to behave badly in the sexual/gender area than are liberals?

I can name a number of liberals and Democrats who have engaged in what most would call sexual improprieties, going all the way back to President Grover Cleveland, who fathered a child out of wedlock, Franklin Roosevelt, who had several affairs while in public office, John Kennedy, and Bill Clinton. On the Republican side, the most obvious were Warren Harding, who had a fifteen-year affair with Carrie Fulton Phillips, Dwight Eisenhower, who had a brief affair with his military driver, and Nelson Rockefeller, who had quite a few indiscretions. The list of sexual political improprieties among national political figures is long, and it includes roughly equal numbers of Republicans and Democrats. What it doesn’t include is many women [I could find only one woman out of more than 100 Republicans and Democrats listed for sexual crimes and improprieties], which suggests that women in power are either far less likely to engage in sexual indiscretions or less likely to be found out.

On the issue of harassment, however, conservatives and Republicans seem to do more of it, or at least they’ve been found out and charged with it more often. In addition to the Ailes and O’Reilly cases, there was Senator Bob Packwood (R-OR), who resigned in 1995 under the threat of public Senate hearings after ten female ex-staffers accused him of sexual harassment. As candidates for the Republican Presidential nomination, both Donald Trump and Herman Cain were accused of sexual harassment. Clarence Thomas was as well. Then there was the Oklahoma state representative who used state funds to pay off a judgment against him for sexually harassing a staffer, or the Texas Congressman who tried to gut the Office of Congressional Ethics and who had earlier been accused of sexual harassment by an employee whom he later fired, or the Wisconsin state assembly Republican majority leader who was convicted of sexual assault, or… the list is very long.

Democrats certainly aren’t blameless, starting with Teddy Kennedy, former Senator Brock Adams, and Congressmen Tim Mahoney, Jim Bates, and Mel Reynolds, but the list of actual Democratic harassers is about a fourth the length of the list of Republican harassers.

And then I came across an interesting chart of criminal misconduct by Presidential administration. Since Richard Nixon, whose administration resulted in 76 criminal indictments, 55 convictions and 15 prison sentences for members of his administration, there have been four Republican administrations and three Democratic administrations. The Democratic administrations had three criminal indictments, one conviction, and one prison sentence. The Republican administrations had 44 criminal indictments, 34 convictions, and 19 prison sentences.

The way it seems to stack up is that political viewpoint doesn’t make much difference in terms of consensual or semi-consensual sexual indiscretions, but the Republican/conservative outlook seems to result in more abuses of power and position.

But, from what we’ve seen recently, is that really surprising? Or is it that Democrats are really better politicians and are better at sexual persuasion?

All That Different?

Because human beings don’t have chlorophyll and a few other physio-chemical adaptations, for us to survive, we need to eat either other forms of life or the products of other forms of life. We’ve bred forms of both plant and animal life to provide food for us, and we’ve become better and better at it.

But there’s an underlying assumption behind our agricultural achievements, and that assumption is that human beings are not only superior to other forms of life on earth, but that we are fundamentally different in the way we interact with our environment.

One of the early beliefs was that human beings were the only tool-users on the planet. Now, after a raft of studies over the past fifty years or so, we know that there are quite a few other species that make and use tools. While those tools are incredibly crude compared to our tools, they are tools, and for a species to make and use a tool requires a certain amount of thinking and forethought beyond blind instinct or environmentally programmed responses. We’ve also discovered that animal tool use is, at least in a number of cases, “cultural,” in that some groups of a species use tools and others don’t, or make different tools.

Then came questions about whether animals could actually think, especially regarding “theory of mind,” that is, the ability to attribute beliefs, intents, desires, pretending, and knowledge to oneself and others, and to understand that others have beliefs, desires, intentions, and perspectives different from one’s own. Experiments with mirrors and images have shown that certain species do indeed have that ability. Crows, ravens, elephants, and certain primates behave in ways that show they are very much aware of the possible mental states and motivations of others of their species, and sometimes even of other species.

But what we’ve learned doesn’t stop there. For a long time, most biologists dismissed the idea that plants did anything but grow and reproduce in some fashion. In the last few decades, however, they’ve discovered that plants aren’t nearly as simple as had once been thought. Experiments have shown that plants of the same species communicate with each other and can warn other plants about insect attacks and other changes in the environment. They can also muster defenses against certain attacks. Unhappily, at times these defenses can be fatal if the attackers also adapt, as in the case of spruce and pine bark beetles, which are attracted to both the warning signals and the pitch secreted by the trees in an effort to repel the beetles.

At the same time, more and more experiments and evidence show that plants do learn and adapt to changes in their environment. An evolutionary ecologist at the University of Western Australia, Monica Gagliano, actually trained plants to grow in specific directions based on which way a fan blew.

What’s the bottom line of all this? That while human beings are currently the best tool-users and thinkers on the planet, we’re not the only ones, and that we’re not fundamentally different from the rest of life, just better at taking advantage of all other life-forms – except maybe bacteria and viruses, but that’s another blog.

Plastic Perfect

On Tuesday, I laughed, if ruefully, at one of the headlines in the local paper – “Plastic Surgery High in Utah” – especially after reading the article, in which researchers noted that Utah had one of the highest rates of cosmetic plastic surgery, especially breast implants and “tummy tucks.” The researchers did observe that plastic surgery rates are greater in areas where women’s higher education levels lag further behind men’s than the national average, and one was even bold enough to suggest that it might have something to do with the Mormon faith and its emphasis on “female perfection.”

Might have something to do with the LDS faith? Is that an understatement! This is the state where the rate of Prozac usage by married women is the highest in the nation. This is the only state where women’s attainment of higher education has essentially hit a stone wall, or ceiling – call it the LDS celestial glass ceiling. And, after all, with all those women having five children and their husbands still clamoring for Barbie-doll figures, how could women not feel pressured into having a tummy tuck? Or certain other “enhancements”?

As I’ve noted before, I walk, with occasional short stretches of running, most mornings, and the time I set out varies by as much as two hours. But no matter what time I walk, whether it’s at 6:30 or 8:30, or occasionally later, who do I see walking and running? Women, most of them decades younger than I am, often pushing jogging strollers built to be propelled at more than walking speed. Gym memberships are predominantly female as well. I do see a very few men, but those few are gray- or white-haired, likely out there on doctor’s orders.

But bring this up among the “faithful,” just like the “holy number” of preferred children, and it’s emphatically denied, even as the cult of the plastic perfect continues to dominate the lives of young LDS women.

Who Knew…

That the first true Greek alphabet came about because someone wanted to write down the orally transmitted works of Homer, but couldn’t, because none of the existing scripts in that part of the Mediterranean had letters for vowels – and you can’t accurately transcribe poetry [or song, either, my wife the voice professor informs me] without vowels. So this original transcriber (according to Archaeology magazine) took the vowelless Phoenician alphabet and added Greek vowels, and within a hundred years ancient Greeks became literate on a wide scale.

Now, for purists, there were two earlier scripts in the Greek world, known as Linear A, which has yet to be deciphered/translated, and Linear B. Neither was an alphabet with separate letters for vowels, and Linear B was used exclusively by a very limited number of bureaucrats and merchants for record-keeping, primarily of commodities and taxes. Definitely not for poetry or literature, or even science fiction or fantasy. Roughly a hundred to two hundred years before the Greek introduction of vowels, the same transition took place in ancient Israel [so, yes, the Jews were first to add vowels to the Phoenician alphabet, but word, literally, traveled slowly in those days].

Apparently, the Greek version of the voweled alphabet was more effective than the Hebrew version, possibly because even then entertainment topped scripture, but that also might have been because Alexander the Great conquered more territory and imposed Greek on more people. The Romans, those great practical engineers, adopted/stole everything Greek, including the idea of vowels, but streamlined and simplified the alphabet into the Latin letters that the majority of the world uses today.

And that’s why, when I include poetry and flowery language in my books, to the dismay of the action-preferred readers, everyone can read it… all because [take your pick], ancient Hebrews wanted more descriptive language in their scriptures or ancient Greeks wanted to be able to preserve the works of Homer.

Road Kill

A report released last month by the Governors Highway Safety Association shows that the number of pedestrians killed in traffic jumped eleven percent last year, to nearly 6,000, the largest single-year increase in pedestrian fatalities ever, and the highest number in more than two decades.

And this wasn’t just because more driving produced more traffic deaths overall. While overall traffic deaths increased six percent in 2016, slightly reversing a ten-year decline, pedestrian deaths increased by nearly 12%. Nor was it just one bad year. Pedestrian deaths have grown from 11% of all traffic fatalities in 2006 to over 15% in 2016 – an increase of more than a third in pedestrians’ share – and that growth occurred over a decade in which total traffic deaths dropped by almost 17%. According to a number of sources, the greatest component of this increase is distracted walking.
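Here’s a quick sketch of that share arithmetic, using assumed round counts consistent with the percentages above rather than the official NHTSA/GHSA tallies; a rise of four or five percentage points in share is a relative jump of more than a third:

```python
# Pedestrian share of traffic deaths: percentage points vs. relative change.
# Counts are assumed round figures consistent with the text, not official data.

ped_2006, total_2006 = 4_800, 42_700   # assumed 2006 counts
ped_2016, total_2016 = 5_987, 37_500   # assumed 2016 counts

share_2006 = ped_2006 / total_2006     # ~11%
share_2016 = ped_2016 / total_2016     # ~16%

print(f"2006 share: {share_2006:.1%}")                  # 11.2%
print(f"2016 share: {share_2016:.1%}")                  # 16.0%
relative = (share_2016 - share_2006) / share_2006
print(f"relative rise in share: {relative:.0%}")        # ~42%
```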

Over the past year or so, I’ve occasionally commented on the increasing functional stupidity of students and others who blithely cross streets, their heads in their cell phones, not paying attention to traffic or much else. Well… now there’s some evidence that there is a cost to such stupidity, and that those who engage in it are candidates for the Darwin Awards, whose not-quite-tongue-in-cheek criterion for receiving the award states, “In the spirit of Charles Darwin, the Darwin Awards commemorate individuals who protect our gene pool by making the ultimate sacrifice of their own lives. Darwin Award winners eliminate themselves in an extraordinarily idiotic manner, thereby improving our species’ chances of long-term survival.”

Lack of intelligence around moving vehicles isn’t, unhappily, confined to Homo sapiens, as a recent report in Royal Society Open Science confirms: the highest percentage of birds killed by moving vehicles were those with the smallest brains relative to their body size.

In short, small brains make it more likely that birds will die as road kill.

I have to wonder if we’d find the same thing if we looked at pedestrian traffic deaths.

Equal Pay

On January 29, 2016, the Obama Administration proposed a change to EEOC reporting requirements. All employers with 100 or more workers – roughly 60,000 employers with 63 million employees – already complete the EEO-1 form on an annual basis, providing demographic information to the government about race, gender, and ethnicity; the proposed change would require employers to complete a revised EEO-1 form that includes salary and pay information.

Almost immediately, the business community objected, claiming that the additional information was unnecessary, useless, and a burden. The EEOC made revisions to the proposal, which included defining “pay” as the total W-2 compensation paid to an employee, since businesses already have to compile and report that figure, and issued the revised rule in September 2016, while extending the compliance date from March 2017 to March 31, 2018.

Business interests, led by the U.S. Chamber of Commerce, are pressing the Trump administration hard to revoke the rule, saying that there’s no merit in the requirement. Trump’s Director of the Office of Management and Budget now says the matter is under review.

This is despite a huge amount of data that would appear to indicate the opposite, that, in particular, there is significant overall pay discrimination based on race and gender. The difficulty is that while statistics show that women are paid roughly twenty percent less than men, those are aggregate statistics, and both sides dispute them for different reasons.
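Part of why both sides can dispute the same number is that an aggregate gap mixes within-job pay differences with differences in who holds which jobs. A toy example, with every figure invented purely for illustration, makes the point:

```python
# Toy example: why aggregate pay-gap figures get disputed. The overall
# gap mixes within-job pay differences with occupational sorting.
# Every number here is invented purely for illustration.

jobs = {
    # job: (men, women, male_pay, female_pay)
    "engineer":  (80, 20, 100_000, 97_000),
    "assistant": (20, 80, 40_000, 39_000),
}

def weighted_avg(pairs):
    """Average pay over (headcount, pay) pairs."""
    return sum(n * pay for n, pay in pairs) / sum(n for n, _ in pairs)

men_avg = weighted_avg([(m, mp) for m, w, mp, wp in jobs.values()])
women_avg = weighted_avg([(w, wp) for m, w, mp, wp in jobs.values()])

# The aggregate gap is large because women cluster in the lower-paid job...
print(f"aggregate gap: {1 - women_avg / men_avg:.0%}")   # ~42%

# ...even though the within-job gaps are small.
for job, (m, w, mp, wp) in jobs.items():
    print(f"{job}: within-job gap {1 - wp / mp:.1%}")    # 3.0% and 2.5%
```

Whether the “real” gap is the aggregate figure or the within-job figure depends on which comparison one thinks matters, which is exactly why job-level pay data, rather than aggregates, would settle so much of the argument.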

What I find interesting is the Chamber of Commerce statement that the data would be useless. It seems to me that the data could be incredibly useful. It would go a very long way to either establishing or rejecting the idea that gender and racial pay discrimination exists.

In earlier comments, some businesses objected to the use of W-2 total compensation in the report, claiming that “base pay” was more accurate. Equal Pay advocates countered by pointing out that bonuses and other additional compensation go far more often to white males, and that total compensation – the measure adopted in the final rule – was a more accurate indicator.

The Chamber of Commerce’s opposition, at least to me, smacks of trying to keep everyone in the dark about what’s happening in the pay area, especially since business has to make the basic report anyway. It’s similar to the idea that, if the government stops funding climate research, global warming will just go away… but then, the head-in-the-sand attitude has always been a favorite of those who don’t want things to change.

NOTE: At a court hearing last Friday [April 7th], a U.S. Department of Labor regional director announced that, in investigating Google, the DOL had “found systemic compensation disparities against women pretty much throughout the entire workforce.” Google, of course, vehemently denied the charges. This was the second Silicon Valley tech company that DOL had charged with such gender pay discrimination, the first being Oracle earlier this year.

Outsiders

Recently, I’ve run across a number of articles, including some of the scholarly variety, which address the issue of “false news.” Several of them have made the points that so-called human “rationality” evolved to facilitate cooperation, not necessarily rational analyses of facts, and that the majority of human beings will accept “false news” that facilitates their inclusion in their belief group and reject verified facts that are in conflict with group beliefs.

If this is so, and from what I’ve observed, it seems to be for large groups of people, it gives rise to another question: How have human beings ever managed to evolve and develop a technological society?

The first response that came to my mind was: That’s why progress has been so slow and spotty, because you need consensus for a new way to become part of society.

One of the corollaries to this is that groups that more easily accept new and better ways will be able – still cooperatively – to outcompete groups or societies that don’t. And history tends to show that this is in fact true. When Chinese culture gradually closed itself off from outside influences, a turn symbolized by the government decree ordering the destruction of all ocean-going ships in 1525 A.D., that marked the beginning of the long, slow, and inexorable decline of China, ending with the effective destruction of the “traditional” culture in the early twentieth century.

As I’ve noted in various previous blogs, a great number of more “modern” inventions, including the mechanical computer embodied in the Antikythera mechanism, were actually developed and then forgotten hundreds if not thousands of years before some society finally adopted them. Some were discarded because they were seen as uneconomic, others because people didn’t want to change existing ways of doing things, but what’s often overlooked is that economic factors aren’t entirely “rational” either; they’re also part of a belief structure.

Some twenty-five years ago, my wife pointed out to an executive in the retail clothing world that there was a growing number of older women with money and taste who wanted professional and tasteful clothing not designed for twenty and thirty-year olds. The executive told her that there was no market for such clothing. I now know of several large retail firms making hundreds of millions, if not billions, from that market… but in the late 1980s and early 1990s, the group-belief in the clothing industry was that there was no market.

That’s where outsiders come in. They’re the people who are at least marginally part of society but who really aren’t part of a group, the ones who can set aside non-functional group “beliefs” and come up with changes.

Being an inside outsider can be dangerous, particularly in areas of belief. Many of the first theologians who started the movement that became the Reformation, like Jan Hus, ended up being executed. Although Alfred Wegener proposed the theory of continental drift in 1915, he was ignored and ridiculed for more than fifty years before conclusive evidence vindicated him.

But even inside outsiders can be trapped by believers, because, if they’re successful, they tend to attract a group of people who either share the same beliefs, or profess to share those beliefs, and, in time, that reduces the former outsider’s objectivity. Companies started by outsiders, such as Microsoft, Apple, or even Walmart, often have this problem after a while. One of Edison’s great advantages was that, for the most part, but not always, he relied on what he could prove or disprove, an attitude that often goes against group beliefs.

The problem, of course, is that for every outsider with a good idea, there are a dozen with bad ideas [which is often why they’re outsiders]. The fact that bad new ideas always outnumber good new ideas may also be why stable societies tend to be conservative. It’s also, I suspect, why when a society incorporates too much change too quickly the results are almost always disastrous, or close to it. Yet, without change, cultures stagnate and collapse… or are taken over or conquered by other cultures.

All of this is why I’m speculating, and it’s only a speculation, that societal/cultural success depends over the long run on the successful use and management of “outsiders” and their ideas.

Books – Getting There or Being There

There are so many different ways to categorize or analyze books that anything I write is likely to have been said or written many times before, but the other day something struck me, in an analytical sense, that I’ve known intuitively but never really verbalized. It is simply that there are some books that one reads merely to get to the end, to find out what happens or who did what in what fashion, and there are others where each page is a delight, and one is disappointed when the book ends. The first kind of book is about “getting there,” and the second is more about “being there.”

Just as there are both kinds of books, and a great many that fall in the middle, readers also tend to fall along that spectrum as well.

Personally, I tend to like books that incorporate both aspects, and, obviously, I try my best to create both feelings, but my books, I suspect, tend to have a strong component of “being there,” and I’m reminded of that when I see reviews or comments by readers who complain about too little action or not enough battles or too many meal scenes.

But there’s more to “being there” than just language or description of mundane events. As a former Navy search and rescue pilot, I can’t help but recall the description of Naval Aviation that instructors brought up more than a few times – “ninety-nine percent routine boredom and one percent pure terror.” I also held a variety of fairly senior staff positions in national politics over nearly twenty years, from the Nixon Watergate years through the Reagan years and some of the first Bush presidency. There were some tense moments there, about which the less said in this or any other public forum, the better, but the bottom line was the same. Pulse-pounding, heart-stopping action is rare, as are political tension and true drama… and both usually arise because someone has screwed up the “being there” and routine parts of life [as we’re now seeing in the U.S. political arena at present].

That’s another reason why I write the way I do: I like showing just how that can happen, and how disaster so often comes as a result of carelessness, thoughtlessness, lack of understanding, or incompetence in dealing with the routine. Seeing the protagonist fix the disaster, of course, is what most readers enjoy, but I’ve found in many novels that there’s little detail or creation involved in what causes the disaster, and that often there is problem after problem that, when considered for more than a moment, comes off as improbable.

That’s why it helps to have at least a bit of “being there” – it makes the “getting there” more enjoyable and a deeper read… at least in my view.

The Wrong Healthcare Issue

Right now, the House Republicans are fighting to get enough votes to pass their bill to repeal and replace the Affordable Care Act, aka “Obamacare.” The Democrats are staunchly opposed. Both sides are arguing over the affordability of healthcare and access to healthcare insurance.

As far as I can see, they’re both circling the wrong tree, chasing each other’s tails. Insurance is only a symptom of the greater problem, and trying to deal with symptoms is not only expensive but also postpones dealing with the real problem, which continues to worsen. That problem? Healthcare costs. People need insurance because healthcare costs in the U.S. are effectively the highest in the world, and the vast majority of Americans don’t get healthcare as good as that of nations spending far less on it.

In 2015, U.S. health care costs were $3.2 trillion, making healthcare one of the largest U.S. industries, at nearly eighteen percent of Gross Domestic Product; fifty-five years ago, healthcare comprised only five percent of GDP.

Part of the reason for the cost increase is emergency room treatment, the most expensive single aspect of current healthcare, making up one-third of all health care costs in America. And a significant proportion of emergency room care occurs because people can’t get or afford other treatment for various reasons.

Another component of rising costs is the continuing increase in the costs of drugs and medical devices. According to Forbes, the healthcare technology industry was the most profitable U.S. industry sector of all in 2015, notching an average profit margin of 21%, with the most profitable company of all being Gilead Sciences, with a 53% profit margin. And no wonder, given that the list prices for the top twenty best-selling drugs in the U.S. average more than twice what the same drugs cost in the E.U. or Canada.

While the pharmaceutical industry pleads high research and development costs, a GlobalData study showed that the ten largest pharmaceutical companies in the world in 2013 spent a total of $86.9 billion on sales and marketing, as opposed to $35.5 billion on research and development, almost two and a half times as much on marketing as R&D. Those ten companies had an average profit margin of 19.4%, ranging individually from 10% to 43%, with half making 20% or more. And since Medicare is prohibited by law from negotiating drug prices for its 55 million beneficiaries, the program must pay whatever price drug makers set.
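The quoted figures are easy to sanity-check with a few lines of arithmetic, taking them exactly as given in the text:

```python
# Sanity checks on the figures quoted above, taken as given in the text.

# Ten largest pharma companies, 2013 (the GlobalData study cited above):
sales_marketing = 86.9   # $ billions
research_dev = 35.5      # $ billions
print(f"marketing vs. R&D: {sales_marketing / research_dev:.2f}x")  # ~2.45x

# 2015 U.S. healthcare spending against its share of the economy:
health_spend = 3.2       # $ trillions
gdp_share = 0.18         # "nearly eighteen percent" of GDP
print(f"implied GDP: ${health_spend / gdp_share:.1f} trillion")     # ~$17.8T
```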

The U.S. medical technology market exceeds $150 billion a year in sales, and in 2015 the gross profit margin for the medical equipment and supplies industry averaged 12.1%, according to data from CSImarket.com.

Studies of doctors’ compensation show that, over the past twenty years, physician compensation has, in general, increased far less than all other components of healthcare. In fact, annual earnings actually declined for the typical physician between 2000 and 2010, while annual earnings for physician assistants and pharmacists increased at a greater rate. More to the point, as a percentage of total national healthcare costs, U.S. physician wages are small – approximately 9% – among the lowest shares in the developed world.

Hospitals’ costs have increased significantly, but not because they’re making money. A Health Affairs study analyzed hospital income and costs of more than 3,000 hospitals nation-wide and found that fifty-five percent of hospitals lost money on each patient they served in 2013. This does raise the question of whether non-profit hospitals are paying more and more, possibly too much, for high-priced administrators apparently required by the bureaucratic and legal maze generated by the interweaving of private and public medical systems, government regulations, and insurance company requirements. Studies indicate that administrative costs make up twenty to thirty percent of the United States health care bill, far higher than in any other country. American insurers, meanwhile, spent $606 per person on administrative costs, more than twice as much as in any other developed country and more than three times as much as many, according to a study by the Commonwealth Fund.

Then add to that the skyrocketing costs of malpractice insurance and often excessive court judgments in medical tort claims cases. While the amount is subject to dispute, it’s not inconsiderable, and it also adds to costs.

Unfortunately, neither the Affordable Care Act nor any proposed Republican replacement will do anything to deal with what I’ve mentioned, and what I’ve mentioned are only the most obvious causes of ever-increasing health care costs.

Political Messaging

Everyone who follows media in the entire world likely knows that President Trump sends messages via Twitter. What’s been almost lost in the Twitter-storm, and the swirling claims and counter-claims about Russian influence in one form or another, is another, and far more ominous message.

In the United States, indeed anywhere, one form of “reality” is not necessarily what is, but what people believe is so. If people believe that foreign aid comprises twenty percent of federal spending, or that public television and radio constitute five percent of the budget, then for them, that is reality, regardless of the facts. Unfortunately for actual reality, a majority of Trump supporters hold such beliefs, despite hard dollar figures to the contrary. So when Trump’s budget proposes massive cuts to federal programs whose total budgets in reality comprise less than five percent of federal spending, Trump’s followers truly believe that he is trying to make a significant cut in federal spending. Meanwhile, the observations of those who understand the numbers and the federal budget – that such cuts cripple worthwhile programs while not really addressing the actual debt and deficit – are largely ignored or minimized as just being politics as usual.

The problem is that Trump has no interest in confronting reality. His interest is, as is the interest of all promoters and snake-oil salesmen, to sell people on his version of reality, or to affirm their illusory version of reality to increase his own power and image. He also understands, as apparently the mainstream media doesn’t, that repetition turns anything into popular “truth.”

This is something that the mainstream media still doesn’t seem able or willing to confront. It’s one thing to argue about what national spending priorities should be. It’s another to put forth a spending plan designed solely to appease and appeal to one’s supporters, as Trump has, while totally ignoring fiscal reality. Unfortunately, even a sizable fraction of the GOP members of Congress seems unwilling to come to grips with this, and that’s understandable because Trump will turn on “defectors” and because a majority of Republicans also want to believe in Trump’s version of reality.

The media should be pointing out, daily and loudly, that the numbers don’t add up. Have you seen a headline claiming “Trump Budget Based on Lies”? Or “EPA Head, Oil Industry Cause 5,000 Earthquakes”? Or “Trump Buys Off McConnell”? All of those are legitimate headlines, but you haven’t seen them, and you likely won’t, because if they show up, Trump will accuse the outlets that run them of being crooked liars, or the equivalent.

There seems to be a media assumption that people will see the truth on their own. Really? In a nation that requires remote controls for its televisions, ten-second sound bites, and news by Twitter, maximum 140 characters? Add to that the fact that most media publish Trump’s proposals without strong critical analysis and worry more about his criticism than about letting the public know what is occurring.

Establishing “truth” by repetition is currently winning… and all of us are losing.

Taking Credit

There are people who accomplish good or great things, and there are those who take credit for those accomplishments. As most intelligent individuals know, often the person who gets credit isn’t the one who actually did the work. Also, sometimes more than a few individuals take credit for something that was never accomplished or completed.

Over the course of my life I’ve certainly seen a lot of such instances. One of the best things – or the worst – about being a writer is that when a book is published you get the credit – or the criticism. In my case, either, depending upon your point of view, is warranted, because I personally write every word that’s published, except for the few words corrected by my editor. I have been known to borrow/steal ideas from my wife, but the words are my own.

Not all books, however, are necessarily written by the name on the spine. While the original “Ellery Queen” mysteries were written by Frederic Dannay and Manfred Bennington Lee, more than twenty of the later Ellery Queen novels were ghost-written by others, including SF author Jack Vance.

Likewise, particularly in politics and often in business and academia, credit or blame is often taken by or placed on the wrong people. President Herbert Hoover didn’t cause the Great Depression, nor did Franklin Roosevelt end it [although he did make great efforts and did his best to mitigate its effects until the economic recovery caused by WWII kicked in]. Bill Clinton got credit for the economic recovery actually primed by the first President Bush.

Then there are the people who labor long and hard and slowly build something from virtually nothing, such as Fred Adams, who created the now well-known Utah Shakespeare Festival [which was good enough to win a Tony several years ago as the best regional theatre in the U.S.]. His wife Barbara did half the work, but only those who knew Fred and Barbara know that because Fred was not only a great builder, but a great showman. There’s also a well-known fantasy author whose wife contributed to every book, but whose name only appeared on the last few.

In more than a few cases, those who build an organization, a cause, a business just aren’t self-promoters, and often, because of that, others take credit… or the individual never gets credit.

More often than not, someone gets credit, deserved or undeserved, because they’re a good self-promoter, and there’s nothing wrong with that in itself – unless the self-promoter steals the credit from someone else. What is equally wrong is when the rest of us reward the deceptive self-promoters who steal credit from those who actually deserve it.

National Identity and Anger

A recent poll from The Associated Press-NORC Center for Public Affairs Research showed that seventy percent of Americans felt that the country was “losing its identity.” Unfortunately, what the poll also revealed was that Americans couldn’t agree on what were the important components of that “identity.”

Although there are some points of agreement among Democrats, Republicans and independents about certain aspects of what makes up the country’s identity, such as a fair judicial system and rule of law, the freedoms enshrined in the Constitution, and the ability to get good jobs and achieve the American dream, recent political developments make it clear that the consensus on these points is overshadowed by the differences.

Fifty-seven percent of Republicans thought one of the most important parts of the national identity was a Christian belief structure, as opposed to twenty-nine percent of Democrats. On the other hand, sixty-five percent of Democrats thought that the mixing of global cultures in the U.S. was important, compared to thirty-five percent of Republicans.

According to the poll, seventy-four percent of Democrats say that the ability of immigrants to come to the U.S. to escape violence and persecution is very important, as opposed to fifty-five percent of Republicans. Forty-six percent of Republicans agreed the culture of the country’s early European immigrants was very important, versus twenty-five percent of Democrats.

Putting these findings together suggests that, in general, Republicans think the national identity should be based on an enshrined Christian faith and the Anglo-centric patriarchal culture of the first immigrants, while Democrats emphasize a more global culture, welcoming to immigrants and more concerned with the present than the past. Obviously, that’s an oversimplification, but there’s still a basic conflict, almost between the past and the present.

That conflict was definitely revealed in the last election, with the Republicans essentially claiming that the country was turning from its white, European, and totally Christian roots, and that such a turn was destroying and/or diminishing not only the United States, but the position of middle-class white American males.

As both the AP-NORC poll and the Women’s March on Washington [with millions of women in hundreds of cities and towns across the country] showed, this Republican “traditional” society is not endorsed by a significant percentage of the country.

Yet the Founding Fathers attempted to hold together thirteen colonies of very different belief structures, some with the [to me] abhorrent idea that slavery was morally acceptable, and they crafted a government based on shared principles that did not require a specific religious belief, or indeed, any belief in a supreme deity at all. For the time, this was an extraordinarily radical enterprise, so radical that the American Revolution equally merits the title of the Anglo-American Civil War.

So why is there so much disagreement about national identity and national priorities?

The election results and the vitriolic rhetoric from the right reflect, among other things, the fact that there are fewer and fewer well-paid unskilled and semi-skilled jobs; the jobs already lost to out-sourcing and technology – but mainly to technology – removed some eight million largely white men from the middle class. Those men and their families and relatives look back to a past of more secure and prosperous employment and believe that the country has lost its way… and its traditional identity, and they’re angry.

On the other hand, there are over forty million African-Americans in the U.S., and while the Civil War that resulted in their freedom ended over 150 years ago, blacks still face discrimination and other barriers to the rights that whites take for granted. After 150 years they’re angry, and getting angrier, especially given the number of young black males killed and incarcerated, particularly when study after study shows that discrimination still exists and that blacks receive harsher jail sentences than whites do for the same offense… among other things.

Educated women of all ethnicities are angry that they do not receive even close to equal pay for the same jobs as men and that the male-imposed glass ceilings in business, government, and politics still remain largely unbroken.

Since women and minorities are getting more and more vocal, and since minorities are becoming a bigger and bigger share of the American population, I foresee some very “interesting” years ahead, and I’d suggest that the largely white male Congress consider those facts very carefully.

The “Other” Culture

There are several definitions of “culture.” One is the development of microorganisms in an artificial medium. Another is “the refinement of mind, morals, or taste.” A third is “the cultivation of plants or animals.” But there are two other definitions that tend to get overlooked: (1) the specific period or stage in the development of a civilization and (2) the sum total of the attainments and learned behavior patterns of any specific period or group of people regarded as expressing a way of life. The second of those is the definition that tends to get overlooked in government and politics, and yet the problems caused by the “learned behavior patterns” of smaller groups within a society represent one of the principal reasons for societal unrest.

That is largely because quite a few nations, including the United States, are in fact composed of various subcultures. In the U.S., those subcultures, especially those disliked by the majority, are often minimized or denigrated in racial or religious terms. An important point, and one consistently ignored by ideologues, businesses, and particularly politicians, is that “culture,” as exemplified by learned patterns of behavior, trumps “race” or religion. By that I mean that the good or bad traits of a group or subgroup of people have virtually nothing to do with their religion, their skin color, or their ethnicity. What determines how people act is their learned patterns of behavior.

And while religion is definitely a learned behavior, how people of a certain religion act can and does vary enormously from cultural group to cultural group. It also varies over time. Some 500 years ago, good “Christian” countries in Europe were slaughtering each other even more brutally than the Sunni and Shia factions of Islam are now doing. Yes, religion is a critical part of “culture,” but it ranges from being the primary determinant of a culture to being merely one of many factors, and in the history of certain civilizations, the impact of a religion can and has changed the culture drastically.

As I’ve also noted before, likely more than a few times, history is filled with examples of both great and failed societies and nations identified as being predominantly of one race or religion. There have been great empires in all parts of the world [except, so far, Antarctica], and there have been failed societies everywhere in the world, regardless of race or religion.

Certain cultural practices seem to work better than others. Cultures that allow religion to control society tend to stagnate and become ever more brutal. Cultures with great income inequality tend to be more oppressive, and a greater percentage of them seem to have either de jure or de facto polygamy. A good sociologist could likely carry this much farther, but the basic point is that it’s not only morally wrong to claim that a given race or ethnicity or religion is “stupid” or “inferior” (or any number of other pejorative terms); such unthinking “type-casting” also totally misses the point. Culture – not race, genes, skin color, or religion – determines how people behave. More to the point, one can change a toxic culture [although it takes time], and a beneficial culture is always only a cultural change or two away from becoming toxic.

The Threat from Radical Islamic Terrorists

I’m fed up with the propaganda from the White House about the “overwhelming” danger to U.S. citizens from radical Islamic terrorists. Yes, there are radical Islamic terrorists, but here in the United States, radical Islamic terrorists are far less of a threat than home-grown right-wing terrorists. Overseas, that’s another question, which is why so many law-abiding members of the Islamic faith want to get to the U.S. – or did before Donald Trump became President.

Consensus on hard numbers is difficult to come by, and the numbers vary by source, but whatever the source, those numbers suggest that radical Islamic terrorists are not the major threat to Americans – not even close. Other Americans are.

In terms of terrorist attacks in the United States, the numbers are lopsided, to say the least. According to a study by the United States Military Academy’s Combating Terrorism Center, domestic right-wing extremists averaged 337 attacks a year in the U.S. in the decade after 9/11, accounting for 254 fatalities, while, depending on the study and the definitions, a total of between 20 and 24 terrorist attacks in the U.S. were carried out by Islamic radicals, with between 50 and 123 fatalities.

In the last several years, the vast majority of police deaths at the hands of extremists have come from domestic, non-Islamic extremists. An ADL report states that of the forty-five police officers killed by extremists since 2001, ten were killed by left-wing U.S. extremists, thirty-four by right-wing U.S. extremists, and one by a domestic Islamic extremist.

As far as Trump’s proposed travel ban goes, there has not been a single terrorist attack on U.S. soil in the last four decades carried out by citizens of any of the seven countries on Trump’s ban list. Of the 240,000 Americans who have been murdered since the attacks on the Twin Towers in 2001, exactly 123 of those deaths were linked to Muslim-American extremists. In other words, roughly .05 percent of the murders in the United States over a sixteen-year period were carried out in the name of radical Islam. Even figures from the right-wing National Review list only 88 deaths in the U.S. from radical Islamic terrorists since 2009.
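
To spell out the arithmetic behind that percentage [a quick check using the figures above]:

\[
\frac{123}{240{,}000} \approx 0.00051 = 0.051\%
\]

Put another way, roughly one murder in every two thousand over that period was linked to radical Islam.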

Yet at the same time that Trump is citing the danger from radical Islamic terrorists, reports have surfaced that he plans to shut down Homeland Security’s watch list for domestic extremists. Not only that, but bowing to the NRA, he decided to void an Executive Order by former President Obama that would have put people declared to be mentally incompetent by a court on a do-not-buy list for firearms. The NRA argued that mentally incompetent people should have the same right to firearms as anyone else.

And we’re worrying about Islamic terrorists?

Education and the Business Model

More and more state legislators across the country are demanding that education be run in a more business-like fashion. While greater efficiency is likely necessary in many educational systems, running higher education “like a business” is not only counter-productive but likely to create more problems than it purports to solve – and business-like approaches have so far shown little success at the university level.

One of the tools employed by both businesses and educational systems run on the “business model” is to reduce costs by cutting the number of employees and their compensation relative to “output.” In business, this has given us outsourced or highly automated manufacturing and, in retailing, lots and lots of underpaid, part-time employees without benefits. In education, a similar change is occurring, particularly in higher education, where faculties have shifted from being composed primarily of full-time dedicated professors to ones where the majority of teaching faculty are part-time adjuncts, many of them far less qualified or experienced than seasoned full-time faculty. At the same time, administrations spouting the “business model” mantra have burgeoned.

At virtually all public universities, administrative overhead and full-time administrative positions have increased two- to threefold over the past twenty-plus years, while full-time faculty positions have decreased. The exceptions are some smaller state universities that have expanded so massively that the number of full-time faculty positions has actually increased somewhat, even though the percentage of full-time positions has dropped as far as at other state universities, if not farther.

The chief reason for this emphasis on part-time teaching positions is cost. As I’ve noted before, fifty years ago state legislatures supplied the bulk of higher education funding; in 1974, states provided 78% of the cost of educating a student. Total state funding is actually higher today, but because almost four times as many students now attend college, state funding per student averages around 20% of that cost, although it varies widely by state, and in some cases it is closer to 10%.
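
The mechanics are simple [the figures in this illustration are invented to show the effect, not actual appropriations]: if total state funding were to double while enrollment quadrupled, each student’s share of that funding would be cut in half:

\[
\frac{2F}{4N} = \frac{1}{2}\cdot\frac{F}{N}
\]

where F is the earlier total funding and N the earlier enrollment. And since the cost of educating each student has also risen, the state-funded percentage of that cost falls even further.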

For example, until almost 1970, California residents could attend the University of California [Berkeley] tuition-free. Today, tuition and fees for in-state students run around $15,000 a year. If anything, this trend is accelerating: just since 2008, state funding of higher education has dropped by 20% per student.

The response by legislatures, predictably, is to push for more efficiency. Unhappily, that has translated into “get lower costs however you can.” The problem is that the emphasis, no matter what administrators say, becomes turning out the most graduates at the lowest cost. Universities also tend to phase out departments with small enrollments or high costs and to expand departments with large enrollments and low costs, even if students who major in those areas have difficulty getting jobs.

In addition, political pressure, both to “keep” students in school for budgetary reasons and to graduate a higher percentage of them, has inexorably decreased the academic rigor of the majority of publicly funded universities and colleges. This, in turn, has led more and more businesses and other employers to demand graduate degrees or other additional qualifications, which further increases the tuition and debt burden on students. That’s scarcely “economic” on a societal basis, because it pressures students to aim for high-income professions or high-income specialties within a profession rather than for what they’re good at doing and what they love. It has also created an emphasis on paper credentials rather than the ability to do a job. On top of that, it has meant that more highly qualified individuals are avoiding professions such as teaching, library science, music, art, and government civil service, and those professions, especially teaching, are being filled by a greater percentage of less qualified individuals.

The end result of the combination of stingy state legislatures and the “business model” is less rigorous academic standards and watered-down curricula at the majority of public colleges and universities, skyrocketing student debt, a smaller and smaller percentage of highly qualified, excellent, and dedicated full-time professors, and a plethora of overpaid administrators, the majority of whom heap even more administrative requirements on the remaining full-time teaching faculty.

No efficient business actually operates this way, and why higher education gets away with calling what it’s doing “the business model” has baffled me for more than two decades.