Archive for the ‘General’ Category

The Curse of the Visual

The other day, my wife and I were discussing a basic change in music, one represented by three facts: very few of the younger generation can listen to complex music [anything that contains more than five non-repeating bars and more than a simplistic rhythm]; opera, musical theatre, popular music, and even music videos all now require elaborate and often excessive visual effects; and so much music now sounds alike.  This goes beyond music.  An ever-increasing proportion of the youthful population cannot listen to a teacher – or anyone else – for more than a very few minutes before tuning out. Just how, as a society, did we get to that point?

I’d submit that it has occurred as a result of the intersection of two factors.  The first is that sight is the strongest and most rapid of all human senses.  The second is the development of high-level, high-speed visual technology that reinforces and strengthens the dominance of human sight. What people hear, especially human speech, must be taken in, translated, and then essentially reformulated, which takes more time and effort than seeing.  The same process applies to music lyrics, which must be heard and then felt.

All of this excessive reliance on the visual has a far greater downside than most Americans seem able to realize.  There’s now a huge effort to persuade teenagers in particular not to text and drive, for example, but so far, at least, the deaths from texting while driving continue.  The transit authority in Salt Lake has asked the legislature to make “distracted walking” a criminal misdemeanor because of the number of injuries and deaths involving people absorbed in cellphones walking into the path of light rail cars. Almost every school day, my wife has to stop or slow drastically to avoid hitting college students texting as they cross streets, oblivious to traffic.

Although a huge percentage of American teenagers have cellphones or the equivalent, comparatively few of them talk for long periods on them. Instead, they text. While there are text symbols for emotions, those symbols represent what the sender wants them to represent, not necessarily what the sender actually feels… and they make misrepresentation far easier.  Just look at how many teenagers, especially females, have been deceived through the internet and texting by people whom they would have dismissed instantly in person.

The entertainment industry has responded to the change in perception by emphasizing the visual. There are now very few if any overweight singers in opera, musical theatre, or popular music.  Popular music tour shows rely as much, if not more, on elaborate lighting, costumes, and pyrotechnics as on singing. Musical theatre has come to rely more and more on spectacle.  Music is becoming secondary to the visual, and complex lyrics are largely a thing of the past, unless occasionally accompanied by a monotonous beat in rap.

In a sense, even ebooks are a part of this trend – words on a lighted page that can be turned more quickly than a printed page, with speed-skimming the prevalent and preferred way of reading, rather than an appreciation of depth. More and more, I see comments from readers indicating that they don’t understand the innuendoes or the allusions in dialogue.  This isn’t surprising, since fewer and fewer young people can actually express complex thoughts in conversation… or apparently want to, since walking across most college campuses, one sees almost no one talking to those around them; instead they walk, hunched over, texting madly.  In fact, it’s so common that one scientific publication noted a new repetitive motion syndrome – “texting neck.”  It’s just my opinion, but when people are texting so much that it creates an adverse medical condition, it’s healthy neither personally nor societally.

Nor is it good for society when people are more interested in the visual appeal of musicians than in their musical excellence.  Nor is it healthy when fewer and fewer people can and will carry on face-to-face in-depth conversations.

But all those are symptoms of the curse of the visual, of overdosing on sight, if you will, fueled by the high-tech wizards of silicon cities across the world, more interested in the profits reaped from fueling the addiction than in the societal and physiological damage created.


Dramatic Fantasy — The Implications

The author Daniel Foster observed [in Wagner’s Ring Cycle and the Greeks] that an epic poet’s protagonist embodied the virtues and values of an entire society while the protagonists of a lyric poet embodied specific virtues accepted as exemplary traits for an individual. Foster also made the point that lyric poets whose protagonists’ values differed as society changed became less relevant and less widely read, as did those whose referents became less familiar. 

While Foster used the Greek poet Pindar as his historical example, his observation, it seems to me, also applies to novels today.  While some fantasy labeled as epic meets his definition, much of current large-scope fantasy presents values often at variance with the idea of a single unified culture represented so often in traditional epic works, and situations where the individual is pitted against the culture rather than acting as its champion against outsiders.

At the same time, over the past twenty years or so in my intermittent teaching and continual observation, I’ve seen that poets of the first half of the twentieth century have been read less and less, and, more important, when read, are understood less and less.  Part of that loss of understanding certainly lies in the loss of meaning of the references and allusions, because today’s young people are such a culture of the present that the majority of them know very little of the culture of as little as a single generation past, and without an understanding of what those references represent, the poetry loses much of its power. Most contemporary verse appears to appeal to shallow but universal feelings, interestingly enough, even as most novels pit an individual against at least some “universal” societal values. 

This trend in contemporary novels also exemplifies a change in basic societal values in the United States, or at least in the idea that there are some basic societal values that trump individual freedom of action. The belief held by many that the right to bear arms covers any kind of weapon is one example of this turn away from the idea that a society represents certain universals. Instead, we have ideological splintering, in which each segment of society believes that society as a whole should adopt its universals.

According to Foster, the composer Richard Wagner believed that the evolution of the poetic tradition ran from epic forms to lyric and finally to dramatic, where, in the dramatic form, the writer’s protagonists portray an out-and-out struggle against societal norms while still striving to live out individual virtues – in essence, a totally futile struggle because, in the end, without societal standards, there is no society.

I’m most likely overgeneralizing, but it seems to me that we’re seeing this conflict today in what is being published in current fantasy and, to a lesser degree, in science fiction.  One could actually characterize the fascination with zombies as a metaphor – with zombies representing a dead and somehow alien past that the protagonists are struggling against.  Vampires are a bit more ambiguous.  Are they the blood-sucking past drawing life from the vital present? Or are they the misunderstood new future nourished by the past?  Either way, both sub-sub-genres – as well as that of werewolves – represent a dramatic conflict embodying the premise that a society with unified and widely accepted common values is a thing of the past, and this represents a major change in western cultural values, largely among the younger readers… possibly another manifestation of both the generational gap and why the poets of the past no longer speak to the readers of the present.


A Cost of Privilege?

The most disturbing aspect of the latest mass shooting in Aurora, to me, is the fact that, on paper at least, James Holmes was a comparatively privileged young man… as were both of the Columbine High School shooters thirteen years earlier. We’re not talking about poor oppressed minorities, but about young people who grew up in moderately affluent family situations.  Holmes was even an honor student at the University of California, Riverside, yet he couldn’t get a job better than minimum wage.  He then entered a doctoral program in neuroscience at the University of Colorado School of Medicine, where he struggled and eventually dropped out.  Somewhere around that time, he began to buy weapons and ammunition.

So why would a quiet young man from a comparatively privileged background commit such a terrible crime?  I’d submit that one of the key factors was precisely that background.

As I’ve expressed more than a few times, the continual expression of the Lake Wobegon theme [the place where all the children are above average] is not only false, but has been incredibly damaging to the younger generations.  Because they’re not all outstanding.  By definition, only a small percentage can be well above average, and the perks and privileges and jobs are going to go to that small percentage.  Even if a greater number of young people are brighter than their parents – which I doubt, but even if it is so – it doesn’t matter.  The positions at the top are limited.  They are in any society, and more education doesn’t mean better opportunities.  It means that college graduates essentially have the same opportunities as high school graduates had two to three generations earlier.

As noted by Joel I. Klein, the head of the New York City School system in 2010, “In 1950 high school dropouts made up 59% of the United States workforce, with just 8% represented by college graduates. As recently as 2005, these numbers have nearly reversed: 32% of workers have a college degree, while 8% are high school dropouts.”

This change in work-force composition has several ramifications.  First, an undergraduate college degree is likely not going to be the passport to a high paying job that it was in past generations. According to initial reports, that was one of the frustrations expressed by Holmes, that even with a bachelor’s degree in neuroscience, he could only find McDonald’s level jobs.

In addition, the dumbing down of both high school and collegiate undergraduate curricula and requirements has resulted in an entire generation of young people, of whom only a tiny percentage have been truly tested, and who have been told time after time how special they are.  In general, they’ve been shielded from failure and told they’re wonderful. In essence, not until his mid-twenties did Holmes discover that he really wasn’t that special and that the world didn’t care. The fact that our culture also values “personality” over technical and subject matter excellence, no matter what anyone says, adds even more fuel to the fire for those who are bright and socially awkward, as Holmes was said to be.

The pattern manifested by Holmes – and others – is familiar to forensic psychologists.  While not all young people who are alienated, depressed, and angry are violent, it appears that almost universally the violent are alienated, depressed, and angry. In the case of Holmes and the Columbine killers, and Ted Kaczynski, the Unabomber, it is highly likely that a key motivating factor is anger by those from a privileged background who couldn’t deal with failure and wanted to blame others for it. They believed they deserved more, and the fact that they hadn’t gotten what they wanted must have been the fault of others.  Hadn’t everyone told them how special they were?

Now… there will be years of study, and debate and counter-debate, but I’d be very surprised if anyone actually discusses the issues I’ve raised.  After all, how could we go wrong as a society by telling our wonderful children how special they are?


Economic Myths and Half-Truths

Although business/economics has become the foundation of western culture, its practitioners have circulated and tend to believe a number of myths and truisms, many of which are in fact half-truths.

You get what you pay for.

No.  You can’t get what you don’t pay for, but between over-inflated prices of certain goods (ranging from luxury products to certain prescription drugs and other aspects of health care), counterfeits, and cheap knock-offs, you often don’t get what you pay for.

High pay is required to assure competence, especially in upper management.

High pay attracts people who are motivated by money, and higher-than-average pay is required for people with specialties that require long and expensive training, but there’s an upper limit, and this half-truth varies greatly across different segments of society.  More than a few studies have shown that comparatively lower-paid CEOs who are not “personalities” generally out-perform the highest-paid CEOs.  In addition, a significant percentage of the highest-paid money managers actually lose money for their clients over time rather than making it.  Likewise, money-motivated competence varies tremendously across fields.  A professional academic musician generally has more education than any MBA, but makes a fraction of the income of those MBAs in business, and usually at least 30% less than a business professor with an MBA and more like 50% less than a law professor with a J.D.  The same salary differentials apply to other academics teaching “humanities,” as well as most teachers.

Supply and demand always works better than regulation.

This is a half-truth because it’s true as far as it goes, but doesn’t consider the implications, or what is meant by “better.”  Supply and demand is indeed the most “efficient” way to determine the allocation of goods and services, but that efficiency doesn’t take into account other values.  In a total free-market economy, in a famine, those who have money will pay higher prices for food… and will survive.  The poorest would not.  In addition, in a high-tech society, as noted above, even the most sophisticated consumer cannot determine the quality of certain goods, such as drugs, some beverages, even some foods, and therefore may well pay more for goods than “true” supply and demand would require. We’ve seen a similar issue in health care, where the “supply” of certain health care services costs more than many people can afford, which is one [but not the only] reason why tens of millions of Americans cannot afford health care.

The greater the risk, the greater the reward.

It is often true, in the case of dividend-paying stocks and bonds, that higher-risk issues have to pay out more than less risky ones, but this truism breaks down in society at large.  Fire-fighters and police officers certainly face far greater risks than hedge fund managers, but they make a small fraction of the income that financial professionals do.  In professional sports played by both genders, such as basketball or golf, the risks are the same, but the males make more.  Now, this is justified historically by the argument that the demand for watching males is higher, and that can’t be disputed, but it points out that all of these myths/truisms are anything but absolute, even though they’re all too often dragged out as absolutes, especially by business people in pursuit of the bottom line and more of everyone else’s money.

The business model works better.

This half-truth has recently been promoted as the answer to virtually every ill in public institutions, ranging from schools and universities to municipalities, charities, public hospitals, and prisons. And, of course, the question is, again, what is meant by “better.”  In education, the business model has been applied in terms of teacher-pupil ratios or, in higher education, in terms of which disciplines are most “cost-effective.”  Unsurprisingly, the hard sciences and the performing arts are the least cost-effective educational disciplines, because the sciences require expensive equipment and additional laboratory sessions and the performing arts require intensive one-on-one training, especially in vocal music. While good financial management is clearly a necessity in any organization handling significant resources, the bottom line of the business model is to cut unnecessary expenses and services/products which do not cover their costs, and to maximize revenues.  The business imperative is to look out for the business, and to look beyond the business only as necessary to assure its profitability and survival.

Public institutions, by their nature, provide goods and services that society has deemed necessary, even if not “profitable” for the specific institution.  That is why they are public institutions. Public hospitals are mandated to provide health care to people who will never pay their bills.  Schools must handle problem students and disabled students whose education is anything but profitable or cost-effective from the business standpoint.  Fire-fighters will often spend more time and effort putting out a fire than a structure is worth, even when no others are threatened.

So… the next time someone starts spouting these economic “truths,” it wouldn’t hurt to think about just how “true” they are in the case in point, especially if it’s a politician doing the spouting.


Half-Truths

Senator Mike Lee of Utah protested the representation of his position on the television show “Newsroom.” On the show, the lead character, a news anchor, states that Lee is for the repeal of the Fourteenth Amendment.  He also says that Lee has a double-digit lead over Senator Bennett, the most conservative member of the Senate. For those who actually follow politics, the show is only partially correct.  Lee favors repealing only the part of the Fourteenth Amendment that grants citizenship to any child born in the United States to foreign-born parents here illegally, and Bob Bennett never got to a primary election because he didn’t even get 20% of the votes in the Republican state caucus.  The problem the writers of the show faced was that trying to explain what really happened would have lost most of the audience.  So they opted for a simplification that was essentially true to the spirit of the situation, showing Lee’s ultra-conservatism and his appeal to the far-right Republicans, but, factually, it was a misstatement, resulting in a half-truth, if you will.

This whole tempest in a Utah teapot, however, raises a much larger issue.  How does one raise vital issues in a complex world with a twenty-second attention span without either losing the majority in the details or oversimplifying into half-truths that can often be misleading? In the case of Mike Lee, the half-truth is partly incorrect, but not misleading.  He is now in all probability the most right-wing senator serving in the Senate, and if not, so close to it that in political terms it makes little difference.

Although I’ve criticized the opponents of the Affordable Care Act for their misleading statements and half-truths, the fact is that, for all its virtues, its supporters have also engaged in a campaign of half-truths, because the act won’t solve all of the health insurance problems facing the United States.  Even the individual mandate won’t force full coverage.  For those who could afford insurance, the fines imposed for not having coverage are most likely to cost less than insurance would; and as for those who cannot, it’s rather difficult to obtain funds from those who have none.

Senators and U.S. representatives who head to Washington promising to balance the federal budget and get spending under control are spouting half-truths, if not total falsehoods, because no senator and no representative can do that alone.  These days, any successful legislation requires 60% of the Senate and a majority of the House of Representatives, and all the rhetoric in the world won’t change that.

Part of the problem is the complexity of the world in which we live.  As I’ve noted before, we all prefer simple answers and explanations, but most of the problems we face don’t have simple answers.  The tax code, for example, is a complete nightmare of complexity.  Why?  Because straight and simple taxes are often unfair and fall disproportionately on certain individuals or people who live in different places or under differing circumstances. New industries might never develop without certain tax breaks, and so Congress, almost as soon as an income tax was made constitutional, began to amend and change the tax code, both in the interest of “fairness” and in order to encourage and discourage certain behaviors. Those who wanted those changes certainly didn’t tell the “whole truth.”  They said what they hoped would get what they wanted.

In the end, everyone wants the “other guy” to tell the whole truth, but not to tell it themselves, and that hasn’t changed a lot since the dawn of government, and certainly not since the founding of the United States, but too many half-truths result in fundamental misunderstandings and problems in a time of greater complexity and greater ramifications arising from all too many business, political, and technological changes.

That said… will half-truths persist?  Of course. They’ll even multiply, based on the all too human need for a simplicity that doesn’t exist in a modern world.


Happiness

The other day I was reading a report on the results of a psychological survey.  I can’t say that the results shocked me, but they were interesting.  Controlling for all other controllable factors, those people who are the happiest are married, religious, conservative extremists.  The next most happy are married, religious, liberal extremists, but there are a whole lot fewer of them, because very few extreme liberals are also religious.  And needless to say, according to the study, the most unhappy are unmarried, non-religious moderates.

The study didn’t attempt to analyze the reasons behind those findings, but from what I’ve seen, people tend to be the happiest when their lives are the most stable.  Being married, especially for a long time, makes for stability.  So does a philosophical mindset that remains stable and undisturbed by facts contrary to that mindset, and extremists almost never consider that which might upset their beliefs.  Likewise, for the religious, religion provides great stability and comfort.

Now… it’s no secret that the American political process has become polarized, and each of the major parties has come to embrace platforms and issues tending toward the extreme.  Yet, as each election in recent years has come around, there’s been an apparent groundswell of voters saying that the parties don’t represent them, that they’re more middle of the road. I’m beginning to wonder about that.

And there’s another factor involved.  When I was a much younger man, in high school and in college, when young people were asked what they wanted to be, most had fairly concrete ideas, even if many later changed their minds.  They wanted to be pilots, doctors, electricians, even plumbers, and some even wanted to be President. Today, when I or my wife asks college students what they want to be, the single largest response, dwarfing all others, is:  “I want to be happy.”

So does anyone who is sane, I think.  Who really would want to be unhappy [although I’ve known a few people in that category]?  But the problem with that response is twofold.  First, in practical terms, happiness isn’t really a goal; it’s a mindset and a response to what else you’re doing in life.  Second, if happiness does become a goal, what makes it most possible?  Apparently, a mindset that, over time, is incompatible with a representative democratic republic composed of a population with growing economic and ethnic diversity.

Just a few thoughts….


Intellectual Property Piracy: A Few [More] Thoughts

Given a number of reactions to my last blog, as well as the ongoing discussions, I realized, if rather belatedly, that two aspects of the whole question of the piracy of items consisting mainly of intellectual property – movies, books, and music – seem to have been overlooked, or at least greatly minimized.  The first aspect is the fact that people regard items largely embodying intellectual property as fundamentally different from other items of property that are also bought and sold in commerce. Many people, as indicated by a number of comments on this blog, tend to regard the purchase of a piece of music or a book as a permanent license to that music or book, with no requirement to purchase another copy when the first is no longer available or useable.

My wife is a singer and a professor of voice and opera.  She has original, i.e., bought and paid for, sheet music for over 5,000 songs, largely from opera, oratorio, art song, and musical theatre.  Sheet music is expensive.  And at the end of every school year, she has to replace some of that sheet music, some because it’s old and literally falling apart, or otherwise damaged or unreadable, and some because it has “disappeared,” in one way or another.  Voice students who enter competitions must supply one or two [depending on the competition] original copies of the sheet music for each song on their competition entry sheet.  Use of photocopies disqualifies them.  Often the student’s teacher supplies one copy [legally borrowed from the teacher], and the student supplies the other. It doesn’t matter how many times my wife or a student has bought a particular piece of sheet music; copies are not allowed.

Likewise, and perhaps this marks my mindset, there are certain books of which I’ve bought several copies over the years, as previous copies deteriorated or were damaged or lost.  I didn’t feel that I “owned” the right to Roger Zelazny’s Lord of Light when the dachshund dragged it off the lowest shelf of the bookcase and used it as a chew toy.  I bought another.

When we buy a pastry from a fancy bakery, we don’t think that we should get all subsequent versions of that pastry for free… or reduced rates.  So why do people think that books or music are different?  Goods in trade are goods in trade.  Admittedly, probably the only reason other goods aren’t pirated is because there’s either no way for an individual to make a copy or the cost of making it would exceed the cost of buying it.

What’s different about ebooks, music, and movies is that we as a society have reached the point where these items can be copied cheaply and almost undetectably.  Because the cost of copying has become so cheap, at least some people, and that “some” becomes tens of millions in the aggregate, equate the cost of duplication to the value of the item.

One commenter justified making pirate copies of DVDs because his videos were damaged in a move, by events beyond his control.  Well… by that token, shouldn’t all of us be able to pirate or get knock-off copies of anything that’s been damaged or stolen [and I’m not talking about insurance, because you pay for insurance]?  Another claimed that the movie industry was flooding the market, and that publishers were doing the same, but in point of fact there are fewer movies released annually today than there were in the years between 1930 and 1960, and in F&SF, the number of books released by major and specialty publishers has remained fairly constant for about ten years.  What that commenter was essentially saying was that people can’t afford all that is out there, and because the quality is uneven, they pirate. Since when is not being able to afford goods, or finding their quality uneven, a valid excuse to pirate?

Several other commenters made the point that the drop-off in paperback sales is due to high prices.  That’s frankly bullshit… or a cop-out, if not both. I went back and looked at the paperback prices of my fantasy novels in 1995, before the big drop-off in paperback sales began. Paperback versions of my fantasy novels were then priced at $6.99.  My latest paperback novels are priced at $7.99.  Over this period U.S. inflation, as measured by the CPI, has been just about exactly 50%, while the price of paperbacks has gone up only 14.3%. In real purchasing power, paperback books [at least mine] now cost less than they did 17 years ago. Pricing shouldn’t be an excuse. Now, it may very well be that many would-be readers today don’t want to pay as much as either they or their predecessors once did… but please… it’s not the price “increases.”  And by the same token, an ebook priced at $7.99 today would “cost” roughly $5.33 in 1995 dollars.
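The inflation arithmetic above is easy to check. A minimal back-of-the-envelope sketch, assuming the rounded 50% cumulative CPI figure the paragraph cites (the exact official CPI ratio would shift the cents slightly):

```python
# Back-of-the-envelope check of the paperback pricing argument.
# Assumption: cumulative U.S. CPI inflation of roughly 50% between 1995
# and the time of writing, as stated above.

price_1995 = 6.99    # paperback price in 1995
price_today = 7.99   # current paperback price
inflation = 0.50     # assumed cumulative CPI inflation since 1995

# Nominal price increase over the period
nominal_increase = (price_today - price_1995) / price_1995
print(f"Nominal increase: {nominal_increase:.1%}")  # -> 14.3%

# Today's price expressed in 1995 dollars
real_price_today = price_today / (1 + inflation)
print(f"${price_today:.2f} today is about ${real_price_today:.2f} in 1995 dollars")

# What the 1995 price would be if it had merely tracked inflation
tracked = price_1995 * (1 + inflation)
print(f"A 1995 price of ${price_1995:.2f} would be about ${tracked:.2f} today")
```

A 14.3% nominal rise against roughly 50% inflation means the real price fell by about a quarter, which is the point of the paragraph.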

Another possibility for the drop-off is simply that reading skills have declined. Studies show that a greater percentage of the population has difficulty concentrating on long reading passages, and if reading is a chore, then it’s not enjoyable, and those people won’t read as much. Reading also takes a lot longer to pay off in satisfaction… and we have become a more “instant” society.  So… in terms of price, people may well not wish to pay as much for a book as they once did… but it’s not because the books are more expensive; it’s because people wish to pay less, and that’s not the same thing… and, again, justifying piracy or theft because the price is more than one wants to pay is in fact intellectually dishonest, not to mention illegal.

Finally, the second, and equally disturbing, aspect of intellectual property piracy is that it effectively devalues the creators of such property, particularly authors, in comparison to other occupations in society – not because the worth of those creations has changed, but because the ease of pirating them has increased manyfold. Twenty years ago, most F&SF books sold 30,000–100,000 copies in paperback, if not more.  Today, the same authors, and authors of the same level of popularity and ability, are selling only 10,000–40,000, and their ebook sales make up only a fraction of the difference.  Is the product worth that much less?  I don’t think so, but I’ve seen author after author vanish as their sales decreased below the level of profitability.  I’ve mentioned this on and off for the past four years, and people nod and agree, but paperback book sales have plummeted, hardcover sales have declined, and ebook sales have not made up the difference.

Buying a book equals unlimited free lifetime copies?  Not until I get free unlimited lifetime pizzas for purchasing one pizza.


High Tech Dishonesty

I hate to suggest it, but there’s more and more evidence out there that either high technology users are more dishonest than the rest of the population or high technology has a greater attraction for the dishonest… if not both. The June 20th edition of Scientific American reports on the results of a study on movie piracy, which found that the movies with the highest percentage of piracy are science fiction and high-tech thrillers, and that the annual cost of such piracy in just those genres exceeds a quarter of a billion dollars.

There are literally scores of bit-torrent sites advertising my books for free, at times even before the first hardcover release – so many sites that it would take almost all my time just to contact them and demand they stop making the books available.  And frankly, I don’t care what a handful of authors say about the wide-spread dissemination of their work resulting in greater sales of their newer works.  In point of fact, most authors have suffered significant losses from internet piracy.  An admittedly unscientific survey of such sites also indicates, as with movies, that a significantly disproportionate number of titles fall into the F&SF area.

Part of this, I’m convinced, is that high-tech-oriented people are, in general, less patient.  They want it NOW.  Many of them have little patience for the quirks and foibles of marketing, or for the reality that some people in any field, including bookselling, are not as competent as they could be; nor are these individuals particularly understanding of what goes into producing information.  I’ve even seen gripes that ebooks are being priced higher than bargain or remaindered versions of hardcovers.  Alas, I also know authors, some of long standing in the field, who fail to understand this and go around spouting the wonders of the internet without comprehending the costs to themselves and to the field.

Then, there’s the “information wants to be free” group, who, as I’ve discussed before, all too often really just mean, “I don’t want to pay for any information.”  Sometimes, this is disguised under the idea, as with ebooks, that because the marginal cost of transmitting and disseminating the information is so low, the prices charged for information [books] are too high. In this regard, I’d like to point out one small matter.  I manage to write a little over two books a year, and it takes me roughly five months to write a fantasy novel. How would any of you who “justify” using torrents or other illegal sources to get my work for nothing feel if someone took the product of five months of your labor without paying anything?  And that doesn’t count the services of cover artists, proofreaders, or editors.

Now, again, I must stress, I am NOT against technology. I am against its abuse. As part of this same trend, internet scams and other high-tech-enabled crimes have skyrocketed over the past decade, so much so that no enforcement authority really has any idea just how prevalent they are.  There are only estimates, some possibly accurate. I must get 20-30 such scam attempts daily, most but not all trapped by my spam filters.  And the behavior and business ethics, or lack of same, of internet and tech entrepreneurs such as Bezos and Zuckerberg doesn’t do much to present a case for high-minded behavior in the tech arena, either.

Much of this, I realize, is simply that high tech offers greater opportunities for everything, and dishonesty is part of those opportunities. The second part is that, because of the impersonality of high tech, particularly the internet, it becomes easier for those inclined to cut corners or be dishonest to rationalize their behavior: authors make lots of money; they won’t miss the sale of a few books; anyone who’s stupid enough to fall for the phishing scheme deserves to lose their money; the entertainment moguls charge too much for movies – and so it goes. It’s still rationalizing dishonesty, and it’s anything but a healthy direction for society, and it particularly distresses me to learn that a disproportionate amount of it comes from the F&SF-oriented sector.

 

Flash Culture?

A few weeks ago I came across an article in a magazine that I thought had at least a vestige of culture and sophistication.  The article claimed that rapper Kanye West was an “American Mozart,” and I didn’t bother to finish it. Now, I will admit that I’ve heard perhaps two songs, if that, by Kanye West, and I don’t care for rap, because every rap song I’ve tried to listen to comes across as essentially hip and violent, with a monotonous driving beat.  I do know that the man has designed special Nike shoes that sell for something like $245 a pair.  But really, jumped-up sneakers for $245?

The follow-up is that the latest edition of that magazine contained a letter to the editor objecting to the characterization of West as an “American Mozart,” to which the writer of the article replied, in effect, that West was indeed that, since he was appealing to the culture of today, just as Mozart had appealed to that of Vienna in the late eighteenth century.

After I pulled my jaw back in place, I thought about the whole thing. To begin with, Mozart was never an eighteenth century “pop star,” even in just Vienna, or even just in the court of Emperor Josef.  According to compilations I’ve seen, four other composers had more performances of their works, and to greater acclaim and popularity.  The “pop music” hero of the time was more likely to be Salieri, not Mozart.

So I wouldn’t have objected nearly so much if the writer had characterized Kanye West as an “American Salieri” – popular here and now, and then likely to be forgotten, because his work is essentially forgettable – not necessarily for lack of talent [although I will leave that judgment to others], but because the very form in which he works, like the popular works of Salieri and other popular composers of that time, lacks the breadth, depth, and sweeping sophistication of a Mozart or a Beethoven, or even of a Liszt [who was both a classical and popular sensation of the nineteenth century].

And what’s the point of this comparison?  It’s not a niggling about Kanye West, but a reflection of a far larger concern – that we are fast becoming a “flash” culture with little understanding of what is transitory and what may be permanent, and even less knowledge or understanding of our past, historical or artistic or technical. I understand that the person in the street, if you will, might not understand the historical nuances and references, but to me, it’s disturbing that a writer featured in a magazine which prides itself on reporting on “culture” apparently has neither that knowledge nor that understanding.

This lack of understanding, unhappily, goes well beyond culture.  According to surveys taken by the American Revolution Center, sixty percent of Americans could identify the number of children of reality TV couple Jon and Kate Gosselin, but more than a third could not say in what century the American Revolution took place. More Americans know the names of Michael Jackson’s hit songs than know that the Bill of Rights is part of the U.S. Constitution. A shocking 70% don’t even know what the Constitution actually is. Only 20% of Americans understand the principle of the scientific method.  More than 40% believe that antibiotics are effective against viruses.  Forty percent believe dinosaurs existed at the same time as human beings, and forty-five percent don’t know how long it takes the earth to orbit the sun.

But ask them about pop songs, and they know… so long as they’re current. Most college freshmen in a popular music course at my wife’s university didn’t know who Frank Sinatra or Judy Garland were.

Welcome to the world of the flash culture.

 

Messages/Facebook

For reasons I won’t go into, I do not have a personal Facebook page.  Nor will I join LinkedIn or any other social network or media. I have so far been able to respond to all emails, as well as any inquiries posted on the “Questions for the Author” section of the site — provided, of course, that a valid email address is provided.  I cannot and will not respond through Facebook or social media, however, and, since I’ve recently received some messages which can only be replied to by Facebook, I thought I should make this clear.

Technology and the Tool-User

Modern technology is a wonder.  There’s really no doubt about that.  We can manipulate images on screens. We can scan the body to determine what might be causing an illness.  We can talk to people anywhere in the world and even see their images as they respond.  We can produce tens of millions of cars and other transport devices so that we aren’t limited by how far our legs or those of an animal can take us.  We can see images of stars billions of light years away.

But… technology has a price.  In fact, it has several different kinds of prices.  Some are upfront and obvious, such as the prices we pay to purchase all the new and varied products of technology, from computers and cell phones to items as mundane as vacuum cleaners and toaster ovens. Others are less direct, such as the various forms of pollution and emissions from the factories that produce those items or the need for disposal and/or recycling of worn-out or discarded items.  Another indirect cost is that, as the demand for various products increases, often the supply of certain ingredients becomes limited, and that limitation increases the prices of other goods using the same ingredients.

But there’s another and far less obvious price to modern technology.  That less obvious price is that not only do people shape technology, but technology shapes and modifies people.  This has worried people throughout history. The invention of writing probably had some pundits claiming it would destroy memory skills, and certainly this issue was raised when the invention of the printing press made mass production of books possible.  In terms of the impact on most human beings, however, books and printing really didn’t change the way most people perceived the world to a significant degree. They did raise the level of knowledge worldwide to the point where at least the educated individuals in most countries possessed similar information, and they did result in a massive increase in literacy, which eventually eroded the power of theological and ruling elites, particularly in western societies… but the internal impact upon an individual’s perception was far smaller than the doomsayers prophesied.

Now, however, with the invention of the internet, search engines, and all-purpose cellphones providing real-time, instant access to information, I’m already seeing significant differences in the mental attitudes of young people and the potential for what I’d term widespread knowledgeable ignorance.

While generations of students have bemoaned the need to learn and memorize certain facts, formulae, processes, and history, the unfortunate truth is that some such memorization is required for an individual to become a thinking, educated individual.  And in certain professions, that deeply embedded, memorized, and internalized knowledge is absolutely necessary.  A surgeon needs to know anatomy inside and out.  Now, some will say that computerized surgeons will eventually handle most operations. Perhaps… but who will program them?  Who will monitor them? Pilots need to know things like the critical stall speeds of their aircraft and the characteristics of flight immediately preceding a potential stall, as well as how to recover; there isn’t time to look those up, and trying to follow spoken directions for an unfamiliar procedure is a formula for disaster.

In every skilled profession, to apply additional knowledge and to progress requires a solid internalized knowledge base.  Unfortunately, in this instant-access-to-information society more and more young people no longer have the interest/skills/ability to learn and retain knowledge. One of the ways that people analyze situations is through pattern-recognition, but you can’t recognize how patterns differ if you can’t remember old patterns because you never learned them.

Another variation of this showed up in the recent financial meltdowns: the idea that new technology and ideas always trump the old.  As one veteran of the financial world observed, market meltdowns don’t happen often, perhaps once a generation, and the Wall Street “whiz kids” were too young to have experienced the last one and too contemptuous of the older types whose experience and cautions they ignored… and the reactions of all the high-speed computerized trading just made it worse.

A noted scholar at a leading school of music observed privately several months ago that the school was now getting brilliant students who had difficulty memorizing – and in some cases could not learn to memorize – their roles for opera productions. In this electronic world, they’d never acquired the skill.  And in opera, as well as in live theatre, if you can’t memorize the music and the words… you can’t perform.  It’s that simple.  The university has been in existence for over a century… and never before has this problem come up.

And what happens when all knowledge is of the moment, and electronic – and can be rewritten and revised to suit the present?  When memory is less trusted than the electronic here and now? You think this is impossible?  When Jeff Bezos has stated, in effect, that Amazon’s goal is to destroy all print publications and replace them all with electronic formats? And when the U.S. Department of Justice is his unwitting dupe?

But then, who will remember that, anyway?

Solutions and Optimism

Believe it or not, I really am a cheerful and optimistic sort, but the reaction to some of my latest blogs brings up several points that bear repeating, although some of my readers clearly don’t need the reminders, because their comments show understanding.  First, a writer is not just what he or she writes. Second, critical assessment, particularly if it’s accurate, of an institution or a societal practice is not always “negative.”  Third, solutions aren’t solutions until and unless they can be implemented.

Readers can be strange creatures, even stranger than authors, at times.  I know an author who writes about the experiences of a white trash zombie.  She’s a very warm person and not at all either white trash or a zombie.  And most readers have no problem understanding that.  Yet, all too often, some readers have great difficulty understanding that just because a writer accurately portrays a character with whose acts or motivations they disagree, it doesn’t necessarily mean the character represents the author.  I’ll admit that some of my characters do embody certain experiences of mine – especially those who are pilots of some sort or involved in government – but that still doesn’t mean that they’re me.  Likewise, just because I point out what I see as problems in society doesn’t mean that I’m a depressed misanthrope.

As I and others have said, often, the first step to solving a problem is recognizing it exists. On a societal level, this is anything but easy. Successful societies are always conservative and slow to change, but societies that don’t change are doomed.  The basic question for any society is how much and how fast to change, and the secondary questions are whether a change is necessary or inevitable… or beneficial, because not all change is for the best.

One of the lasting lessons I learned in my years in Washington, D.C., is that there is usually more than one potential and technically workable solution to most problems.  At times, there are several. Very, very occasionally, there is only one, and even then there is the possibility of choosing not to address the problem.  And every single solution to a governmental problem has negative ramifications for someone or some group, so that addressing any problem incorporates a decision as to who benefits and who suffers. Seldom is there ever an easy or simple solution.  And, of course, as voters we don’t get to choose that solution; we only get to vote for those who will, and often our choice isn’t the one who gets elected.

For that reason, my suggested course of action is almost never to vote for any politician who promises a simple or easy solution.  If two candidates promising simple solutions are running, vote for the one who incites less anger or whose solution is “less simple.”

This electoral emphasis on simplicity has always been present in American politics, but in the past, once the campaign was over, politicians weren’t so iron-clad and didn’t always insist on a single simple answer/solution. I saw the beginning of the change in the late 1970s, and it intensified in the Reagan Administration. For example, when I was at the Environmental Protection Agency, there was a large group of people who were totally opposed to hazardous waste landfills or incinerators – anywhere.  Along the same lines, to this day we don’t have a permanent repository for spent nuclear fuel.  I’m sorry, but in a high-tech society with nuclear power plants, you need both.  The waste isn’t going away, and the products we use and consume generate those wastes.  Right now there is NO technology that can produce high-tech electronics without creating such wastes, and to make matters worse, the cleaner the technology, the more expensive it is, which is why a lot of electronic gear isn’t manufactured in the USA.  Likewise, the immigration problem won’t go away so long as the United States offers the hope of a better life for millions of people.  We can’t effectively seal the borders.  Nor can we deport all illegal aliens, not without becoming a police state along the lines of Nazi Germany or Stalinist Russia. There are no simple solutions that are workable.  Period.

The current legislative gridlock in Washington, D.C., reflects the iron-clad insistence by each party, and especially, I’m sad to say, the Republicans, that their “solution” is the only correct one.  It’s not a solution if roughly half the people in the country, or half the elected representatives [or a minority large enough to block legislation], oppose it, because it’s not going to get adopted, no matter what its backers claim for it.  In practice, in our society, any workable solution requires compromise.  When compromise fails, as it did over the issue of slavery, the result can only be violence in some form. Unhappily, as I’ve said before, the willingness to work out compromise solutions has declined. In fact, politicians willing to compromise are being branded as traitors.  So are politicians who try to forge alliances across party lines.  So… my suggested solution is to vote for officials who are open to compromise and vigorously oppose those who claim that compromise is “evil” or wrong, or un-Democratic, or un-Republican.  No… it’s not a glamorous and world-shaking solution. But it’s the only way we have left to break the logjam in government.  Until lots of people stop looking for absolute and simple solutions and start agitating for the politicians to work together and hammer things out… they won’t.  Because the message given to every politician out there right now has been that compromise kills political careers.

So we can all stick to our hard and fast principles – and guns, if it comes to that – and watch nothing happen until everything falls apart… or we can reject absolutist politics and get on with the messy business of politics in a representative democratic republic.

 

Older and Depressed?

The other day one of my readers asked, “Is there anything positive you can talk about or have you slid too far down the slope of elder grouchiness and discontent?”  That’s a good question in one respect, because I do believe that there is a definite tendency, if one is intelligent and perceptive, to become more cynical as one gains experience.

Psychological studies have shown, however, that people who suffer depression are far more accurate in their assessments of situations than optimists are, and that may be why optimism evolved – because it would be too damned hard to operate and get things done if we weighed things realistically.  For example, studies also show that entrepreneurs and people who start their own businesses invariably overestimate their chances of success and vastly underestimate their chances of failure.  This, of course, makes sense, because why would anyone open a business they thought would fail?

There’s also another factor in play. I spent nearly twenty years in Washington, D.C., as part of the national political scene, and after less than ten years I could clearly see certain patterns repeat themselves time after time: newly elected politicians and their staffs making the same mistakes their predecessors did and, over the longer term, each political party gaining power in response to the abuses of its predecessor, abusing that power in turn, and trying to hold on by any means possible, only to fail… and then the party newly in power immediately beginning to abuse its power… and so on. It’s a bit difficult not to express a certain amount of “grouchiness and discontent,” especially when you offer advice based on experience and have it disregarded because the newcomers “know better”… and then watch them make the same kinds of mistakes as others did before them.  My wife has seen the same patterns in academia, with new faculty and new provosts re-inventing what amounts to a square wheel time after time.

It’s been said that human knowledge is as old as written records, but human wisdom is no older than the oldest living human being, and, from what I’ve seen, while a comparative handful of humans can learn from others, most can’t or won’t.  And, if I’m being honest, I have to admit that for the early part of my life I had to make mistakes to learn, and I made plenty. I still make them, but I’d like to think I make fewer, and the ones I make are in areas where I don’t have the experience of others to guide or warn me.

The other aspect of “senior grouchiness,” if you will, is understanding that success in almost all fields is not created by doing something positively spectacular, but by building on the past and avoiding as many mistakes as possible. Even the most world-changing innovations, after the initial spark or idea, require following those steps.

I’m still an optimist at heart, and in personal actions, and in my writing, but, frankly, I do get tired of people who won’t think, won’t learn, and fall back on the simplistic in a culture that has become fantastically complex, both in terms of levels and classes of personal interactions and in terms of its technological and financial systems. At the same time, the kind of simplicity that such individuals fall back on is the “bad” and dogmatic kind, such as fanatically fundamentalist religious beliefs and “do it my way or else,” as opposed to open and simple precepts such as “be kind” or “always try to do the right thing.”  I’m not so certain that a great portion of the world’s evils can’t be traced to one group or another trying to force their way – the “right way,” of course – upon others.  The distinction between using government to prohibit truly evil behavior, such as murder, abuse of any individual, theft, embezzlement, fraud, assault, and the like, and forcing adherence to what amounts to theological beliefs was a hard-fought battle that took centuries to work itself out, first in English law and later in the U.S. Constitution and legal system.  So when I see “reformers” – and they exist on the left and the right – trying to undermine the distinction represented by the idea of separation of church and state [although it goes far beyond that], I do tend to get grouchy and offer what may seem depressing comments.

This, too, has historical precedents.  Socrates complained about the youth and their turning away from Athenian values… but within a century or so Athens was prostrate, and the Athenians never did recover a preeminent position in the world. Cicero and others made the same sort of comments about the Roman Republic, and within decades the republic was gone, replaced by an even more autocratic empire.

So… try not to get too upset over my observations. After all, if more people avoided the mistakes I and others who have learned from experience point out, we’d all have more reasons to be optimistic.

 

The Republican Party

Has the Republican Party in the United States lost its collective “mind,” or is it a totally new political party clinging to a traditional name – whose traditions and the policies of its past leaders it has continually and consistently repudiated over the past four years?

Why do I ask this question?

Consider first the policies and positions of the Republican leaders of the past.  Theodore Roosevelt pushed anti-trust actions against monopolistic corporations, believed in conservation, and greatly expanded the national park system. Dwight D. Eisenhower, general of the armies and president, warned against the excessive influence of the military-industrial complex and created the federal interstate highway system.  Barry Goldwater, Mr. Conservative himself, was pro-choice and felt women should decide their own reproductive future.  Richard Nixon, certainly no bastion of liberalism, espoused universal health insurance, tried to get it considered by Congress, and founded the Environmental Protection Agency.  Ronald Reagan, cited time and time again by conservatives, believed in collective bargaining, was actually a union president, and raised taxes more times than he cut them.  The first President Bush promised not to raise taxes, but had the courage to take back his words when he realized taxes needed to be increased.

Yet every single one of these acts and positions has now been declared anathema by Republicans running for President and for the U.S. House of Representatives and the Senate.  In effect, none of these past Republican leaders would “qualify” as true card-carrying Republicans according to those who now compose or lead the Republican Party.  A few days ago, former Florida governor and Republican Jeb Bush made a statement to the effect that even his father, the first President Bush, wouldn’t be able to get anything passed by the present Congress.

President Obama is being attacked viciously by Republicans for his health care legislation, legislation similar to that signed and implemented by Mitt Romney as governor of Massachusetts and similar in principle to that proposed by Richard Nixon.

Now… I understand that people change their views and beliefs over time, but it’s clear that what the Republican Party has become is an organization endorsing what amounts almost to an American version of fascism – appealing to theocratic fundamentalism, backed by a corporatist coalition, and claiming to free people from excessive government by underfunding or dismantling all the institutions of government that were designed to protect people from the abuses of those with position and power.  Destroy unions so that corporations and governments can pay people less.  Hamstring environmental protection in the name of preserving jobs so that corporations don’t have to spend as much on environmental emissions controls. Keep taxes low on those making the most.  Allow those with wealth to spend unlimited amounts on electioneering, albeit in the name of “issues education,” while keeping the names of contributors hidden or semi-hidden.  Restrict women’s reproductive freedoms in the name of free exercise of religion. Keep health care insurance tied to employment, thus restricting the ability of employees to change jobs.  Allow consumers who bought too much housing to walk away from their liabilities through bankruptcy or short sales (including the honorable junior Senator from Utah), but make sure that every last penny of private student loan debt is collected – even if the students are deceased.

The United States is a representative democratic republic, and if those calling themselves Republicans wish to follow the beliefs and practices now being spouted, that’s their choice… and it’s also the choice of those who choose to vote for them.

But for all their appeal to “Republican traditions,” what they espouse and propose is neither Republican nor traditional in the historic sense.  But then, for all their talk of courage and doing the hard jobs that must be done, they haven’t done the first of those jobs, and that’s to be honest and point out that they really aren’t Republicans, and they certainly aren’t traditional conservatives, no matter what they claim.

The Derivative Society?

Once upon a time, banking and investment banking were far less complex than they are today.  In ancient times, i.e., when I took basic economics more than fifty years ago, banks used the deposits of their customers to lend to other customers, paying less to their depositors than they charged those to whom they made loans.  Their loans were limited by their deposits, and banks were required to retain a certain percentage of their assets in, if you will, real dollars.  Even investment banks had some fairly fixed rules, and in both cases what was classed as an asset had to be just that: generally either real property, something close to blue-chip securities, municipal, state, or federal notes or bonds, or cash. With the creeping deregulatory legislation that reached its apex in the 1990s, almost anything could, with appropriate laundering – otherwise known as derivative creation – be classed as someone’s asset.
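For those who never took that basic economics course, the reserve requirement I mentioned can be captured in one line of arithmetic. This is my own illustrative sketch of the classic textbook model, not anything from a regulator’s rulebook: each deposited dollar can be partially re-lent and redeposited, and the reserve ratio caps the total as a geometric series.

```python
def max_system_lending(initial_deposit: float, reserve_ratio: float) -> float:
    """Total new loans a banking system can extend when every bank must
    hold `reserve_ratio` of its deposits in reserve.

    A deposit D allows D*(1 - r) to be lent; that loan is redeposited,
    allowing D*(1 - r)**2 more, and so on.  The geometric series sums
    to D * (1 - r) / r.
    """
    if not 0 < reserve_ratio <= 1:
        raise ValueError("reserve ratio must be in (0, 1]")
    return initial_deposit * (1 - reserve_ratio) / reserve_ratio

# With a 10% reserve requirement, a $1,000 deposit supports at most
# $9,000 in new loans across the whole system.
print(max_system_lending(1000.0, 0.10))
```

The point of the sketch is simply that the old rules put a hard ceiling on how much credit could be conjured from a given base of real deposits; once almost anything could be counted as an asset, that ceiling effectively vanished.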

And we all know where that led.

And for all the furor about derivatives, and the finger-pointing, something else, it seems to me, has gone largely unnoticed.  The fact is that our entire society, especially in the United States, has become obsessed with derivatives in so many ways.

What are McDonald’s, Wendy’s, Burger King, Applebee’s, Olive Garden, Red Lobster, Chili’s, and endless other restaurant chains, fast-food and otherwise, but derivatives?  What happened to unique local restaurants?  The ones with good inexpensive food often became chains, deriving their success from the original.  The others, except for a few handfuls, failed.  Every year, it seems, another big-name chef starts a restaurant franchise, hoping to derive success and profit from a hopefully original concept [which is becoming less and less the case].

Department stores used to be unique to each city.  I grew up in Denver, and we had Daniels & Fisher, with its special clock tower, the Denver Dry Goods [“The Denver”], and Neustaeder’s.  Then the May Company took over D&F, and before long all the department stores were generic. In Louisville, where my wife was raised, there were Bacon’s, Kaufmann’s, Byck’s, Selman’s, and Stewart’s. Not a single name remains.

Even Broadway, especially in musical theatre, has gone big for remakes and derivatives. Most of the new musicals appear to be remakes of movies, certainly derivative, or re-dos of older musicals. Every time there is a new twist on TV programming the derivatives proliferate.  How many different “Law and Order” versions are there?  Or CSI?  How many spin-offs from the “American Idol” concept?  How many “Reality TV” shows are there?  Derivative after derivative… and that proliferation seems to be increasing. Even “Snow White” has become a derivative property now.

In the field of fantasy and science fiction writing, the derivatives were a bit slower in taking off, although there were more than a few early attempts at derivatives based on Tolkien.  But then, somewhere after Fred Saberhagen came up with an original derivative of the Dracula mythos, vampires hit the big time, followed by werewolves, and more vampires, and then zombies.  Along the way, we’ve had steampunk – a derivative of a time that never was – fantasy derivatives based on Jane Austen, and more names than I could possibly list, and now, after the “Twilight” derivatives, we have a raft of others.

Now… I understand, possibly more than most, that all writing and literature derives from its predecessors, but there’s a huge difference between, say, a work like Mary Robinette Kowal’s Shades of Milk and Honey, which uses the ambiance of a Regency-type culture and setting in introducing a new kind of fantasy [which Kowal does], and a derivative rip-off such as Pride and Prejudice and Zombies or Emma and the Werewolves.  When Roger Zelazny wrote Creatures of Light and Darkness or Lord of Light, he derived something new from the old myths.  In a sense, T.S. Eliot did the same in The Waste Land, as did Yeats in “No Second Troy.”  On the other hand, I don’t see that in John Scalzi’s Redshirts, which appears to me as a derivative capitalization on Star Trek nostalgia.

How about a bit more originality and a lot fewer “literary” derivatives?  Or have too many writers succumbed to the lure of fast bucks from cheap derivatives? Or have too many readers become too lazy to sort out the difference between whole-cloth rip-off derivatives and thoughtful new treatments of eternal human themes?

 

Coincidences?

We’ve all been there, I think: on the telephone discussing something important to us, or with someone important to us, and no one else is home… when the doorbell rings, or another call comes through from someone equally important, or both at once.  Now, it doesn’t matter that no one has called or rung the doorbell for the previous two hours and no one will for another hour or two.  What is it about the universe that ensures that, in so many cases, too many things occur at the same time?

I’m not talking about events which aren’t random but can be predicted, like the political calls that occur from five or six in the evening until eight o’clock, or the charitable solicitations timed the same way [both conveniently exempted from the do-not-call listing]. I’m talking about calls and callers and events that should be random, but clearly aren’t.  Sometimes, it’s merely amusing, as when daughters located on different coasts call at the same time.  Sometimes, it’s not, as when you’re trying to explain why you need the heating fixed now, and your editor calls wanting an immediate answer on something… or you’re discussing scheduling long-distance with your wife and you ignore the 800 call that you later find out was an automated call, without ID, informing you that your six A.M. flight the next morning has been cancelled… and you don’t find out until three A.M. when you check your email before leaving for the airport… and end up driving an extra 60 miles to the other airport. There’s also the fact that, no matter what time of the afternoon it is, there’s a 10-20% chance that, whenever I’m talking to my editor, either FedEx, UPS, or DHL will appear at the door [upstairs from my office] needing a signature… and we don’t get that many packages [except from my publisher], and I spend less than half an hour a week on the phone with my editor.

I know I’m not alone in this.  Too many people have recounted similar stories, but the logical types explain it all away by saying that we only remember the times these things happen, but not the times that they don’t.  Maybe… but my caller I.D. gives the times for every incoming call, and when I say that there haven’t been any calls for two or three hours, and then I get three in three minutes… it doesn’t lie – not unless there’s a far grander conspiracy out there than I even wish to consider.  And why is it that I almost always get calls in the ten minutes or so a day when I’m using the “facilities”?  No calls at all in the half hour before or after, of course.

This can extend into other areas – like supermarket checkout lines. The most improbable events occur in all too many cases in whatever line I pick.  The juice packet of the shopper in front of me explodes all over the conveyor belt.  The checker I have is the only one not legally able to ring up beer, and the manager is dealing with an irate customer in another line.  The register tape jams.  The credit/debit card machine freezes on the previous customer, just after I’ve put everything on the belt.

Now… to be fair, it sometimes works the other way. There was no possible way I ever could have met my wife.  None [and I won’t go into the details because they’d take twice the words of my longest blog], but it happened, and she’s still, at least occasionally, pointing out that it had to be destiny… or fate.  Well… given how that has turned out, I wouldn’t mind a few more “improbable” favorable coincidences, but… they’re pretty rare.  Then again, if all the small unfavorable improbabilities are the price for her… I’ll put up with them all.

 

The Next Indentured Generation?

The other day I received a blog comment that chilled me all the way through.  No, it wasn’t a threat.  The commenter just questioned why state and federal government should be supporting higher education at all.

On the surface, very much on the surface, it’s a perfectly logical question. At a time of financial difficulty, when almost all states have severe budget constraints, if not enormous deficits, and when the federal deficit is huge, why should the federal government and states be supporting higher education?

The question, I fear, arises out of the current preoccupation with the here and now, and plays into Santayana’s statement about those who fail to learn the lessons of history being doomed to repeat them. So… for those who have mislaid or forgotten a small piece of history, I’d like to point out that, until roughly 1800, there were literally only a few handfuls of colleges and universities in the United States – fewer than 30 for a population of five million people. Most colleges produced far, far fewer graduates annually than the smallest of colleges in the USA do today.  Harvard, for example, averaged fewer than 40 graduates a year.  William & Mary, the second oldest college in the United States, averaged 20 graduates a year prior to 1800.  Although aggregated statistics are unavailable, estimates based on existing figures suggest that less than one half of one percent of the adult population, all male, possessed a college education in 1800, and the vast majority of those graduates came from privileged backgrounds.  Essentially, higher education was reserved for the elites. Although more than a hundred more colleges appeared in the years following 1800, many of those created in the South did not survive the Civil War.

In 1862, Congress created the first land-grant universities, and eventually more than 70 were founded, based on federal land grants, primarily to teach agricultural and other “productive” disciplines, but not to exclude the classics. By 1900, U.S. colleges and universities were producing 25,000 graduates annually, out of a population of 76 million people, meaning that only about one percent of the population, still privileged, held college degrees, a great percentage of these from land-grant universities supported by federal land grants and state funding.  These universities offered college educations with tuition and fees far lower than those charged by most private institutions, and thus afforded the education necessary for those not of the most privileged status.  Even so, by 1940, only five percent of the U.S. population had a college degree.  This changed markedly after World War II, with the passage of the GI Bill, which granted veterans benefits for higher education. Under the conditions which existed after WWII until roughly the early 1970s, talented students could obtain a college degree without incurring excessive debt, and sometimes no debt at all.

As we all know, for various reasons, that has changed dramatically, particularly since state support of state colleges and universities has declined from something close to 60% of costs forty years ago to less than 25% today, and less than 15% in some states.  To cover costs, the tuition and fees at state universities have skyrocketed.  The result? More students are working part-time and even full-time jobs, as well as taking out student loans.  Because many cannot work and study full-time, students take longer to graduate, which increases the total cost of their education. In 2010, 67% of all graduating college seniors carried student loan debts, averaging more than $25,000 per student.  The average debt incurred by a doctor just for medical school is almost $160,000, according to the American Medical Association.

Yet every study available indicates that college graduates make far more over their lifetime than those without college degrees, and those with graduate degrees generally fare even better.  So… students incur massive debts.  In effect, they’ll become part-time higher-paid indentured servants of the financial sector for at least 20 years of their lives.
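The arithmetic behind that twenty-year indenture is easy to sketch with the standard loan amortization formula.  The sketch below is mine, not anything from a lender: the $25,000 principal is the average cited above, while the 6.8% fixed rate and 20-year term are assumptions for illustration [6.8% being a common fixed student-loan rate in recent years].

```python
# A minimal sketch of the standard loan amortization formula:
#   M = P * r * (1 + r)**n / ((1 + r)**n - 1)
# where P is the principal, r the monthly interest rate, and n the
# number of monthly payments.  The 6.8% rate and 20-year term are
# assumptions for illustration, not figures from the post.

def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment on a fully amortizing loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

payment = monthly_payment(25_000, 0.068, 20)   # roughly $190 a month
total_paid = payment * 20 * 12                 # roughly $46,000 over 20 years
```

On those assumptions, the “average” $25,000 debt costs roughly $190 a month for two decades, and the total repaid comes to nearly double the amount borrowed – which is what indenture to the financial sector looks like in practice.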

The amounts incurred are far from inconsequential.  Student debt now exceeds national credit card debt [and some of that credit card debt represents student expenses as well]. The majority of these debts reflect what happens when states cut their support of higher education, and the totals don’t reflect default rates on student loans that are approaching ten percent.

As a result, college graduates and graduates of professional degree programs are falling into two categories – the privileged, who have no debt and can choose a career path without primarily considering the financial implications, and those who must consider how to repay massive debt loads.  And as state support for higher education continues to dwindle, the U.S. risks a higher-tech version of social stratification based on who owes student loans and who doesn’t.

So… should the federal and state governments continue to cut support of higher education? Are such cuts a necessity for the future of the United States?  Really?  Tell that to the students who face the Hobson’s choice of low-paying jobs for life or student loan payments for life.  Or should fewer students attend college?  But… if that’s the case, won’t that just restrict education to those who can afford it, one way or another?

The Tax Question

These days an overwhelming number of political figures, especially conservatives and Republicans, continue to protest taxes and insist that they should be lowered and that federal income taxes, at the very least, should be left at the lower levels set during the administration of the second President Bush. Although many conservatives protest that taxes are being used for “liberal” social engineering, the fact is that there are so many “special provisions” embodied in the tax code that such “engineering” includes provisions purporting to help groups ranging from the very poorest to the very wealthiest.  In addition, much of the complexity of the tax code arises from generations of efforts to make it “fairer.”

For all that rhetoric, the basic purpose of taxes is to pay for those functions of government that the elected representatives of past and present voters have deemed necessary through the passage of federal laws and subsequent appropriations.  Or, as put by the late and distinguished Supreme Court Justice Oliver Wendell Holmes, Jr., “Taxes are what we pay for a civilized society.”

Grumbling about taxation has been an American preoccupation since at least the 1700s, when the American colonists protested the British Stamp Act and later the tax on imported British tea.  In the case of the tea tax, the colonists were paying more for smuggled tea than for fully taxed British tea, which has always made me wonder about the economic rationality of the Boston Tea Party, and who really was behind it… and for what reason, since it certainly wasn’t about the price of British tea.

Likewise, my suspicion is that the current furor about taxes, and federal income taxes in particular, may not really be about taxes themselves, but about a host of factors associated with them, most of which are rooted in the well-documented “loss aversion” traits of human beings.  Put simply, most of us react far more strongly to events or acts which threaten to take things from us than to those which offer opportunities, and in a time when most people see few chances for economic improvement, loss-aversion behavior naturally becomes stronger.  And most people see higher taxes, deferred Social Security retirement ages, and higher Medicare premiums as definite losses, which they are.

What’s most interesting about this today is that the leaders of the conservative movements and the Republican party are generally from that segment of society which has benefited the most in the past twenty years from the comparative redistribution of wealth to the uppermost segment of American society, and yet they are appealing to those members of society who feel they have lost the most through this redistribution – the once more highly paid blue-collar workers in the old automotive industries and other heavy manufacturing areas of the U.S. economy.  The problem with this appeal is not that it will not work – it definitely will, especially if economic conditions do not improve – but that the policies espoused by the “keep taxes low/cut taxes” conservatives won’t do anything positive to benefit the vast majority of those to whom these conservatives are appealing.  They will, of course, greatly benefit the wealthy, but the comparative lack of federal and state revenues is already hurting education, despite the fact that conservatives and liberals alike agree that improved education is vital for today’s and tomorrow’s students if they are to prosper both economically and occupationally.  The lack of money for transportation infrastructure will only hamper future economic growth, as will the lack of funding to rebuild and modernize our outdated air traffic control system and a number of other aging and/or outdated infrastructure systems.

The larger problem is, of course, that the conservatives don’t want government to spend money on anything, and especially not anything new, while the liberals have yet to come up with a plan for anything workably positive… and, under those circumstances, it’s very possible that “loss aversion” politics, and the anti-taxation mood, will dominate the political debates of the next six months… which, in the end, likely won’t benefit anyone.

 

Cleverness?

Over the years, every so often, I’ve gotten a letter or review about one of my books that essentially complains about the ruthless nature of a protagonist who is supposed to be a good person.  These often question why he or she couldn’t have done something less drastic or resolved the situation in a more clever fashion.  The other day, after seeing a review of Imager’s Intrigue and then receiving an email from another writer who was disappointed that Quaeryt couldn’t be more “clever” in his resolution of matters and less reliant upon force, I realized exactly what my grandmother had meant by one of her favorite expressions.  She was always saying that some businessman or politician was “too clever by half.”

So, I believe, are some writers.  I try not to be excessively clever, because it’s highly unrealistic in the real world, but it’s difficult when there’s an unspoken but very clear pressure for authors to be “clever.”  My problem is that I’m moderately experienced in how the “real world” operates, and seldom is a “clever” solution to anything significant or of major import a truly workable solution. As I and numerous historians have pointed out, in WWII, with a few exceptions, the Germans had far more “clever” and advanced technology.  They lost to the massive application of adequate technology.  In Vietnam, the high-tech and clever United States was stalemated by the combination of wide-scale guerrilla warfare and political opposition within the USA.  Despite the application of some of the most sophisticated and effective military technology ever deployed, the U.S. will be fortunate to “break even” in its recent military operations in the Middle East… and given the costs already and the loss of lives for what so far appear to be negligible gains, it could be argued that we’ve lost.  I could cite all too many examples in the business world where “clever” and “best” lost out to cheaper and inferior products backed by massive advertising.  The same sorts of situations are even more prevalent in politics.

“Clever,” in fact, is generally highly unrealistic as a solution to most large scale real-world problems.  But why?

Because most problems are, at their base, people problems, it takes massive resources to change the course of human inertia/perceived self-interest. That’s why both political parties in the United States mobilize billions of dollars in campaign funds… because that’s what it takes, since most people have become more and more skeptical of any cleverness that doesn’t fit their preconceptions…  partly because they’re also skeptical of the “clever” solutions proposed by politicians.  It’s why most advertising campaigns have become low-level, not very clever, saturation efforts.  Military campaigns that involve national belief structures and not just limited and clearly defined tactical goals also require massive commitments of resources – and clever just gets squashed if it stands in the way of such effectively deployed resources.

That’s why, for example, in Imager’s Intrigue, Rhenn’s solutions are “clever” only in the sense that they apply massive power/political pressure to key political/military/social vulnerabilities of his opponents.  Nothing less will do the job.

I’m not saying that “clever” doesn’t work in some situations, because it does, but those situations are almost always those where the objectives are limited and the stakes are not nearly so high.  That makes “clever” far more suited to mysteries, spy stories, and some thrillers than to military situations where real or perceived national interests or survival are at stake.

 

The Ratings-Mad Society

The other day, at WalMart, where I do my grocery shopping, since, like it or not, it’s the best grocery store within 60 miles, the check-out clerk informed me that, if I went to the site listed on my receipt and rated my latest visit, I’d be eligible for a drawing for a $5,000 WalMart gift card.  The next day, at Home Depot, I had a similar experience. That doesn’t include the endless ratings on Amazon, B&N, and scores of other retailers, not to mention YouTube, Rate My Professors, and the student evaluations required every semester at virtually every college or university. Nor does it include the plethora of reality television shows based on various combinations of “ratings.”

It’s getting so that everything is being rated, either on a numerical scale from one to five or on one from one to ten.  Have we gone mad?  Or is it just me?

Ratings are based on opinions.  Opinions are, for the overwhelming majority of people, based on their personal likes and dislikes… but ratings are presented for the most part as a measurement of excellence.

Yet different people value different things. My books are an example. I write for people who think and like depth in their fiction… and most readers who like non-stop action aren’t going to read many of my books, and probably won’t like them… and those are the ones who give my books one star with words like “boring”… or “terminally slow.”  By the same token, readers who like deep or thoughtful books may well rate some of the fast-action books as “shallow” [which they are by the nature of their structure] or “improbably constructed” [which is also true, because any extended fast-action sequence just doesn’t happen often, if ever, in real life, and that includes war].

Certainly, some of the rationale behind using ratings is based on the so-called wisdom of crowds, the idea that a consensus opinion about something is more accurate than a handful of expert opinions.  This has proven true… but with two caveats – the “crowd” sampled has to have general knowledge of the subject and the subject has to be one that can be objectively quantified.

The problem with rating so many of the things being rated is that for some – such as music, literature, cinema, etc. – technical excellence has little bearing on popularity, and often what “the crowd” rates are aspects having nothing to do with the core subject, such as appearance, apparel, and appeal in the case of music, or special effects in the case of cinema.
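Averages conceal disagreement in a way that’s easy to illustrate.  In this made-up sketch [the numbers are hypothetical, not from any real product], two sets of five-star ratings produce the identical average, even though one audience is polarized and the other is uniformly lukewarm:

```python
from statistics import mean, stdev

# Hypothetical five-star ratings: same average, very different receptions.
polarizing = [5, 5, 5, 1, 1, 1]   # loved by some readers, hated by others
lukewarm   = [3, 3, 3, 3, 3, 3]   # nobody strongly moved either way

print(mean(polarizing), mean(lukewarm))   # identical averages: 3 and 3
print(stdev(polarizing))                  # large spread the average hides
print(stdev(lukewarm))                    # zero spread
```

The single headline number can’t distinguish a book half its readers would rate five stars from one that nobody loves or hates – which is exactly the kind of information a reader deciding what to buy would actually want.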

Thus, broad-scale ratings conceal as much as they reveal… if not more.  Yet everyone with a product is out there chasing some sort of rating. Obviously, those with a product want a high rating to enhance its salability.  But why do people rely so much on ratings?  Is it because people can’t think?  Or because they’re so inundated with trivia that they can’t find the information or the time they need to make a decision?  Or because the opinion of others means more than their own feelings?

Whatever the reason, it seems to me that, in the quest for high ratings, the Dr. Jekyll idea of applying the wisdom of the crowd has been transformed into the Mr. Hyde insanity of the madness of the mob.