
The Instant Disaster Society?

Last Thursday, the stock market took its biggest intraday point drop in history, somewhere slightly over a thousand points, as measured by the Dow Jones Industrial Average.  While the market recovered sixty to seventy percent of that drop before the close Thursday, the financial damage across the world was not inconsiderable.  Did this happen because Greece is still close to a financial meltdown, or because economic indicators were weak?   No… while the leading cause or precipitating factor may have been a typographical error – a trader reportedly entered a sell order for $16 BILLION of exchange futures instead of a mere $16 million – there are a number of other possibilities, but the bottom line [literally] was that, whatever the cause, all the automated and computerized trading engines immediately reacted – and the market plummeted.  Later, the NASDAQ canceled a number of trades, but that was long after the damage had been done.

From the Terminator movies onward, there have been horror stories about computers unleashing doomsday, but the vast majority of these have concerned nuclear and military scenarios – not world economic collapse.  While I don’t fall into the “watch out for those evil computers” camp, I have always been, and remain, greatly concerned about the growth and uses of so-called “expert systems” in all areas of society, largely because computers are the perfect servants – they do exactly what their programming tells them to do, even if the result will be disastrous.

For example, Toyota is now having all sorts of problems with runaway acceleration.  When this first occurred, my question was simple enough:  Why didn’t the drivers either shift into neutral or turn off the ignition?  Apparently, it turns out, at least some of them may not have been able to, at least not quickly, because they had keyless ignition systems.  Yet the automakers are talking about cars that will be not only keyless but also totally electronic – that is, even the shifting will be electronic rather than physical/manual.  And if the electronics malfunction, exactly how will a driver be able to quickly “kill” the system?  Let’s think that one over for a bit.

President Obama and the health care reformers want all medical records to be electronically available, both for cost-saving purposes and for ease of access.  The problem with that kind of ease of access is that it also offers greater ease of hacking and tampering, and, I’m sorry, no system that offers the kind of ease the “reformers” are proposing can be made hacker-proof.  The access and security requirements are mutually antithetical.  Years ago, Sandra Bullock starred in a movie called “The Net,” and while many of the computer references are outdated and almost laughable, one aspect of the movie was not and remains all too plausibly real.  At least two characters die because their medical records are hacked and changed.  In addition, national databases are manipulated and identities switched.  Now… the computer experts will say that these sorts of things can be guarded against… and they can be, but will they?  Security costs money, and good security costs a lot of money, and people use computers to cut costs, not to increase them.

As far as economics go, now that an “accident” has shown just how vulnerable securities markets are to inadvertent manipulation, how long before some terrorist or other extremist group figures out how to duplicate the effect?  And then all the programmed trading computers will blindly execute their trades… and we’ll get an even bigger disaster.

Why?

Because we’ve become an instant-reaction society, and electronic systems magnify the effect of either system glitches or human error.  Those programmed securities trading computers were designed to take advantage of market fluctuations on a micro- if not nanosecond basis.  For better or worse, they make decisions faster than any human trader could possibly make them – and they do so based on data that may or may not be accurate.
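To make the mechanism concrete, here’s a minimal sketch – in Python, with invented numbers and a deliberately naive trigger rule, not any exchange’s or trading firm’s actual algorithm – of how engines that react to price alone can turn one oversized order into a cascade:

    # A deliberately naive illustration of cascading programmed selling.
    # All numbers and the trigger rule are invented for illustration.

    def run_cascade(start_price, num_traders, trigger=0.02, impact=0.005):
        """Each automated trader sells once the price falls `trigger` (2%)
        below the starting reference; every sale pushes the price down
        another `impact` (0.5%), which can trip the next trader."""
        price = start_price * 0.975   # one oversized "fat-finger" sell: -2.5%
        armed = num_traders           # engines that haven't sold yet
        sold = True
        while sold and armed:
            sold = False
            if price <= start_price * (1 - trigger):
                armed -= 1            # one more engine dumps its position...
                price *= 1 - impact   # ...pushing the price lower still
                sold = True
        return price

    final = run_cascade(start_price=100.0, num_traders=20)
    print(f"price after cascade: {final:.2f}")   # roughly 88 – a 12% drop

No engine in that loop ever asks whether the initial order made sense; each one simply does what its inputs dictate – which is precisely the problem.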

We’re seeing the same thing across society.  Today’s young people are being trained to react, rather than to think.  Instead of letters or even email, they use Twitter.  Instead of bridge or old-fashioned board games like Risk or Diplomacy, they prefer fast-acting, instant-reaction videogames with a premium on speed.  More and more of the younger generation cannot form or express complex concepts, even as technology is taking us into an ever more complex world.  Business places a greater and greater emphasis on short-term gain and profits.  People want instant satisfaction.

The societal response to the increase in speed across society is to use computers and electronic systems to a greater and greater extent – but, as happened last Thursday, what happens when one’s faithful and obedient electronic servants do exactly what their inputs dictate that they’re supposed to do – and the result is disaster?

Do we really want – and can our society survive – a world where a few high-speed mistakes can destroy more than a trillion dollars’ worth of assets in seconds… or do even worse damage than that?  Not to mention one where thinking is passé, or left to the old fogies of an earlier generation, and where all that matters is instant [and shallow] communication and short-term results that may well lead to long-term disaster.

Stupid Questions/Bureaucratic Catch-22s

A few weeks ago, the Canadian science fiction writer Peter Watts was convicted of “assaulting” U.S. border guards because he failed to heed instructions to remain in his car when he was pulled over for a search at a border crossing.  Although the guards’ testimony that Watts had physically assaulted them was refuted, Watts was found guilty because, under the law, failure to follow instructions constituted “assault” – even though the only action he took was to be stupid enough to get out of his car when he was told not to.  While he was fined and given a suspended sentence, as a now-convicted felon Watts will henceforth be denied entry to the United States, and, if he were careless enough to sneak in and were discovered, he’d be in far more serious trouble.  While more than a few readers and supporters were outraged at Watts’s treatment, Watts and others were even more outraged at a law that classes “failure to obey” the same as assault.

Unfortunately, this sort of legal trickery and legerdemain has a long and less than honorable history in the United States, and probably elsewhere in the world.  The American justice establishment has found a number of indirect ways to place people in custody and otherwise convict and sentence them.  Perhaps the best-known example was the conviction of the gangster Al Capone, not for the murders, fraud, and mayhem he perpetrated, but for, of all things, income tax evasion.

In 1940, Congress passed, and the president signed, the Alien Registration Act, otherwise known as the Smith Act, which made illegal, among other things, either membership in any organization that advocated the violent overthrow of the U.S. government or even helping anyone who belonged to such an organization.  In effect, that meant the government could legally prosecute anyone who had ever been a member of the Communist Party, or anyone who had ever helped a member of that party with any party-related activities, no matter how trivial.  Initially, the Act was used only against those who had actually been involved in such activities, but in the late 1940s, the FBI, Senator Joe McCarthy, and the House Committee on Un-American Activities charged thousands of Americans with violating the provisions of the Smith Act.  If someone admitted helping another who had belonged to the Communist Party, they could theoretically spend up to 20 years in jail.  If they denied it and proof was found otherwise, they were guilty of perjury and could also go to jail.  Eventually, the Supreme Court declared many of the more far-reaching interpretations and prosecutions under the law unconstitutional, but not before hundreds of people had been sent to jail or had their lives and livelihoods destroyed, either directly or indirectly, for what often amounted to association with friends and business associates.

Flash to the present.  According to the Salt Lake Tribune, the U.S. Customs and Border Protection Form No. 1651-0111 asks the following questions:

Have you ever been or are you now involved in espionage or sabotage, or in terrorist activities, or genocide, or between 1933 and 1945, were involved in persecutions associated with Nazi Germany or its allies?

Are you seeking entry to engage in criminal or immoral activities?

Now… it’s a safe bet that no one will ever check the “yes” box following either one of these questions, and many people will ask why the government bothers with asking such stupid questions.

The government knows no one will ever admit to either set of acts or intentions.  But… if anyone is ever caught doing something even immoral, not necessarily illegal, and the prosecutors can’t come up with as much evidence as they’d like to lock that someone away, they can dig out the handy-dandy form and charge the “entrant” in question with perjury, etc.  It’s effectively a form of after-the-fact bureaucratic insurance.

Personally, I can’t say that it exactly reinforces my confidence in American law enforcement’s ability to find and prosecute the worst offenders when every immigrant who ever shoplifted or visited an escort service could be locked away.  But then, they did lock up Big Al, even if they couldn’t prove a thing against him on the worst crimes he ordered or committed.  So… maybe I shouldn’t complain.  Still… Peter Watts is now a felon for what amounts to stupidity, or at the least a lack of common sense, although he never threatened anyone or lifted a hand against either guard.

Conservative Suicide/Stupidity?

As many of you know, I live in Utah, and as most of you may not, I was the Legislative Director for William Armstrong, one of the most conservative congressmen and senators of his time, as well as the staff director for Ken Kramer, his successor in the House – also one of the most conservative congressmen, not to mention being Director of Legislation and Congressional Relations for the U.S. EPA during the first Reagan administration.  These days, however, even as a registered Republican, I seldom vote for Republicans, and what follows may explain one of the reasons why.

Utah’s two U.S. senators are Bob Bennett and Orrin Hatch, both conservative Republicans, and according to the various political ratings, they’re among the most conservative in the Senate.  BUT… they’re not “perfect,” with Bennett receiving “only” an 84% rating and Hatch only an 88% rating from the ultra-conservative American Conservative Union.  According to recent polls, over 70% of the GOP delegates to the Utah state Republican convention believe that both Hatch and Bennett should be replaced because they’re not conservative enough.  Bennett is up for re-election and probably will not even win his party’s nomination.  He might not even survive this week’s party convention.

Now… although I certainly don’t believe in or support many of their policies and votes, I can see where others might… and might wish for all their votes to follow “conservative” principles – but to throw out a three-term conservative incumbent over such ratings?  Does it really make any sense?

No… it doesn’t, and that’s not because I’m a great fan of either senator.  I’m not.  But here’s why replacing Bennett – or Hatch – is totally against the so-called conservatives’ own best interests.

First, the ratings are based on “political litmus test” votes, often on issues that indicate ideology and don’t represent votes on bills that actually might make a difference.  Second, the “difference” between Bob Bennett’s 84% rating and a perfect 100% rating represents all of four votes taken over the entire year of 2009 [which implies the ACU scored only about twenty-five votes in all].  Third, seniority in the Senate represents power.  It determines who chairs or who is the ranking minority member on every committee and subcommittee, and that helps determine not only what legislation is considered, but when it’s considered, and what’s actually included in it.  The Senate is an extremely complex body, and it takes years even to truly understand its workings.  To toss out an incumbent who is predominantly conservative, but not “perfectly” conservative, in favor of a challenger who may not even win the election – and who, if he does, will have little knowledge of the Senate and less power – is not an act of conscience, but one of stupidity.  Fourth, no matter how conservative [or how liberal] a senator is, each senator is restricted by the rules of the body to voting on what is presented.  In the vast, vast majority of cases, that means that the vote of an “imperfect” conservative can be no different from that of a “perfect” conservative.

I can certainly see, and have no problem with, conservatives targeting a senator who seldom or never votes in what they perceive as their interest, but to remove a sitting senator with power and influence who votes “your way” 80-90% of the time in favor of someone who may not win the election, and who will have little understanding or power if he does… that, I have to say, is less than rational.

In the interests of fairness, I will point out that the left wing of the Democratic Party is also guilty of the same sort of insane quest for ideological purity, and that the majority of Americans are fed up with these sorts of extremist shenanigans.  But in the current political climate, where most Americans are also disgusted with Congress, they may well vote to throw whoever’s in office right out of office… along with Bob Bennett.  And then, next year, when legislative matters are even worse from their point of view, they’ll be even angrier… even though almost none of the voters will admit that everyone wants more from government, in one way or another, than anyone wants to pay for – except for those on the extreme, extreme right, who want no government at all… and that’s a recipe for anarchy in a world as technologically and politically complex as ours.

Reality or Perception?

The growth of high-technology, particularly in the area of electronics, entertainment, and communications, is giving a new meaning to the question of what is “real.”  As part of that question, there’s also the issue of how on-line/perceptual requirements are both influencing and simultaneously diverging from physical world requirements.

One of the most obvious impacts of the instant communications capabilities embodied in cell-phones, netbooks, laptops, and desktops is the proliferation of emails and text messages.  As I’ve noted before, there’s a significant downside to this in terms of real-world productivity because, more and more, workers at all levels are being required to provide status reports and replies on an almost continual basis.  This constant diversion encourages so-called “multitasking,” which studies show actually takes more time and creates more errors than handling tasks sequentially – as if anyone in today’s information society is ever allowed to handle tasks sequentially and efficiently.

In addition, anyone who has the nerve or the foolhardiness to point this out, or to refrain from texting and on-line social networking, is considered out of touch, anti-technology, and clearly non-productive because of his or her refusal to “use the latest technology,” even if their physical productivity far exceeds that of the “well-connected.”  No matter that the individual has a cellphone and laptop with full internet interconnectivity and can use them to obtain real physical results, often faster than those who are immersed in social connectivity, such individuals are “dinosaurs.”

In addition, the temptations of the electronic world are such, and have created enough concern, that some companies have tried to take steps to limit what on-line activities are possible on corporate nets.

The real physical dangers of this interconnectivity are minimized, if not overlooked.  There have been a number of fatalities, even here in Utah, when individuals locked into various forms of electronic reality, from iPods to cellphones, have stepped in front of traffic and trains, totally unaware of their physical surroundings.  Given the growing intensity of the “electronic world,” I can’t help but believe such fatalities will increase.

Yet, in another sense, the electronic world is also entering the physical world.  For example, thousands and thousands of young Asian men and women labor at various on-line games to amass virtual goods that they can effectively trade for physical-world currency and goods.  And it works the other way.  There have already been murders over what happened in “virtual reality” communities.

The allure of electronic worlds and connections is so strong that hundreds of thousands, if not millions, of students and other young people walk past those with whom they take classes and even work, ignoring their physical presence, for an electronic linkage that might have seemed ephemeral to an earlier generation, but whose allure is far stronger than physical reality…

Does this divergence between the physical reality and requirements of society and the perceptual “reality” and perceived requirements of society herald a “new age,” or the singularity, as some have called it, or is it the beginning of the erosion of culture and society?

Important Beyond the Words

Despite all the “emphasis” on improving education and upon assessment testing in primary and secondary schools, education is anything but improving in the United States… and there’s a very good reason why.  Politicians, educators, and everyday parents have forgotten one of the most special attributes that makes us human and that lies behind our success as a species – language, in particular, written language.

An ever-increasing percentage of younger Americans, well over a majority of those under twenty, cannot write a coherent paragraph, nor can they synthesize complex written information, either verbally or in writing, despite all the testing and all the supposed emphasis on “education.”  So far, this has not proved to be an obvious detriment to U.S. science, business, and culture, but that is because society, any society, has always been controlled by a minority.  The past strength of U.S. society has been that it allowed a far greater percentage of “have-nots” to rise into that minority, and that rise was enabled by an educational system that emphasized reading, writing, and arithmetic – the three “Rs.”

While mastery of more than those three basics is necessary for success in a higher-technology society, ignoring absolute mastery of those subjects for the sake of knowledge in others is a formula for societal collapse, because success will then be limited to those whose parents can obtain for their children an education that does require mastery of those fundamentals, particularly writing.  And because in each generation there are those who will not or cannot truly master such basics, either through lack of ability or lack of dedication, the number of those able to control society will become ever more limited, and a greater and greater percentage of society’s assets will be controlled by fewer and fewer, who, as their numbers dwindle, find their abilities also diminishing.  In time, if such a trend is not changed, social unrest builds and usually results in revolution.  We’re already seeing this in the United States, particularly in dramatically increased income inequality, but everyone seems to focus on the symptoms rather than the cause.

Why writing, you might ask.  Is that just because I’m a writer, and I think that mastery of my specialty is paramount, just as those in other occupations might feel the same about their area of expertise?  No… it’s because writing is the very foundation upon which complex technological societies rest.

The most important aspect of written language is not that it records what has been spoken, or what has occurred, or that it documents how to build devices, but that it requires a logical construct to be intelligible, let alone useful.  Good writing requires logic, whether in structuring a sentence, a paragraph, or a book.  It requires the ability to synthesize and to create from other information.  In essence, mastering writing requires organizing one’s thoughts and mind.  All the scattered facts and bits of information required by short-answer educational testing are useless unless they can be understood as part of a coherent whole.  That is why the best educational institutions have always required long essay tests, usually under pressure.  In effect, such tests both develop and measure the ability to think.

Yet the societal response to the lack of writing, and thus thinking, ability has been to institute “remedial” writing courses at the college entry level.  This is worse than useless, and a waste of time and resources.  Basic linguistic and writing ability, as I have noted before, is determined roughly by puberty.  If someone cannot write and organize his or her thoughts by then, he or she will effectively always be limited.  If we as a society want to reverse the trend of social and economic polarization, as well as improve the abilities of the younger generations, effective writing skills have to be developed at the primary and early secondary school levels.  Later than that is just too late.  Just as you can’t begin at age eighteen to become a concert violinist, a pianist, or a professional athlete, neither can you wait that long to develop writing and logic skills.

And because, in a very real sense, a civilization is its written language, our inability to address this issue effectively may assure the fall of our culture.

The Failure to Judge… Wisely

In last Sunday’s education supplement to The New York Times, there was a table showing a sampling of U.S. colleges and universities and the distribution of grades “earned” by students, as well as the change from ten years earlier – and in a number of cases, the change from twenty or forty or fifty years ago.  Not surprisingly to me, at virtually every university over 35% of all grades granted were As.  Most were over 40%, and at a number, over half of all grades were As.  This represents roughly a 10% increase over the past ten years, but even more important, it represents a more than doubling, and in some cases a tripling, of the percentage of As given 40-50 years ago.  Are the teachers 2-3 times better?  Are the students?  Let us just say that I have my doubts.

But before anyone goes off and blames the more benighted university professors, let’s look at society as a whole.  Almost a year ago, or perhaps longer, Alex Ross, the music critic for The New Yorker, pointed out that almost every Broadway show now gets a standing ovation, when a standing ovation was relatively rare some fifty years ago.  When I was a grade-schooler, there were exactly four college football bowl games on New Year’s Eve or New Year’s Day, while today there are something like thirty spread over almost four weeks.  Until something like half a century ago, there weren’t any “divisions” in baseball.  The regular season champion of the American League played the regular season champion of the National League.  It’s almost as though we, as a society, can’t accept the judgment of continual success over time.

And have you noticed that every competition for children has almost as many prizes as competitors – or so it seems?  Likewise, there’s tremendous pressure to do away with grades and/or test scores in determining who gets into what college.  And once students are in college, they get to judge their professors on how well they’re being taught – as if any 18-21 year-old truly has a good and full understanding of what they need to learn [admittedly, some professors don’t either, but the students aren’t the ones who should be determining this].  Then we have the global warming debate, where politicians and people with absolutely no knowledge or understanding of the mechanics and physics of climate insist that their views are equal to those of scientists who’ve spent a lifetime studying climate.  And, of course, there are the intelligent design believers and creationists who are using politics to dictate science curricula in schools, based on their beliefs rather than on what can be proven.

And there’s the economy and business and education, where decisions are made essentially on the basis of short-term profit figures, rather than on the longer term… and as a result, as we have seen, the economy, business, and education have all suffered greatly.

I could list page after page of similar examples and instances, but these all point out an inherent flaw in current societies, particularly in western European societies, and especially in U.S. society.  As a society, we’re unwilling or unable, or both, to make intelligent decisions based on facts and experience.

Whether it’s because of political pressure, the threat of litigation, the fear of being declared discriminatory, or the honest but misguided belief that fostering self-esteem before establishing ability creates better students, the fact is that we don’t honestly evaluate our students.  We don’t judge them accurately.  Forty or fifty percent do not deserve As, not when less than thirty percent of college graduates can write a complex paragraph in correct English and follow the logic [or lack of it] in a newspaper editorial.

We clearly don’t judge and hold our economic leaders, or our financial industry leaders, to effective standards, not when we pay them tens, if not hundreds, of millions of dollars to implement financial instruments that nearly destroyed our economy.  We judge those running for political office equally poorly, electing them on their professed beliefs rather than on either their willingness to solve problems for the good of the entire country or their willingness to compromise to resolve problems – despite the fact that no political system can survive for long without compromise.

Nor are we, again as a society, particularly accurate in assessing and rewarding artistic accomplishments, or lack of them, when rap music, American Idol, and “reality” shows draw far more in financial reward and audiences than do old-fashioned theatre, musical theatre [where you had to be able to compose and sing real melodies], opera, and classical music, and when hyped-up graphic novels are the fastest-growing form of “print” fiction.   It’s one thing to enjoy entertainment that’s less than excellent in terms of quality;  it’s another to proclaim it excellent.  The ability to differentiate between popularity and technical and professional excellence is, again, a matter of good judgment.

In fact, “judgment” is becoming the new “discrimination.”  Once, to discriminate meant to choose wisely;  now it means to be horribly biased.  The latest evolution in our current “newspeak” appears to be that to judge wisely on the basis of facts is a form of bias and oppression.  It’s fine to surrender judgment to the marketplace, where dollars alone decide, or to politics, where those who are most successful in pandering for votes decide… but to decide based on solid accomplishment – or the lack thereof, as in the case of students who can’t read or write or think or in the case of financiers who lose trillions of dollars – that’s somehow old-fashioned, biased, or unfair.

Whatever happened to judging wisely?

Jeremiads

Throughout recorded history runs a thread in which an older and often distinguished figure rants about the failures of the young – how they fail to learn the lessons of their forebears, and how this will lead to the downfall of society.  While many cite Plato and his words about the coming failure of Greek youth because they fail to learn music and poetry and thus cannot distinguish between the values of the ancient levels of wisdom ascribed to gold, silver, and bronze, such warnings precede the Greeks and follow them through Cicero and others.  They also occur in cultures other than western European-descended societies.

At the time of such warnings, as in the case of Alcibiades and Socrates, there are generally two reactions, one usually from the young and one usually from the older members of society.  One is: “We’re still here; what’s the problem?  You don’t understand that we’re different.”  The other is: “The young never understand until it’s too late.”

I’ve heard my share of speeches and talks that debunk the words of warning, and generally these “debunkers” point out that Socrates and Cicero and all the others warned everyone, and yet today we live at the peak of human civilization and technology.  And we do… but that’s not the point.

Within a generation of the time of Plato’s reports of Socrates’ warnings, Greece was spiraling down into internecine warfare from which it, as a civilization, never fully recovered.  The same was true of Cicero, but the process was far more prolonged in the case of the Roman Empire, although the Roman Republic, which laid the foundation of the empire, was essentially dead at the time of Cicero’s execution/murder.

The patterns of rise and fall, rise and fall, of cultures and civilizations permeate human history, and so far, no civilization has escaped such a fate, although some have lasted far longer than others.

There’s an American saying that was popular a generation or so ago – “From shirt-sleeves to shirt-sleeves in four generations.”  What it meant was that a man [because it was a society even more male-dominated then] worked hard to build up the foundation for his children, and then the next generation turned that foundation into wealth and success, and the third generation spent the wealth, and those of the fourth generation were impoverished and back in shirt-sleeves.

To build anything requires effort, and concentrated effort requires dedication and expertise in something, which requires concentration and knowledge.  Building also requires saving in some form or another, and that means forgoing consumption and immediate satisfaction.  In societal terms, that requires the “old virtues.”  When consumption and pleasure outweigh those virtues, a society declines, either gradually or precipitously.  Now… some societies, such as that of Great Britain, for years pulled themselves back from the total loss of “virtues.”

But, in the end, the lure of pleasure and consumption has felled, directly or indirectly, every civilization.  The only question appears to be not whether this will happen, but when.

So… don’t be cavalier about those doddering old fogies who predict that the excess of pleasure-seeking and self-interest will doom society.  They’ll be right… sooner or later.

The Continued Postal Service Sell-Out

Once, many, many years ago, I was the legislative director for a U.S. Congressman who served on the Appropriations subcommittee overseeing the U.S. Postal Service.  Trying to make sense out of the Postal Service budget – and their twisted economic rationalizations for their pricing structure – led to two long and frustrating years, and the only reason I didn’t lose my hair earlier than I eventually did was that the USPS comprised only part of my legislative duties.

The latest cry for cuts and service reductions may look economically reasonable, but it’s not, because the USPS has been employing the wrong costing model for over forty years.  That model structures costs first and primarily on first-class mail, treats bulk mail and publications as marginal costs, and sets the rates, especially for bulk mail, based on those marginal costs.

Why is this the wrong model?

First, because it wasn’t what the founding fathers had in mind, and second, because it’s lousy economics.

Let’s go back to the beginning.  Article I, section 8, clause 7 of the U.S. Constitution specifically grants Congress the power “to establish Post Offices and Post roads.”  The idea behind the original Post Office was to further communications and the dissemination of ideas.  There was a debate over whether the Post Office should be allowed to carry newspapers, and a number of later Supreme Court decisions dealt with the limits on postal power, especially with regard to free expression, with the Court declaring, in effect, that the First Amendment trumped the Post Office power to restrict what could be mailed.  During the entire first century after the establishment of the Post Office and even for decades after, the idea behind the Post Office was open communications, particularly of ideas.

The idea of bulk mail wasn’t even something the founding fathers considered; it could be said to have begun with the Montgomery Ward catalogue in the 1870s, although the Post Office didn’t establish lower bulk mail rates until 1928.  As a result, effectively until after World War II, massive direct bulk mailings were comparatively limited, and the majority of Post Office revenues came from first-class mail.  Today, that is no longer true.  Bulk mail makes up the vast majority of the U.S. Postal Service’s deliveries, and yet it’s largely still charged as if it were a marginal cost – and thus the government and first-class mail users are, in effect, subsidizing advertising mail sent to all Americans.  Yet, rather than charging advertisers what it truly costs to ship their products, the USPS is proposing cutting mail deliveries – and the reason they’re talking about cutting Saturday delivery is that – guess what? – it’s the lightest delivery day for bulk mail.
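To see why the costing model matters, here’s a toy calculation – a Python sketch with invented volumes and costs, not actual USPS figures – contrasting marginal-cost pricing with pricing that makes every class of mail share the cost of the network:

    # Toy postal-costing illustration; every number is invented.
    NETWORK_FIXED_COST = 100.0       # the delivery network itself
    MARGINAL_COST_PER_PIECE = 0.05   # extra cost of carrying one more piece

    first_class_volume = 20.0        # first-class is now the minority...
    bulk_volume = 80.0               # ...and bulk mail dominates the stream

    # Current-style model: first-class mail carries the fixed costs,
    # while bulk mail pays only its marginal cost.
    first_class_rate = (NETWORK_FIXED_COST / first_class_volume
                        + MARGINAL_COST_PER_PIECE)
    bulk_rate = MARGINAL_COST_PER_PIECE

    # Fully allocated model: every piece shares the fixed costs.
    shared_rate = (NETWORK_FIXED_COST / (first_class_volume + bulk_volume)
                   + MARGINAL_COST_PER_PIECE)

    print(f"marginal-cost model: first class {first_class_rate:.2f}, "
          f"bulk {bulk_rate:.2f}")
    print(f"fully allocated model: every piece pays {shared_rate:.2f}")

Under the first model, first-class users pay roughly a hundred times the bulk rate for the same network; under the second, the advertisers who generate most of the volume carry most of the cost.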

I don’t know about any of you, but every day we get catalogues from companies we’ve never patronized and never will.  We must throw away close to twenty pounds of unwanted bulk mail paper every week – and we’re paying higher postage costs and sending tax dollars to the USPS to subsidize even more of what we don’t want.

Wouldn’t it just be better to charge the advertisers what it really costs to maintain an establishment that’s to their benefit?  Or has the direct mail industry so captured the Postal Service and the Congress that the rest of us will suffer rather than let this industry pay the true costs of the bulk mail designed to increase their bottom line at our expense?

Being A Realist

Every so often, I come head-to-head with an unsettling fact – being a “realistic” novelist hurts my sales and sometimes even upsets my editors.  What do I mean?   Well… after nearly twenty years as an economist, analyst, administrator, and political appointee in Washington, I know that all too many novelistic twists and events, such as those portrayed by Dan Brown, are not only absurd, but often physically and/or politically impossible.  That’s one of the reasons why I don’t write political “thrillers,” my one attempt at such having proved dramatically that the vast majority of readers definitely don’t want their realism close to home.

Unfortunately, a greater number don’t want realism to get in the way, or not too much in the way, in science fiction or fantasy, and my editors are most sensitive to this.  This can lead to “discussions” in which they want more direct action, while I’m trying to find a way to make the more realistic indirect events more action-oriented without compromising totally what I have learned about human nature, institutions, and human political motivations.  For example, there are reasons why high-tech societies tend to be lower-violence societies, but the principal one is very simple.  High-tech weaponry is very destructive, and societies where it is used widely almost invariably don’t stay high-tech.  In addition, violence is expensive, and successful societies find ways to satisfy majority requirements without extensive violence [selective and targeted violence is another question].

Another factor is that people seeking power and fortune wish to be able to enjoy both after they obtain them – and you can’t enjoy either for long if you’ve destroyed the society in order to be in control. This does not apply to fanatics, no matter what such people claim, but the vast majority of fanatics don’t wish to preserve society, but to destroy – or “simplify” – it because it represents values antithetical to theirs.

Now… this sort of understanding means that there’s a lot less “action” and destruction in my books than in most other books dealing with roughly similar situations and societies, and that people actually consider factors like costs and how to pay for things.  There are also more meals and meetings – as I’m often reminded, and not always in a positive manner – but meals and meetings are where most policies and actions are decided in human society.  But, I’m reminded by my editors, they slow things down.

Yes… and no…

In my work, there’s almost always plenty of action at the end, and some have even claimed that there’s too much at the end, and not enough along the way.  But… that’s life.  World War II, in all its combat phases, lasted slightly less than six years.  The economics, politics, meetings, meals, treaties, elections, usurpations of elections, and all the factors leading up to the conflict lasted more than twenty years, and the days of actual fighting, for any given soldier, added up to far less time than that.  My flight instructors offered a simple observation on being a naval aviator:  “Flying is 99 percent boredom, and one percent sheer terror.”  Or maybe it was 98% boredom, one percent exhilaration, and one percent terror.

On a smaller and political scale, the final version of Obama’s health care bill was passed in days – after a year of ongoing politicking, meetings, non-meetings, posturing, special elections, etc.   The same is true in athletics – the amount of time spent in training, pre-season, practices, and the like dwarfs the time of the actual contest, and in football, for example, where a theoretical hour of playing time takes closer to three hours, there are fewer than fifteen minutes in which players are in contact or potential contact.

Obviously, fiction is to entertain, not to replicate reality directly, because few read to get what amounts to a rehash of what is now a very stressful life for many, but the question every writer faces is how close he or she hews to the underlying realities of how human beings interact with others and with their societies.  For better or worse, I like my books to present at least somewhat plausible situations facing the characters, given, of course, the underlying technical or magical assumptions.

Often my editors press for less realism, or at least a greater minimization of the presentation of that realism.  I press back.  Sometimes, it’s not pretty. So far, at least, we’re still talking to each other.

So far…

Pondering Some “Universals”

When a science fiction writer starts pondering the basics of science, especially outside the confines of a story or novel, the results can be ugly.  But… there’s this question, and a lot of others that arise from it, or cluster around it… or something.

Does light really maintain a constant speed in a vacuum and away from massive gravitational forces?

Most people, I’m afraid, would respond by asking, “Does it matter?”  or “Who cares?”

Physicists generally insist that it does, and most physics discussions deal with the issue by saying that photons behave as if they have zero rest mass [and if I’m oversimplifying grossly, I’m certain some physicist will correct me], which allows photons to travel universally and generally invariably [again, in a vacuum, etc.] at the speed of light – which is a tautology, if one thinks about it.  Of course, this is also theoretical, because so far as I can determine, no one has ever been able to observe a photon “at rest.”

BUT… here’s the rub, as far as I’m concerned.  First, photons are, or carry, energy.  There’s no doubt about that.  The earth is warmed by the photonic flow we call sunlight.  Lasers produce coherent photonic flow strong enough to cut metal or perform delicate eye surgery.

Second, if current evidence is being interpreted correctly, black holes are massive enough to stop the flow of light.  Now… if photons have no mass, how could that happen?  The current interpretation is that the massive gravitational force stops the emission of light, which suggests that photons do have mass, if only an infinitesimal and currently unmeasurable mass.
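For reference, the textbook accounting – standard physics, not my own speculation – runs as follows.  The relativistic energy-momentum relation lets a massless photon carry both energy and momentum:

\[
E^{2} = (pc)^{2} + \left(mc^{2}\right)^{2}, \qquad m_{\gamma} = 0 \;\Rightarrow\; E = pc = h\nu
\]

And in general relativity, gravity acts on energy and momentum – on spacetime itself – rather than on rest mass alone, so light inside the Schwarzschild radius

\[
r_{s} = \frac{2GM}{c^{2}}
\]

is trapped without the photon needing any mass at all.  That said, experiment can only place upper bounds on a possible photon mass, not prove it is exactly zero, so the question isn’t entirely closed.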

These lead to another disturbing [at least for me] question.  Why isn’t the universe “running down”?  Don’t jump on me yet.  A great number of noted astronomers have asserted that such is indeed happening – but they’re talking about that on the macro level, that is, the entropy of energy and matter that will eventually lead to a universe where matter and energy are all at the same level everywhere, without all those nice gradients that make up comparative vacuum and stars and planets and hot and cold.  I’m thinking about winding down on the level of quarks and leptons, so to speak.

Current quantum mechanics seems to indicate that what we think of as “matter” is really a form of structured energy, and those various structures determine the physical and chemical properties of elements and more complex forms of matter.  And that leads to my problem.  Every form of energy that human beings use and operate “runs down” unless it is replenished with more energy from an outside source.

Yet the universe has been in existence for something like fifteen billion years, and current scientific theory is tacitly assuming that all these quarks and leptons – and photons – have the same innate internal energy levels today as they did fifteen billion years ago.

The scientific quest for a “theory of everything” tacitly assumes, as several noted scientists have already observed, unchanging universal scientific principles, such as an unvarying weak force on the leptonic level and a constant speed of light over time.  On a practical basis, I have to question that.  Nothing seems to stay exactly the same in the small part of the universe which I inhabit, but am I merely generalizing on the basis of my observations and anecdotal experience?

All that leads to the last question.  If those internal energies of quarks and leptons and photons are all declining at the same rate, how would we even know?  Could it be that those “incredible speeds” at which distant galaxies appear to be moving are more an artifact of changes in the speed of light?  Or in the infinitesimal decline of the very energy levels of all quarks, etc., in our universe?

Could our universe be running down from the inside out without our even knowing it?

The Absolute Need for Mastery of the Boring

A few weeks or so ago, I watched two college teams play for the right to go to the NCAA tournament.  One team, down twenty points at halftime, rallied behind the sensational play of a single star and pulled out the victory by one point in the last seconds.  That was the way television commentators and the print media reported it.  I saw it very differently.  One of the starting guards for the losing team missed seven of twelve free throws, two of them in the last fifteen seconds.  This wasn’t a fluke or a bad day for that player – he had a season-long 40% free-throw percentage.  And just how many games in the NCAA tournament have been lost by “bad” free-throw shooting?  Or won by good free-throw shooting?  More than just a handful.

Good free-throw shooting is clearly important to basketball success.  Just look at the NBA.  While the free-throw shooting average for NCAA players is 69%, this year’s NBA average is 77%, and 98% of NBA starters have free-throw percentages above 60%, with 75% of those starters making more than three-quarters of their free throws.
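A quick expected-value check – a Python sketch using just the percentages above – shows how large that gap is over the twelve attempts the losing guard had:

    # Expected makes over 12 free-throw attempts, at the percentages
    # cited above (40% for the guard in question, 75% for a typical
    # NBA starter).
    attempts = 12
    for label, pct in [("40% shooter", 0.40), ("75% shooter", 0.75)]:
        print(f"{label}: about {attempts * pct:.1f} of {attempts} made")
    # 4.8 versus 9.0 -- a four-point swing, several times the final
    # margin in a game decided by a single point.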

To my mind, this is a good example of what lies behind excellence – the ability to master even the most boring aspect of one’s profession. Another point associated with this is that simply knowing what a free throw is and when it is employed isn’t the same as being able to do it.  It requires practice – lots of practice. Shooting free throws day after day and improving technique is not exciting; it’s boring.  But the fact that there are very, very few poor free-throw shooters in the NBA is a good indication that mastery of the boring pays off.

The same is true in writing.  Learning grammar and even spelling [because spell-checkers don’t catch everything, by any means] is also boring and time-consuming, and there are some writers who are, shall I say, slightly grammatically challenged, but most writers know their grammar.  They have to, because editors usually don’t have the time or the interest to clean up bad writing.  It also gets boring to proofread page after page of what you’ve written, from the original manuscript, the copy-edited manuscript, the hardcover galleys, the paperback galleys, and so on… but it’s necessary.

Learning how to fly, which most people believe is exciting, consists of a great deal of boredom, from learning to follow checklists to the absolute letter, to practicing and practicing landings, take-offs, and emergency procedures hour after hour, day after day until they’re second nature.  All that practice is tedious… and absolutely necessary.

My opera-director wife is having greater difficulty each year in getting students to memorize their lines and music – because it’s boring – but you can’t sing opera or musical theatre if you don’t know your music and lines.

I could go on and on, detailing the necessary “boring” requirements of occupation after occupation, but the point behind all this is that our media, our educational system, and all too many parents have instilled a message that learning needs to be interesting and fun, and that there’s something wrong with the learning climate if the students lose interest.  Students have always lost interest.  We’re genetically primed to react to the “new” because it was once a survival requirement.  But the problem today is that the skills required to succeed in any even moderately complex society require mastery of the basics, i.e., boring skills, or sub-skills, before one can get into the really interesting aspects of work.  Again, merely being able to look something up isn’t the same as knowing it, understanding what it means, and being able to do it, time after time without thinking about it and without having to look it up repeatedly.

And the emphasis on fun and making it interesting is obscuring the need for fundamental mastery of skills, and shortchanging all too many young people.

Original

Last week, in my semi-masochistic reading of reviews, I came across a review of The Magic of Recluce that really jarred me.  It wasn’t that the review was bad; nor was it a rave.  The reviewer noted the strengths of the book and some areas she thought weak, or at least that felt rough to her.  What jarred me were the words and the references which compared it to books that had been published years afterward, as if The Magic of Recluce happened to be copying books that actually appeared after it.  Now, this may have been as much my impression as what the reviewer meant, but it struck a chord – an off-key one – in my mind, because I’ve seen more than a few reviews, especially in recent years, noting that The Magic of Recluce was good, decent… or whatever, but not as original as [fill in the blank].

Now… I’m not about to get into comparative “quality” — not in this blog, at least, but I have to admit that the “not so original” comment, when comparing Recluce to books published later, concerns me.  At the time the book was published, almost all the quotes and reviews noted its originality.  That it seems less “original” now to newer and often younger readers is not because it is less original, but because there are far more books out with differing magic systems.  Brandon Sanderson, for example, has developed more than a few such systems, but all of them came well after my systems in Recluce, Erde, and Corus, and Brandon has publicly noted that he read my work well before he was a published author.

The word “original” derives from “origin,” i.e., the beginning, with the secondary definition that it is not a copy or a duplicate of other work.  In that sense, Tolkien’s work and mine are both original, because our systems and the sources from which we drew are substantially different.  Tolkien drew from linguistics and the greater and lesser Eddas, and, probably through his Inkling connections with C.S. Lewis, slightly from Wagner.  I developed my magic system from a basis of physics.  Those are the “origins.”

The other sense of “original” is that of preceding what follows, and in that sense, my work is less original than Tolkien’s, but more “original” than that of Sanderson or others who published later, for two reasons.  First, I wrote it earlier than did those who followed me, and second, I developed magic systems unlike any others [although the Spellsong Cycle magic has similarities to Alan Dean Foster’s Spellsinger, it rests on a fundamentally different technical concept].

There’s also a difference between “original” and “unique.”  While it is quite possible for an original work not to be unique, a truly unique work must be original, although it can be derivative.

In any case, my concerns are nothing compared to those raised by the reader review I once read that said that Tolkien’s work was “sort of neat,” even if he did rip off a lot from Terry Brooks.


Anyone Can Do That

The other day I received an email from a faithful reader who noted that he had stopped reading The Soprano Sorceress because the song magic was “too easy.” Over the years I’ve received other comments along the lines that all she had to do was open her mouth and sing.

Right. Except that under the magic system in Erde, the song had to be perfectly on pitch and in key; the words had to specify what had to be accomplished; and the accompaniment had to match. In the opening of that book, a sorcerer destroyed a violinist whose accompaniment was imperfect — because it could have threatened his life. Comparatively few professional singers, except classically trained opera singers, can maintain such perfection in a live performance. And some of those don’t have the best diction — yet clear diction would be vital in song spell-casting. Now… try it in the middle of a battle or when your life is under immediate threat.

I bring this up because there are certain skills in any society, but particularly in our society, that almost everyone thinks they can do. Most people believe they can sing, or write, or paint almost as well as the professionals, and almost all of them think they can certainly critique such with great validity.

I’m sorry. Most people have a far higher opinion of their skills than can be objectively confirmed — and that’s likely an understatement. Even in noted music conservatories, only a minority of graduates are good enough, talented enough, and dedicated enough to sing professionally. The same is true of noted writing programs or established art programs. For that matter, comparatively few graduates of noted business schools ever make it to the top levels of business organizations or corporations.

A similar attitude pervades our view of sports. Tens of millions of American men identify with sports and criticize and second-guess athletic professionals whose skills they could never match, under pressures they can only vaguely comprehend. Monday-morning quarterbacking used to be a truly derogatory term, enough so that its use tended to stop someone cold. Now it’s almost jocular, and everyone’s an expert in everything.

Is all this because our media make everything look easy? Because the media concentrate only on the handful of individuals in the arts, athletics, and professions who are skilled, dedicated, and talented enough to make it look “easy”? Or is it because our society has decided to tell students that they’re wonderful, or have “special” talents, even when they’re failing?

The bottom line is that doing anything well is not “easy,” no matter how effortless it looks, especially when one of the talents of the best is to make that accomplishment look effortless… and that usually means that only those who truly understand that skill really know what it took to make it look easy or effortless.

The Impact of the Blog/Twitter Revolution

The Pew Research Center recently reported that among 19-28 year-olds, blogging activity dropped from close to thirty percent in December 2007 to around fifteen percent by the end of 2009, while the number of teenagers who blog continues to decline. Those under thirty now focus primarily on Facebook and Twitter. On the other hand, blogging has increased among adults over thirty by close to forty percent in the last three years, although the 11% of those adults who do blog is still below the 15% level of the 19-28 age group. Based on current trends, that gap will close significantly over the next few years.

These data scarcely surprise me. After all, once you’ve blurted out, “Here I am,” and explained who you are, maintaining a blog with any form of regularity takes thought, insight, and dedicated work, none of which are exactly traits encouraged in our young people today, despite the lip service paid to them. And, after all, while it can be done, it’s hard to fully expose one’s lack of insight and shallowness when one is limited to the 140 characters of a Twitter message, and since Facebook is about “connecting” and posturing, massive thought and insight are not required.

There is a deeper problem demonstrated by these trends — that technology is being used once more to exploit the innate tendency of the young to focus on speed and fad — or “hipness” [or whatever term signifies being cool and belonging]. All too many young adults are functionally damaged in their inability to concentrate and to focus on any form of sustained task. Their low boredom threshold, combined with a socially instilled sense that learning should always be interesting and tailored precisely to them, makes workplace learning difficult, if not impossible, for far too many of them, and makes them want to be promoted to the next position as soon as possible.

As Ursula Burns, the President and CEO of Xerox, recently noted, however, this urge for promotion as soon as one has learned the basics of a job is horribly counterproductive for the employer… as well as for the employee. The young employee wants to move on as soon as he or she has learned the job. If businesses acquiesce in this, they’ll always be training people, and they’ll never be able to take advantage of the skills people have learned, because once they’ve learned the job, they’re gone from that position, either promoted or departed to another organization in search of advancement. It also means that those who follow such a path never fully learn, and never truly refine and improve those skills.

This sort of impatience has always existed among the young, and it’s definitely not unique to the current generations. What is unique, unfortunately, is the degree to which society and technology are embracing and enabling what is, over time, effectively an anti-social and societally fragmenting tendency.

Obviously, not all members of the younger generation follow these trends and patterns, but from what I’ve learned from my fairly widespread net of acquaintances in higher education across the nation, a majority of college students, perhaps a sizable majority, are in fact addicted to what I’d call, for lack of a better term, “speed-tech superficiality,” and that’s going to make the next few decades interesting indeed.

Miscellaneous Thoughts on Publishing

Several of the comments in the blogosphere during the Macmillan-Amazon dust-up focused on the point I and others had raised: that, depending on the publisher, from thirty to sixty percent of all books lost money, and that those losses were made up by the better-selling books. A number of commenters to various blogs essentially protested that publishers shouldn’t be “subsidizing” books that couldn’t carry their own weight, so to speak. At the time, I didn’t clarify this misconception, but it nagged at me.

So… almost NO publishers print books that they know will lose money. The plain fact of the matter is that when a publisher prints a book, it is usually with the expectation that it will at least break even, or come close. At times, publishers know a book will be borderline, because the author is new, but they publish the book in the hopes of introducing an author whose later books, they believe, will sell more. While the statistics show that 30%-60% of books lose money, the key point is that the publishers don’t know in advance which books will lose money. Yes, they do know that it’s unlikely that, for example, a Wheel of Time or a Recluce book will lose money, but no publisher has enough guaranteed best-sellers to fill out their printing schedule. Likewise, they really don’t know who will become a guaranteed best-seller. Just look at how many publishers turned down Harry Potter. Certainly, no editors ever thought that the Recluce books would sell as well or for as long as they have. Not to mention the fact that there are authors whose books were at the top of The New York Times bestseller lists whose later books were anything but bestsellers. The bottom line is simple: Publishers do not generally choose to print books that they know will lose money just to subsidize a given book or author. They try to print good-selling books, and they aren’t always successful.
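The portfolio logic is easy to make concrete. Here’s a minimal sketch in Python, with invented figures rather than any publisher’s actual numbers, showing how a slate can make sense ex ante even though roughly four titles in ten will lose money – because no one knows in advance which four:

    # Toy publishing slate; every figure is invented for illustration.
    # Ex ante, every title looks alike: a 40% chance of losing money.
    p_loss = 0.40
    loss_per_flop = 20_000       # average loss on a money-losing title
    gain_per_success = 50_000    # average profit on a title that earns out

    expected_per_title = (p_loss * -loss_per_flop
                          + (1 - p_loss) * gain_per_success)
    print(f"expected profit per title: ${expected_per_title:,.0f}")   # $22,000

    slate_size = 10
    print(f"expected profit on the slate: ${slate_size * expected_per_title:,.0f}")
    # Positive for every title and for the slate as a whole -- yet, after
    # the fact, about four of the ten will still have lost money.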

Last week, Bowker released sales figures for the book publishing industry that revealed that only two percent of all book sales in 2009 were e-books, while 35% were trade paperbacks, 35% were hardcovers, and 21% were mass market paperbacks. Interestingly enough, though, while chain bookstores sold 27% of all books, e-commerce sites, such as Amazon and BarnesandNoble.com, sold 20% of all books, including hardcovers, trade paperbacks, and mass market paperbacks. People talk about how fast matters can change, but even “fast” takes time. Jeff Bezos started Amazon in 1994. Today, based on the Bowker figures, Amazon probably accounts for between nine and fourteen percent of all U.S. book sales, but that’s after sixteen years of high growth. A study by Nielsen [the BookScan folks] also revealed that forty percent of all readers would not consider e-books under any circumstances. To me, those figures suggest that, while e-books may indeed be the wave of the future, the industry isn’t going to be doing big-time surfing on that wave for many years to come.
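For what it’s worth, that nine-to-fourteen percent range is my own back-of-the-envelope estimate, not a figure Bowker reported. If e-commerce sites sold 20% of all books, and if one assumes that Amazon accounted for somewhere between 45% and 70% of those e-commerce sales [an assumption on my part, and only that], then 0.20 × 0.45 works out to about 9% of all U.S. book sales, and 0.20 × 0.70 to 14%.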

Total book sales were down about three percent last year, but fiction sales were up seven percent. The overall decline was linked to a decrease in sales of adult nonfiction. That suggests there was a definite increase in the market for escapism in 2009.

And one last thought… in 1996, Amazon was still struggling, and there was a question as to whether it would really pull through — and then Jeff Bezos introduced reader reviews, and Amazon never looked back. Because readers could offer their own views… they bought more books from Amazon. Do so many people feel so marginalized that being able to post comments changes their entire buying habits? Another downside to reader reviews is that the increasingly wide usage of the practice — from student evaluations to Amazon reviews — reinforces the idea that all opinions are of equal value… and they’re not, except in the mind of the opinion-giver. Some reader reviews are good, thoughtful, and logical. Most are less than that.

So, in yet another area, good marketing has quietly undermined product excellence.

Thoughts on "The Oscars"

Actually, this blog deals with my reaction to the expressed thoughts of others about the Oscar ceremony. Before beginning, however, I will cheerfully admit that I watch almost all movies either on DVD or satellite, often years after they’re released.

Now… for those thoughts. By Monday morning, in all too many media outlets, so-called columnists and pundits were complaining that the ceremony was too long and that too much time was wasted on “minor” awards that no one cared about, such as make-up, costumes, sound mixing, and the like. I didn’t happen to see a complaint about special effects, but maybe I overlooked it.

There are two BIG things that bother me about all this Monday morning quarterbacking. First, the Oscars were designed to recognize all aspects of film-making, not just the six “biggies.” As a matter of fact, I could make the argument that those who have been nominated for those — best picture, best director, best leading actor and actress, best supporting actor and actress — need the recognition far less than all the others who enabled the “biggies” to shine. Without a good script, the best actor looks stupid, as some of the greatest names in film have proved a few times. With the wrong music, the right mood doesn’t get created, and Richard Nixon certainly proved that make-up makes a difference. How can you have a Jane Austen period piece, or a Young Victoria, without the right costuming? The entire success of Avatar depends not so much on the actors as on all the things that aren’t the actors. The actors and directors are always recognized. Why begrudge all the others a few hours once a year when a few of them actually get noticed?

In addition, the ceremony and the awards were originally developed to provide such recognition — not to provide prime-time, viewer-oriented “entertainment.” But, of course, because many people have become interested, the “Oscar ceremony” is now packaged as entertainment, and the vast majority of the more technical awards are presented at another ceremony — noted at the “real” Oscar ceremony with a quick picture and thirty seconds of explanation [out of three hours], without even a listing of names. After all, why should anyone be obliged to read a long list at the “official” Oscar ceremony?

My second BIG objection is that movie-making, especially today, is a highly technical enterprise that requires great expertise, and yet these commentators seem to want to ignore the very expertise that makes such great films possible in favor of glitz and celebrity. In a way, it reminds me of the Roman Empire, where the great majority of the engineers who designed all those buildings, bridges, and aqueducts were slaves — more privileged slaves, to be sure — but slaves nonetheless. And what happened as even the minimal respect for those slaves vanished in the decadence of glitz and ancient celebrity?

What these commentaries about the dullness of recognizing expertise reveal, unfortunately, is a deplorable culture shift away from appreciating the technology that underpins everything we do, including even one of the least substantive aspects of our society — cinema — toward even more superficiality. And even that superficiality has to be so current. Last year is so passé. As for more than a year ago… forget it.

The polite, bored, minimal applause that followed the heartfelt tribute to John Hughes was incredibly painful to hear. Here was a man who gave his life to his art and combined humor with insight, and the general reaction was, “We’re bored.” And then the “In Memoriam” section was so abbreviated and rushed through so quickly, with names eliminated whenever the camera cut to James Taylor singing, that it was almost a travesty.

Are we so into glitz that we can’t spare an hour or two once a year to allow a little recognition for those who went before and for a comparative handful of experts, who represent tens of thousands of technical specialists whom we never otherwise acknowledge, yet whose contributions are absolutely vital to the film industry? Is that really too much to ask?

And, remember, I’m not even a film buff.

Reader Expectations?

The other day I got an email from a male reader who “finally read” The Soprano Sorceress… and enjoyed it, and said he’s looking forward to the others. What was interesting about the email was that this reader — a career member of the military — admitted he’d put off reading the book because the protagonist was female. After receiving that email and then getting the early sales reports on Arms-Commander, I got to thinking matters over. I’ve written a number of books with female protagonists, and frankly, while they’ve sold well, they haven’t sold as well as other comparable books of mine with male protagonists, even though, in general, they’ve gotten far better reviews.

Now… obviously, female protagonists don’t kill book sales in general, or Patricia Briggs or Marjorie Liu or any number of other authors wouldn’t be on The New York Times bestseller lists. So, if my books with female protagonists, which get better reviews, don’t sell as well as those with male protagonists, why is that so? I’d certainly hope that it’s not that better-written books don’t sell as well.

What I’ve tentatively concluded is that readers form expectations of writers, and when an author writes something that appears to go against those expectations in a negative way, sales suffer. I’d been writing professionally for twenty-five years when the first book I wrote from a female viewpoint — and, yes, it was The Soprano Sorceress — was published, and I had eighteen books in print by then, all of which featured strong male characters and many of which were military/action oriented. Although I have always written strong female characters, they were usually viewed from the male perspective.

In retrospect, I shouldn’t have been surprised. I upset the expectations I’d inadvertently established over more than twenty years. In a way, perhaps I should have been grateful that the fall-off in sales was merely “noticeable,” rather than catastrophic. Part of the loss of readers was probably mitigated by the fact that I’d been publishing fantasy for some six years before I took on writing a fantasy with a female protagonist, so that the shift was likely not so wrenching as it might otherwise have been.

And yet, at the same time, why didn’t I pick up more readers from among those who like strong female protagonists? For exactly the same reason! Those readers had likely scanned earlier books of mine and decided they were not to their tastes, and, having done so, were not likely to return to peruse later works of mine unless someone called a later book to their attention.

This sort of expectation-generation, unfortunately, is not helped by current publisher marketing strategies, where all too many authors are encouraged to use one pen name for one type of book, and another for a different type, and where an author’s name is a stringently and narrowly defined “brand.” That strategy is akin to applying fast-food marketing techniques to books, and while it might sell more books in the short run, it definitely has the downside of limiting publication of books and/or authors that don’t “brand” easily.

This “branding” also has the side effect of effectively reducing reader exposure to a wider range of fiction. For example, I write, under my own name, straight SF, fantasy, alternate world SF, and what might be called science fantasy, and I write from both male and female points of view, and I use different tenses in different books. Offhand, I don’t know of another author who does all that — not under the same name, although I do know some writers with multiple pen names for differing styles.

In the end, though, I have to ask, just what are readers losing by the creation of such rigid expectations of author names? What discoveries will they never make… what intellectual and mental challenges will they never encounter… what unexpected pleasures will they miss?

And The Winner Is…

No, I’m not giving awards, but commenting on the social implications of the recent Winter Olympics. Put bluntly, there’s something really wrong with the world when a national psyche, such as Canada’s, rests on the outcome of a hockey game. Did this athletic contest produce a cure for cancer, a new space drive that will allow us access to the planets, or a way to effectively deal with terrorism? For that matter, did any of the Olympic contests really determine the best athletes in any given endeavor? No, they did not. They determined who was the best on a given day. One could even claim that the U.S. was the better hockey team, since the two teams split their contests and, overall, the USA scored more goals. But Canada’s national psyche was “saved” because one game was somehow more important than another game. A game! So be it… sadly.

And, by the way, when did games and viewing them become so important? Has it ever happened before in history? Several times, as a matter of fact — in ancient Greece, before its civilization collapsed; in Rome, after the fall of the Republic; and in Central America, before the Mayan civilization became too weak to survive environmental catastrophe. There might be other examples, as well, but those spring to mind immediately.

More to the immediate point, the Winter Games — and the television hoopla surrounding them — trivialized the lives of everyday people everywhere. A skater was praised for competing and winning a bronze medal in the days following her mother’s death. Yes, she was courageous, but how many people, every day, have to go on with their lives after a loved one dies unexpectedly? Several other athletes were lauded for overcoming difficulties in order to triumph in their fields, and the media played it up as if they were the only ones who had ever done so. I don’t recall any media hoopla or medals for my wife when she had to sing a full concert, in order to keep her job, while her mother was dying, nor do I recall any great praise for the student who had to sing in a competition after surviving a car crash, a broken shoulder, and the death of a beloved aunt. And there seemed to be a great deal of concern over whether Robert Jordan’s Wheel of Time series would be finished, concern and commentary that lasted far longer than the brief praise of his career at the time of his untimely death.

It was reported that coverage of the Winter Olympics actually outdrew American Idol in terms of viewers. Why should this have been any surprise? They’re different sides of the same coin. Both reward a performance of the moment, not necessarily sustained excellence, and both performances, frankly, have little to do with improving the human condition, except for momentarily making those watching feel better. I’m not taking anything away from the athletes; they’ve worked long and hard to achieve excellence in their fields. But to showcase such performances and to surround them with such hype… what does that say about our society?

Now, before anyone jumps to the conclusion that I’m just a geeky science fiction and fantasy writer who has no understanding or appreciation of sports, I will point out that, for better or worse, I was one of those athletes. In addition to lettering in high school sports, I was a competitive swimmer for fifteen years, all the way through college, and although I was just a touch too slow to be Olympic caliber, I do know firsthand what it takes to succeed. I’m not against sports; I’m against the glorification of the spectator side of sports, against glitz overwhelming true achievement, and against the creation of an image where sports achievement is blown totally out of proportion to solid values in life.

We live in a world where American Idol far outdraws opera, yet opera is far more demanding and technically superior. Where stock car drivers make thousands of times what those other drivers — such as truckers and highway patrol officers — do. Where graphic novels sell far more than books that actually make readers think in depth. Where the glitz and financial manipulations of Wall Street quants and financiers draw rewards hundreds of thousands of times greater than the salaries of those who police our streets, fight our fires, and educate our children.

History won’t remember American Idol, nor the winners of Olympic Games. If history is even read by the coming generations, it might list Shakespeare, Edison, Washington, Jefferson, Martin Luther King, Pasteur, and Einstein, among others who made real accomplishments.

What did the Winter Olympics say about us as a society? That the winner is… those with the greatest ability to entertain and dazzle, rather than those who provide us with solid achievements?

The Difficulty of Optimism

The other day, Jo Walton, another author, posted a commentary on Tor.com about the decline in “optimistic” science fiction books, claiming that she found few SF books that showed a “positive future” and asking “Why is nobody writing books like this now?”

I won’t quote extensively from her article, but she does make the point that optimistic science fiction was written in the depths of the Great Depression, through WWII, and through the 1950s, none of which were exactly the most cheerful of times, despite a certain later gloss of nostalgia, while noting that today most SF views of the future are rather grim.

What struck me about both her commentary and the initial responses posted was that both Jo and the commenters restricted their views to a comparative handful of writers, and those writers tend to be the ones who have high visibility in the F&SF fan community and press. Even some writers who have fairly high visibility and who show a certain optimism about the future — such as Joe Haldeman, Michael Flynn, David Drake, or Walter Jon Williams — aren’t mentioned. While my optimism is of the somewhat cynical variety, I do often write about futures with optimistic features and places, and I’m optimistic about solutions — just not about their costs… and needless to say, I’m not mentioned either.

So, as Jo herself asks, how much of this is merely seeing what one wants to, and how much is grounded in a fundamental change in what is being written? Another relevant question is: How does one define optimistic? From the viewpoint of the tens of thousands of American mothers who lost children to so-called “childhood diseases” every year prior to about 1940, the health situation we have today would look incredibly optimistic. The same would be true of all the slaves in the South in 1850. On the other hand, the Jeffersonians of 1800 would be appalled by the centralized banking and commercially dominated economy of the twenty-first century. For all the housewives of the years before 1950, modern conveniences would likely seem the ultimate in optimism, and modern long-distance transport far more optimistic than sailing ships and horse-drawn wagons.

I’m a great believer in the fact that life comes in all shades, particularly of gray, and the events of the past half-century, in particular, have reinforced that feeling in millions of Americans. We have comparatively few Americans, in percentage terms, in grinding poverty, particularly compared to most of the world’s population, but we also have far higher taxation rates than we ever had when grinding poverty was the norm for twenty percent of our population. Despite the tabloid headlines, civic violence is far lower than it was a century ago, but there are far more restrictions on personal acts and behavior. And so it goes. In a way, one might even call these trade-offs the “loss of societal innocence.” This makes it difficult for an intelligent writer to present an unblemished and totally optimistic view of a future where technology will solve all the major problems facing society — or even one of them.

Yet, despite my quibbles with what Jo Walton has written, and despite the efforts of those of us who struggle to show optimism in depicting the future, I think she touches on a vital point. It is getting harder and harder to be both realistic and highly optimistic in writing about probable futures, although I do believe, as I think my writing shows, in qualified optimism.

In the end, the question becomes: Can any realistic portrayal of a future high-tech society offer anything other than qualified optimism, given higher population levels, higher and often unrealistic societal expectations, and the need to maintain basic levels of order within society itself?

The Iceberg/Powder Keg

Last week, a biochemist who was denied tenure shot six of her colleagues, and three died. An engineer set fire to his house and flew his private plane into an IRS office after publishing a manifesto describing how, time and time again, tax judgments by the IRS had wiped out his savings and retirement. What those reports didn’t reveal is that such incidents are the tip of an iceberg that’s been quietly growing over the past several decades.

What is this iceberg? It’s the ever-growing pressure in all areas of society to do more with less, and it’s been exacerbated by the economic meltdown and recession.

American manufacturing, as I noted earlier, is hiring as little as possible, and is either automating as much as possible or simply closing American facilities and importing goods from off-shore facilities or manufacturers in order to keep costs down. Admittedly, some facilities have retained hourly-paid employees, but they have kept those employees’ hours the same or cut them while expecting higher production levels.

The same sorts of pressure have hit education at all levels. In most states, teachers and aides have been let go, and class sizes have increased. More students are going to college to try to improve their skills and qualifications, but across the entire nation, college faculties have been reduced and classes have been cut, making it harder and harder for students to graduate in a timely fashion and putting additional stress on the remaining faculty, the students, and their parents. Yet state legislatures are still demanding greater cuts in higher education because tax revenues are down, and the legislators are feeling pressure not to increase taxes. A professor who is denied tenure in this climate may never teach again, and the granting of tenure, no matter what anyone says, can be both arbitrary and unfair. Even when it is not, the process is highly stressful and getting more so, because everything is reviewed under a microscope. Over the past decade, I’ve seen or read about a number of cases, including one shooting, and another case in which a professor literally attacked campus security officers, kicking and screaming, while being removed from an office he refused to vacate.

The TEA Party protesters are another symptom of this pressure, complaining primarily that taxes are too high and government too intrusive.

This pressure affects everyone and shows up in different ways. For example, I’m writing more books than I was ten years ago, and according to readers and critics, the books I’m writing now are better than the ones I was writing then, but I’m making less, even though the price of books is slightly higher. Why? Because I’m selling fewer copies of each older [backlist] title on average each year. This isn’t limited to me. Once you get below the top twenty best-selling authors or so, in general, book sales are lower. Certainly, it now takes fewer copies sold to make the lower rungs of The New York Times bestseller list, and this reduction in reading has hit midlist and beginning authors especially hard. Much has been made of the fact that younger readers aren’t reading as much, so much so that another factor has been ignored — one that my wife and other professionals have pointed out to me time and time again: they’re all working longer hours, and they’re too tired to read as much as they used to. Now… there are those who’ve been downsized out of higher-paying jobs, and they have the time to read — but they don’t have the money to buy books, or as many books.

Why are there more and more “reality shows” on television? One reason is that they’re far cheaper to produce — another result of trying to do more with less. Another reason is likely that they offer a way for hard-pressed individuals to “succeed” outside the normal occupational channels, where too often these days harder and longer and better work is required just to keep a job, rather than marking one for advancement.

Yet, for all the commentary on the “recession,” on jobs, on politics, I have yet to see a commentary on what all of these factors add up to for those who are still employed — ever-increasing pressure on working Americans, from those at the lowest level to doctors, professors, and other professionals, who feel more and more that they’re being backed into a corner from which they cannot escape.

Whether that iceberg becomes a powder keg — that remains to be seen.