Archive for the ‘General’ Category

Lack of Communication

One of the biggest problems my wife and I keep coming across in this supposedly ultra-communicative world is… lack of real communication. How can this be in a world filled with cellphones, IMs, Twitter, email, and even old-fashioned telephones? Actually, it’s very predictable. In effect, all this modern technology increases the noise-to-signal ratio and the demands on personal time, decreases the real and productive time for communication, and makes live, real-time, person-to-person contact, especially in commerce and business, more and more difficult.

For example, because my wife suffers from asthma, she wanted to see if she could get a swine flu vaccination. Both the state health website and the local newspaper reported that vaccinations were available at certain hours from the hospital, a local pharmacy, and the health department. After getting periodic blood work at the hospital, she attempted to find out about getting a vaccination. After waiting some fifteen minutes to get an answer from a real live person, she was told that, contrary to published information, the hospital didn’t have the vaccine and that she would have to get it from a pharmacy or her private doctor. On her way home she stopped by the pharmacy, where, after waiting, she was told that they only gave “regular” flu vaccinations [which she already had]. Once at home, she spent over an hour making calls to the doctor’s office [calls necessitated by a continually busy line], only to discover that the only source of vaccine in the city was the health department. Calls to various functionaries in the health department were rewarded with voice message after voice message, none of which answered her question, until, some time later, she got a real person, who informed her, first, that no one over 49 could get a vaccination, regardless of health conditions, and, second, that there was in fact so little vaccine that only small children were being vaccinated. All in all, that process took close to two hours.

A month or so ago, I ordered a present for my brother from a company I’ve patronized for years. I attempted to order by telephone, but never could get through. So I tried the website and placed the order. Within minutes I had an order confirmation. Except… after a week, I had no shipment confirmation. So I tried to telephone… and again, after listening to various voice mail messages and punching buttons and waiting ten minutes… I was disconnected. I sent an email asking for an order status and got no response, even after two days. I left a telephone message, since no real person would answer, and another two days passed with no response. I tried again, and after close to fifteen minutes of waiting, a real person answered — and I stated the problem. She promised to look into it. Seven hours later, she called back, apologizing because it had taken six hours to get through to her own warehouse, and explaining that the order had been sidetracked but was now being packed and sent on its way. That was six days ago, and it hasn’t arrived at my brother’s house yet.

I could follow these examples with at least a half-dozen more, all drawn from events of the past year, and all involving similar problems. But what I want to know is why anyone in his or her right mind thinks that technology actually improves communications. Oh… it’s great for sending things to people… but for actually communicating… it seems to me that the inundation factor has effectively reduced two-way communication. I’m sure all the voice mail screens and messages reduce manufacturer and office costs… but they certainly increase my costs and waste more of my time, and that’s not so much cost-saving as cost-shifting.

Modern communications have many advantages, but when there’s a problem to be solved… the systems aren’t up to it… and don’t tell me that they’re saving me time. My computer saves me time; company voice response systems and voice menus and endless options waste that time.

"A Stellar Performer"

The problem with “stars” and stellar performers in any field is that the description includes three categories, not one. The first and most obvious is the individual who is indeed stellar and recognized as such. The second is the individual who is recognized as stellar, but who is only competent, or even less. The third is the individual who consistently performs at a stellar level and is never recognized as such.

In some areas, particularly those where “popularity” comprises a large part of what determines “stardom,” a category that includes not just cinema and entertainment but many corporate CEOs, there’s often little distinction between the first and second categories, especially when one of the factors that determines such stardom is appearance and “presence,” rather than performance.

In areas of what one might call more concrete achievement, or in the arts, recognition of stellar achievement is often either ignored or overlooked, at least until the achiever is safely dead. Caesar Augustus was the first ruler of the Roman Empire, but his success was largely founded on the technical support of Marcus Agrippa, who designed the weapons and built the fleets Augustus needed, not to mention masterminding the battle of Actium that destroyed Antony’s fleet… and building the original Pantheon and rebuilding and modernizing much of the city of Rome. Van Gogh never sold a painting in his lifetime [except to his brother].

Wolfgang Amadeus Mozart was considered a gifted and highly competent composer during his lifetime, but others, such as Salieri, were the stars. Bach was thought to be a very good organist who wrote so much competent music that few in his lifetime recognized his genius. Jane Austen sold only a few thousand books in her lifetime, fewer than ten thousand, but today her works have sold in the millions and have spawned cinematic success after success.

In the corporate and political worlds, both in the past and today, image determines “stellar performers” more often than actual performance does. A number of financial publications documented about a year ago that, on average, those CEOs who were less visible and less highly compensated tended to consistently outperform the “stars.” During his lifetime, Warren Harding was greatly beloved and popular, despite being possibly the worst president of the United States, and John F. Kennedy remains beloved to this day, despite a lack of any significant achievement, except perhaps avoiding war with the USSR, and in spite of his extravagant, if concealed, widespread philandering.

Then there are the stellar performers who are seldom if ever recognized. Often these are teachers who produce outstanding student after outstanding student, students who achieve great success but seldom mention, or mention only in passing, the teacher who launched them. Some, such as Nadia Boulanger, are noted, but most are not. Sometimes, they’re authors who produce good or even great book after book, but who never catch the “critical” eyes of reviewers or scholars. At times, they’re in unlikely fields, such as Bob Lee of Colorado, who understood politics better than any man I ever encountered in twenty years in the political arena, who mentored an entire generation of politicians and political operatives, and who died almost forgotten by so many whose careers he had made possible.

From what I’ve seen, the unrecognized stellar performers far outnumber the ones who are lauded and praised, and in many, many cases, the performance of the unrecognized stars is superior to that of the recognized ones. So why do we as a society tend to over-reward image, even when such images are so often based on little or no substance?

Wrong!… and Socially Irresponsible

Last Friday night, the comedian and social critic Bill Maher stated that vaccinations for the swine flu did no good. In a discussion with heart surgeon and former Senator Bill Frist, Maher went on to say that immunizations aren’t that helpful for any diseases and then proceeded to claim that because the flu virus mutates so quickly, immunizations do no good. Maher ignored both Frist’s statistical proofs and his personal experience as a doctor, dismissing the statistics out of hand and the experience as “anecdotal.” Not only that, but he apparently dispatched a Twitter message to thousands suggesting that anyone who got a swine flu vaccination was an idiot. Since I’ve just recently discussed the ignorance of the anti-vaccine advocates, I won’t deal in detail with the medical side here, but rather with an equally troubling aspect of Maher’s totally false assertions.

Frankly, I’d always thought Maher was more intelligent than that, but clearly he’s out of his depth when talking about diseases. Yes, the flu virus does mutate, but the mutations in the course of a year don’t render the vaccination ineffective. In fact, one of the reasons why young people, those under 30, are at so much greater risk than older adults is that older people have been exposed to flu strains and vaccines similar to the H1N1 strain, and those past exposures have given them greater resistance, and in some cases, immunity.

But what concerns me most about Maher’s ignorance and arrogance — and he was arrogant and patronizing in the interchange — is what it reveals about too many of the current generation of commentators and comedians. If I claim something untrue and libelous about someone, particularly in print, I could face a lawsuit and be responsible for damages. But if Maher, or any other popular media figure, purveys blatantly wrong information that could lead to someone dying because they decided not to be vaccinated, there’s no effective way to prove that the individual refused vaccination solely because of Maher’s comments, even though those comments create and reinforce an unfounded belief among some segments of the population that vaccines are ineffective and dangerous. In effect, Maher and others who purvey dangerously false information get a free pass.

The First Amendment effectively guarantees the freedom of the press [and media] to allow writers and talking heads to spout any nonsense they want, but the problem with this is that in our media-driven culture, all too many people take as gospel what their favorite “talking head” says. That’s one reason why so many Americans believe things that aren’t true and that may be harmful, or in this case, deadly to them. Yet trying to legislate a fix here is far worse than the problem, because, unlike the case for vaccines, many public issues aren’t nearly so clear-cut as to what is “the truth,” and all too often government itself has a vested interest in misrepresentation.

Thus, public figures, whether or not they like it or accept it, do in fact have a social responsibility not to set forth total falsehoods as truth. The right to freedom of speech may allow a freedom from moral and ethical standards of conduct, as too many public figures seem to demonstrate at least upon occasion, but those freedoms do not make the purveying of falsehoods ethically correct. And when a public figure forthrightly advocates a course of conduct that creates a public hazard or danger, the rest of us have a responsibility to bring those falsehoods and misstatements to light.

So I’ll put it as clearly as I can. Maher’s words were not only flat-out wrong; they were blatantly socially irresponsible… and, with thousands of lives at stake, that is inexcusable.

Being Connected

The other day my brother and I were discussing social networking (Facebook, MySpace, Twitter, email, and the like), and he made the observation that, apparently for most people, “It’s important that you’re in touch, not that you have anything important to say.” Or even that you have anything at all to say.

Twitter is, of course, unless I’m already outdated, the latest phenomenon, and it’s epidemic. But why? Messages are limited to something like 140 characters, enough to say, “Here I am in suburban metropolis, going to Vortex [or whatever]” or “At San Diego ComicCon, and Neil Gaiman’s here…” Why should anyone really care? And yet, they obviously do.

College campuses are filled with students, and more and more, they don’t talk to each other face to face. The moment a class lets out, most of them are on their cellphones — those who weren’t already texting under their desks in class — connecting to someone, and oblivious to anyone around them, so much so that students have been known to walk in front of oncoming cars… and not just occasionally, either. It’s not even remarkable when a high school girl receives something like 20 tweets/text messages in less than a half hour… or that none of them convey any information to speak of.

So… why are so many people working so frantically to “stay in touch,” especially given that it’s not that cheap? Since human beings come from simian stock, is this fad a form of “verbal grooming?” Or is it an attempt by the communicators to reassure themselves that they really do mean something to someone in a universe that we as humans have been forced to realize is so vast as to reduce even our entire solar system to comparative nothingness? Or perhaps an effort to fill some sort of emptiness with the sound of a familiar voice… or at least the letters texted by a friend?

It’s clear that I’m incredibly dated and old-fashioned, at least in the social communications sense, but I’d rather hear those words and voices in person. It’s not that I don’t have a cellphone, because I do. I just never carry it except when I travel. When I do travel, I use it to obtain information, such as directions to the bookstore I’m going to visit. Although I do know how to use a GPS and could certainly use an iPhone or a BlackBerry, I’ve no interest in putting my entire life on one of them, not after watching what happens to people when they lose them or break them… or even when they don’t, because they’re always checking them, as if their communications device happens to be more important than the people around them. Just what does that tell you about how they feel about you?

I even forgot the cellphone when I went to WorldCon in Montreal, and it wasn’t even close to a disaster. Getting information from a live person suits me fine, but, with the increasing depersonalization of commercial communications and the endless message menus, I wonder just how much longer that will be possible.

And yes, when I travel, I do call my wife to touch base — generally every night, not every five minutes. But that may be because we’re more connected in the ways that count.

The "Anti-Vaccine" Illusion

A lead story on AOL last week was “Teen Dies from Vaccine.” Farther down in the story was the “admission” that no definitive link had yet been established between the vaccine and the girl’s death and that over a million girls had already received the British vaccine against cervical cancer. In the United States over the past decade, if not longer, a growing number of parents have been keeping their children from receiving vaccines for fear that the children will suffer adverse side effects, ranging from autism to death.

The problem with both the news story and the parental reaction is that they represent the equivalent of medical know-nothingism: an unwillingness to understand, and a failure to comprehend, the magnitude of what vaccines have prevented over the years. Many of the vaccines are administered to prevent what we in western European-derived cultures would term “childhood diseases,” with a feeling that such diseases are mild and would be an inconvenience at worst. Unhappily, this is an illusion.

I’m old enough to remember classmates in leg braces and iron lungs as a result of polio, now prevented by a vaccine. My mother remembers classmates who died of whooping cough, and an acquaintance whose child was born severely handicapped because the mother caught the measles when she was pregnant. Now… those are anecdotes, although we tend to remember the anecdotes better than the statistics. The statistics are far grimmer, if less emotionally compelling. Even today, whooping cough [pertussis] kills 200,000 unvaccinated children annually, mainly in the third world [or 2 million in the past ten years], and, in 1934 alone, before the vaccine was widely administered in the U.S., more than 7,500 children died from it. Measles killed thousands of U.S. children every year prior to the adoption of the vaccine. The U.S. averaged 30,000 cases of diphtheria annually, with some 3,000 deaths each year.

Are these vaccines safe, though, the skeptics ask? For roughly 99.9% of the population, yes, but there is always a tiny, tiny fraction of those vaccinated who may suffer side effects, as with any medicine. The early version of the pertussis vaccine, for example, did have adverse side effects, often severe, for a minute fraction of children, including, I might add, one of my own daughters, but those who suffered such side effects were a minuscule fraction of those vaccinated, and in the U.S., that version of the vaccine is no longer used.

Despite years of overwhelming statistics and the reduction of death rates to the point where some diseases, such as smallpox, have been virtually eliminated, anti-vaccination advocates still proliferate, preying on the fears of those who understand neither science nor medicine. The plain fact is that, no matter how “safe” a medical procedure or medicine or vaccine is deemed to be, there will always be someone — one of a very few individuals — who will suffer an adverse reaction. By comparison, for every food ever developed, there is someone who is allergic to it — often fatally — but we don’t advocate that no one eat wheat because some people have gluten disorders, or peanuts because others might die from ingesting them.

The problem with the media highlighting isolated adverse effects or deaths from vaccines is that — given the anecdotal bent of the human brain and the fact that anecdotes affect us far more strongly than do verified facts and statistics — such reports create and have created a climate of opinion that suggests people’s children are “safer” if they’re not vaccinated. The lack of vaccine-generated resistance/immunity in a population then allows the return and spread of a disease, and, as I’ve noted above, such diseases aren’t anywhere near as “mild” as most people tend to believe. After all, measles is estimated to have wiped out more than half the Native American population, and it demonstrably decimated the Hawaiian population.

Mild childhood diseases? Nothing to worry about? Just worry about the vaccines. Think again.

Bookstore Insanity?

Amazon and other booksellers are offering enormous discounts on Dan Brown’s latest book, in some cases, according to the Wall Street Journal, selling it at as little as 52% of the list price. Now, I’m not privy to the inside pricing discounts, but I’ve been led to believe that the top discounts to the major book chains are “officially” set at 47% off list price, and promotional and shipping allowances can add another five percent to the margin of the large chain bookstores. If… if that’s so, then the profit margin on The Lost Symbol is slightly less than $2.00 per hardcover.

Now, bookstores won’t sell my books at a margin of less than roughly $6.00 a copy. So how can they possibly sell The Lost Symbol so cheaply in these times when book sales are lagging? According to all the trade press, they’re doing it in the hope that book buyers will buy lots of other books as well.

Well… maybe…

But consider the fact that The Da Vinci Code sold more than 43 million copies in hardcover in its first three years and that Random House held off issuing a U.S. paperback version for three years because the hardcover kept selling so well. If The Lost Symbol sells as well, and initial sales certainly suggest it might, even at the highly discounted initial sales price, the “profits” on the hardcover sales of just one book are likely to approach $100 million. Then, too, bookstores have this habit of increasing the “discount” price after several months, and certainly after a year, and these back-end hardcover sales help boost total profits.
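
For readers who want to check the arithmetic, here’s a minimal back-of-the-envelope sketch. The $29.95 list price and the exact discount terms are my assumptions for illustration [inferred from the percentages above], not inside information, so treat the output as a ballpark consistent with the per-copy and aggregate figures in the text, not as actual contract data.

```python
# Back-of-the-envelope check of the pricing figures above. The list price
# and the exact discount terms are assumptions, not actual contract data.

list_price = 29.95                     # assumed hardcover list price
chain_cost = list_price * (1 - 0.47)   # "official" top chain discount: 47% off list
chain_cost -= list_price * 0.05        # promotional/shipping allowances: ~5 more points
sale_price = list_price * 0.52         # deep-discount sale at 52% of list

margin_per_copy = sale_price - chain_cost
print(f"cost to chain:   ${chain_cost:.2f}")       # ~$14.38
print(f"sale price:      ${sale_price:.2f}")       # ~$15.57
print(f"margin per copy: ${margin_per_copy:.2f}")  # ~$1.20, i.e., under $2.00

# Aggregate, using the text's round figures: roughly $2 per copy on
# Da Vinci Code-scale sales of 43 million hardcovers.
copies = 43_000_000
print(f"total margin:    ${2.00 * copies / 1e6:.0f} million")  # ~$86 million
```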

One of the problems with this kind of pricing is that it has a tendency to hammer the less profitable stores or chains, such as Borders. When a large chain, such as Barnes and Noble, is profitable, then a book like The Lost Symbol merely adds to those profits, and B&N can price aggressively to maximize total sales. Just to remain competitive, however, a weaker chain, such as Borders, has to match the B&N price, and thus cannot price to gain a larger profit margin per unit sold. Since the chains have decided to compete primarily on pricing, and since Borders has bought into this [not that Borders has much choice at this point], Borders is simply hanging on, trying to keep from losing more market share. Since B&N has something like 300 more superstores than Borders, often in better locations, a blockbuster like The Lost Symbol may help Borders, but not nearly so much as it helps B&N — or even Walmart, which doesn’t even try to offer more than a token, limited book stock.

The other problem with this kind of pricing is that, overall, it reflects higher prices for hardcovers, because publishers tend to follow the “base prices” of the lead titles. Even if The Lost Symbol never sells at list price, all the other books of similar genre, size, and scope are likely to be priced within a dollar or two of the Brown book, and at most, they’ll be discounted at either 20% or 34%… and they might not even top out at the 528 pages of The Lost Symbol. This isn’t just an academic point, either, since there have been recent lawsuits over publishers’ discounting policies, particularly those involving the major chains and how they affect independent bookstores and smaller regional book chains.

Call the high discount on a blockbuster predatory… even short-sighted, but in terms of the competition it’s not insanity, and much as they’d like you to think so, it’s not even a loss leader. Lower-profit, but not a loss leader.

Unaffordable?

Lately, the health care debate has centered around the cost of health care insurance, and a number of commentators have made the judgment that the President’s or the Congressional plans are “unaffordable,” but what exactly do they mean? Oh, I know, the idea is that people don’t have the money to pay for health insurance, and the dictionary-derived definition of “unaffordable” is (1) “to have insufficient means for” or (2) “to be unable to meet the expense of.”

The problem with this analysis and these judgments is that “unaffordable” runs a range of personal definitions from “it’s physically and financially impossible” to “I’d rather spend the money somewhere else because paying for health insurance will really cut our/my lifestyle.”

There’s no doubt that there are millions of people in poverty who simply can’t afford any form of health insurance, but based on my observations and experience, there are also millions who choose to gamble with their health care costs for any number of reasons. The problem with this sort of gambling is that society is left with the choice of either (1) picking up the costs in one way or another [higher insurance premiums for those who pay, longer waits and less adequate care for everyone, or higher taxes on lots of someones] or (2) denying care to those who cannot pay, and letting people suffer or die. It’s politically quite clear that the second option is not feasible, at least not overtly.

Moreover, as health care costs continue to rise, and they will, given the remarkable advances in medical technology, insurance costs will also rise, and more and more individuals and families will be tempted to opt out of insurance as costs of care and insurance increase… because those costs will reduce the funds available for other goods and services.

Every day, my wife and I see this happening. I’ve mentioned how many students lack health insurance because their parents won’t pay for it, although most plans will cover students [if the parents will pay] through ages 21 to 25. The university discontinued its student plan because not enough students would opt for it. In many cases, the parents have incomes above the cut-offs mentioned in the plans now before the Congress, but they choose not to pay for health insurance. They take vacations, buy new cars, and many even have toys such as snowmobiles and ATVs. Their children also have cars and cell phones and don’t have any trouble eating out whenever they want. They do protest that they can’t afford sheet music and textbooks, but they do have all sorts of electronic gadgets.

But… many of these people are among those protesting the President’s push for health care reform. People are now screaming that requiring insurance will squeeze individuals and force small businesses to close if they’re required to come up with insurance for employees, and they’re furious about the idea that families who make more than $66,000 (or $88,000 in the other legislative proposal) will have to pay thousands in tax penalties if they don’t buy health insurance.

But who is supposed to pay for their health costs if something goes wrong… as it often does?

Let’s look at this in terms of a personal example. My wife and I are fairly healthy individuals, and for ten years after she took her position here at the university, we incurred relatively few major medical costs. Then, some eight years ago, we took a vacation to Yellowstone. We were walking, not even hiking, along a gentle slope, and she turned to take a picture. Somehow, she set her foot down wrong and slipped, just slightly, and snapped her ankle and leg in two places. She wasn’t carrying extra weight; she was in excellent physical condition; and she didn’t have osteoporosis. It was just a freak accident. A year later, after two operations, months in a wheelchair, and physical therapy, she was finally able to walk close to normally… and, of course, after more than $40,000 in medical bills. We were insured, although the co-pay wasn’t insignificant, but the total wasn’t even close to the cost of more major medical events, such as trauma care after severe auto accidents or cancer treatments. Exactly how many people have even $40,000 to spend on medical costs?

The total savings of the average 60-year-old male in the United States amount to something like $50,000, yet the size of the average house has doubled in the last generation, and just compare the size of the “average” American car or SUV to a car of the 1940s or early 1950s. Credit card bills have skyrocketed… but millions of Americans are furious that government is trying to force insurance coverage so that those already covered — or taxpayers — don’t have to pay more.

As I discussed earlier, medical cost savings are close to a red herring. The rate of cost increases may be held down, but total medical costs aren’t going to decrease — not unless we decide not to treat people or to treat them a lot less extensively.

The entire issue is about who’s going to pay for what… and how, and all the arguments avoid that basic issue. Those who are covered now don’t want their coverage costs to go up and their benefits to go down, and those who aren’t covered seem to want someone else to pay for their care. In some cases, particularly in cases of documented poverty, it’s clear that people need help, but it’s also clear that there are more than a few people out there who claim that health care insurance is “unaffordable” because they want a standard of care they don’t want to pay for, and they resent the possibility of being told that, one way or another, they’re going to have to pay the bill.

So… the questions remain: “unaffordable” for whom, and why do so many claim it is unaffordable, given the American standard of living?

No One Ever Praises Glue

The past year has been filled with argument and controversy, the latest examples being the violent disputes over health care reform and the outburst of South Carolina Congressman Joe Wilson — of “You Lie!” infamy.

We’re living in a time that’s becoming more and more of an “in-your-face” era, where a presumed right to say and do anything in any place is asserted more and more openly… and extolled as a societal virtue of sorts. This hasn’t happened overnight, of course, but the signs have been there. Some ten years ago, I was attending a community symphony performance of Handel’s Messiah. Unfortunately, a young man sitting in front of me kept talking during the singing. I tapped him on the shoulder and politely requested that he stop talking during the performance. He ignored me and, if anything, began to talk more loudly, as if the singers and I were intruding on his conversation. When I placed my hand on his shoulder, he became abusive and threatening for a moment… but he did stop talking — until after the concert, when he suggested that my behavior was unbelievable and that if I weren’t so much older, he’d have knocked my block off — except his language was far ruder than that. He was disturbing everyone in three rows… if not more… but my asking him to be polite was absolutely insufferable? We’d all come to hear the concert, not him.

We have students texting in classes, shooting each other in schools and on the streets, and their parents threatening lawsuits against teachers who attempt to maintain discipline. We have talk show hosts and now politicians reaching new lows in their language and demeanor while effectively inciting violence or violent reactions against those with whom they disagree.

Less and less are people working things out, and more and more they shout, demanding that their opponents accept “the truth.” Since each side has a “truth,” all the shouting does is widen the gaps. “Tell it like it is” only means “tell it like I see it.” While there’s nothing wrong with telling your side of the story, it’s only one side. Sometimes, it’s the “better” side. Sometimes, it’s not, but the unspoken assumption today is that when “I” speak, it’s the truth, while when “you” speak, you lie. And it’s far from persuasive when either side shouts the “truth.”

It used to be that what held groups together were small things, like manners, civility, a respect for the others as individuals, even when everyone’s views were not precisely the same. And there were people in those groups who tried to work out solutions on which most people could agree. And there was a recognition that resources were limited, and that not everyone could have everything.

These people, these manners and mannerisms, and these recognitions, were a form of glue, glue that held groups and societies together. The problem today is that everyone praises the individuals and the traits that divide society, and leadership seems to be defined by who shouts the loudest and in the most abusive manner, rather than by who tries to solve the problem. No one recognizes, let alone praises, the glue that once held us together.

How about a national day in praise of glue?

Common Sense

There’s a local primary election going on today where I live, and at least two of the candidates are running on a “common sense” platform. From what I can determine [and I know one of them fairly well], their approaches to civic government differ considerably, apart from their shared use of the term, but each is clear about the fact that he is the “common sense” candidate. But before I muddy the waters even more, I’d note that the dictionary definitions of “common sense” are “practical understanding” or “sound judgment.”

That said, after spending some twenty years in and around national politics, my instinctive reaction is to immediately distrust anyone who uses the term “common sense” in a political arena. The realistic translation of the term is more like: “Given my values, biases, background, and feelings, this is what makes sense to me.” The problem, of course, is that many of the rest of us may not share those values and feelings, and what is “common sense” to him or her may seem like anything but that to others.

Then, when you mix “common sense” with politics, unfortunately, the results often reflect anything but “sound judgment” on a larger scale. Why? Because politics requires compromise, and politicians tend to reflect the views of the majority of their constituencies, and those constituencies can and do have very different views. On the local level here, for example, the city council agreed to sell the condemned junior high school building to the university because renovating it would cost far more than building a totally new facility and because the empty building was surrounded on three sides by the university. On those grounds, the sale seemed to make sense… except that the sole municipal swimming pool — which was not condemned — was located on the property. The university demolished the condemned structure and replaced it with a parking lot until it could obtain the funding for a new theatre center [still pending with the state legislature], and leased the swimming pool back to the city for two years at a token fee.

The city council proposed to replace the swimming pool with a full-scale recreational center, including a better and larger pool, which seemed like a good idea to many, since there isn’t a public-access facility of that nature within fifty miles. One group in the community protested the spending of taxpayer funds at this time of financial difficulty as showing no common sense or fiscal restraint. Another group said that it was only common sense to have a recreational facility for a rapidly growing city — and to have a swimming pool to support the swimming programs at the two high schools, which have among the better swim teams in the state. A third group claimed it was only common sense to replace the pool with a better pool, but not to spend the money on a larger recreational center. One can cite “common sense” arguments for all three positions, but the debate ended up in a free-for-all requiring a ballot initiative on which proposal to adopt — which turned out to be, from what I can determine, a sort of compromise building that will be more than just a swimming center, but far from a full recreational center… and then, last week, the council revealed that they’d under-budgeted for the facility now under construction.

So much for common sense — and this was just about one building in one small city/large town.

The current national debates involve far greater costs and complexity, and incredibly involved trade-offs between costs and life-and-death situations, and when someone starts in on “common sense,” take a good hard look at just whose “common sense” viewpoint he or she is espousing, because common sense evaluations rest on who gains and who loses, and what costs are borne by whom, and who “gets” and who does without.

And I won’t even call that observation “common sense.”

Titles…

The other day I got an email from my editor telling me that the sales department didn’t much care for the title of the novel I’d just turned in. I called him back and asked him what the problem was. The sales types’ reaction was simple. The title was too much like that of a previous book of mine. Now… the two titles shared only one word, and there was a similarity and synonymy between the last word of the old title and the first word of the new title. Upon reflection, I could see their problem and went to work coming up with an alternative title — which I did, and which both editor and sales types accepted as “much better.”

Except… artistically, the title wasn’t much “better.” It will certainly be commercially better, and it won’t confuse booksellers and book buyers, and I’ll definitely be better off insofar as those concerns translate into higher sales.

Even though titles cannot be copyrighted, using the exact same title as a previously published book usually isn’t a good idea, for multiple reasons, but I did it once, unknowingly, with the Recluce book Colors of Chaos, only to find out, years later, that Bob Vardeman had published a book with the same title eleven years earlier. It didn’t seem to hurt my sales, and I hope it didn’t hurt his.

Besides avoiding being a copycat, there are more than a few reasons why the title brainchildren of authors may be changed. One, interestingly enough, is that certain terms can be trademarked, and in most cases, that trademark cannot be used without the consent of the trademark holder. At least one New York Times bestselling author has been required to change a title for that reason.

Another reason is length. No matter how perfect the title, it has to fit on the cover of the book, and preferably in a type size large enough to be readable from a distance. Some art directors are not terribly fond of the word “the” to begin a title, because they think it takes up unnecessary space without adding to the clarity of the title in the slightest. And, frankly, some of my titles, in retrospect, probably didn’t need the article. Some did. And at least one is far better without the article.

The original title of Archform: Beauty was Beauty⁵. Why was it changed? First, because the sales computers couldn’t handle exponents, and second, because sales types kept asking where the first four “Beauty” books were. Yes… that’s right. They apparently don’t teach exponents in sales.

And of course, sometimes a title is just plain bad for any one of a number of reasons. It may make perfect sense to the author, but not to anyone else, or it may be culturally limited. The original title of The Green Progression was the Russian word for “green.” That made sense to us, but not to anyone else. Unfortunately, even the title change didn’t help sales much. On the other hand, “Recluce” doesn’t translate into Swedish, not with the overtones the word has in English, and finally the Swedish translators — through the efforts of a Swedish acquaintance of mine, for whose perspicacity I am most grateful — changed “Recluce” to “Sarland.” I’m told this makes much better reading in Swedish, and I have to take their word for it, but since the Swedish publisher is still acquiring Recluce books, the sales evidence would seem to support that conclusion.

Now… I’ve had generally good experiences with Tor with regard to titles, but I understand other authors have not had entirely sanguine results with their publishers over titles, and I occasionally see titles on the shelves… and shudder, but that’s another matter entirely, since I’m clearly antiquarian in my thinking that titles should exhibit some modicum of taste… whether the title refers to a cookbook or a vampire novel.

The Post-Literate Society

Years ago, a friend who worked in the consulting field with me deplored the growing use of the computer mouse, which he still called a GUI [graphical user interface], as the first step toward a “post-literate” society. At the time, I thought he was over-reacting. Now… I’m not at all sure.

The College Board just released its latest statistics, and the SAT reading test scores for last year’s graduating seniors were the lowest since 1994. That choice of 1994 as a reference point is particularly interesting because, in 1995, the College Board “recentered” the SAT scale, which had been based on the average scores set in 1941. The practical effect of this “recentering” was to raise the median score by roughly 80 points. That means that last year’s reading scores might well be the worst in far longer than a mere fourteen years.

In addition, just a few weeks ago, the annual ACT results were released, and ACT officials noted that, according to the test results, only 25% of test takers, again graduating seniors, had the ability to handle college-level work.

Add to these data the fact that the number of young adults who read is down by over 40% from a generation earlier, and the fact that close to 40% of young adults obtaining advanced degrees have inadequate verbal and reading-analysis skills, and my friend’s suggestion that we are headed toward an electronic and post-literate society doesn’t look quite so far-fetched.

Why am I concerned? Besides the fact that fewer readers will result in fewer book sales?

Because:

  1. The ability to frame complex thoughts correctly is vital if we wish to retain a semblance of a representative government in a complicated and highly technological society, as is the ability to analyze what others have written and to sort out misinformation through understanding and logic, rather than through preconceptions and emotional reactions.
  2. There is a vast difference between emotional responses to an individual on a personal basis, where first impressions are often correct, and emotional responses to complex issues framed simplistically by talking heads and politicians.
  3. Perception and understanding are severely limited if one cannot read quickly and understand well, and those limitations make people more vulnerable to shysters, deceptive business practices, and clever politicians.
  4. Enormous parts of our culture and history will be lost, and most people will not even understand that they have suffered such a loss.
  5. History can be “changed” at will in all-electronic formats. Have people forgotten that Amazon just recently deleted two electronic books without anyone being able to stop it? What if they’d just altered the text? How many people would even notice? And… if the news is all graphic and auditory… then what?

As for the decline in book sales… well, it will likely be gradual enough that I won’t have to worry about it. The younger authors… that’s another question. Maybe they ought to consider graphic novels as a fall-back.

Symptoms of Decline?

Recent studies on brain functions and learning have determined that learning associated with increased brain function is largely dependent on three factors: concentration, difficulty, and leaving one’s “comfort zone.” The first makes perfect sense and certainly is nothing new or unanticipated. If you don’t concentrate on learning — whether facts, concepts, or new skills — you won’t learn them, plain and simple.

The second factor is a little trickier. If what you’re trying to learn is simple, you may learn it, but it won’t improve brain functioning. If it’s so difficult that you can’t even begin to understand, you won’t learn or improve brain functions, either. The optimum for learning and increasing brain function and neuron creation is trying to learn something that is very difficult for you, and at the edge of your ability, but still possible.

The third factor is that for actual learning to take place, you have to consider factors and facts that move you outside your “comfort zone,” possibly to consider other viewpoints or facts that you might otherwise reject and to examine them open-mindedly, and not with a view merely to dismiss or discredit them.

Now… what do these findings have to do with “decline,” as indicated in the blog title?

First… concentration. The growth of the computer and video culture has resulted in a generation that is having an increasingly difficult time concentrating on a single subject for any appreciable length of time. In addition, all too many schools, particularly in the lower grades, are pandering to this decreased attention span by switching subjects more frequently. Any subject — or book — that requires time and effort to master, particularly if not filled with action or gee-whiz amazing facts, is termed “boring.” Unfortunately, a great many basics of any culture and civilization could be termed boring, yet mastery of many is vital to maintaining civilization and technology. Perfection in engineering requires painstaking and often tedious work, but without it, equipment, bridges, highways, and buildings all can fail… with catastrophic results, as we have recently been reminded.

Second… difficulty. Because of the “every child is wonderful” syndrome permeating U.S. culture, there’s an increasing tendency to praise young students rather than to challenge them to the limits of their ability, and a tendency to limit challenges in the classroom on the grounds that they will hurt the “self-esteem” of less talented or less motivated students. While many private schools and some charter schools are not falling into this trap, all too many other schools are… and since future learning patterns are set by early learning patterns, all too many children are not only not learning, but they’re not learning how to learn.

Third… comfort zones. Our high-tech communication and learning systems are designed and operated to let people remain, as much as possible, inside their comfort zones. Pick only the friends you want. Talk to them on your cellphone, and ignore anyone else. Pick only the music you want, and isolate yourself with your earphones. Watch only the news that caters to your biases. Study at your own pace, never under pressure. This also translates, more and more, into behavior patterns where people listen less and less to those with whom they disagree, while becoming more and more intolerant of differences. We can see this playing out in our political discourse daily.

I’ve talked to scores of teachers over the past few years, from all over the country, and most of those I’ve talked to agree that, while students are certainly as intelligent as their parents, if not more so, a majority of them have difficulty learning anything that challenges them. They especially have difficulty transferring skills learned in one discipline to another, or even learning from their own mistakes in writing one paper and applying what they should have learned from those mistakes to the next paper.

At a time when we live in the most complex and high-tech societies in history, the ability to learn and to keep learning becomes more and more important, yet even as we have discovered what is necessary to enhance and improve that ability, as a society we’re turning away from the kind of education and discipline necessary. Almost fifty years ago, in The Joy Makers, James Gunn postulated a future society where everyone eventually retreated into their own comfortable self-reality bubble, blissfully unaware that the machines that maintained them would eventually fail, and unable to comprehend that, let alone develop the expertise to keep society going.

Is that where we’re headed?

Customer Service?

While I’ve often bristled, especially as an author, at the slogan “the customer is always right” [perhaps because I don’t think that fiction should be totally consumer-driven on all levels, and because I do think that authors should make efforts to elevate their readers’ understanding], there’s definitely more than a grain of truth to the adage. Even so, it invites a tremendous amount of hypocrisy in the business community.

In the previous blog, I noted how certain products often aren’t available because the re-sellers are actually selling shelf space, not the product per se. In this instance, customer service clearly takes a back seat to other considerations, i.e., maximizing profit rather than customer satisfaction. While I’m the first to understand that businesses that don’t make a profit won’t remain in operation long, I have trouble when they also talk about their commitment to the customer. The other day, my wife found a product that she really liked, from a company that had been sending her a similar type of product. She liked the new product much better, but when she called to change her monthly order, the company representative told her that they couldn’t change it, that the “new” product couldn’t be shipped under the old program. My wife’s reaction? She canceled the old order and now buys the new product from a local merchant. Her total spending on products from that company is now less, and she probably would have continued to buy more if the company had been more accommodating.

There’s another firm with a slogan along the lines of “we haven’t forgotten who keeps us in business.” I don’t patronize them very much any more — except when I absolutely have to — because they tack fees onto everything and at every turn. And I’m getting more and more irritated at the airlines for all their fees for everything. I often travel long distances, and given what I do and how I do it, it’s simply not possible to cram all the handouts, press packages, and clothes into a carry-on. So I pay to check bags, and I have to collect still more paper [receipts for the IRS, to document those expenses], paper that I’ve lost more than once, which costs some money over the course of the year. Then, there’s the boarding pass/baggage routing problem. Because of where I live, there are often considerable layovers, and the computers won’t issue a bag tag if too many hours elapse between first take-off and last take-off. That means I have to program extra time into things so that some overworked airline clerk can laboriously override the computer and make sure my baggage tags are printed for the right destination.

I could go on and on… with example after example, but the point behind all of this is that all too often the slogan or the idea of customer service comes far down the line of business priorities — and yet all too many companies tout it, some of which provide very little of either customer consideration or service. I understand that there are other business considerations, but if there are, I’d really appreciate it if companies in such a position weren’t so fawningly hypocritical… and I suspect I’m probably not the only one who feels that way.

But then, if the ad or the internet says that they really serve customers, it has to be true, doesn’t it?

About that "Gaffe"…?

To begin with, let me preface what follows with several disclaimers. First, I am a registered Republican and have been my entire adult life, even serving in the Reagan Administration. Second, I’m what one might call a “Teddy Roosevelt Republican” and more than a little disenchanted with and appalled by the current Republican “leadership” — which can be better described as “followership of the far right.” Third, I have six very professional daughters and an extremely successful professional wife.

All that said, I’m absolutely disgusted with the media and all the pundits who have hounded and pounced on Hillary Clinton because she bristled at an inquiry about what “President Clinton” thought and pointed out that she was the secretary of state, not her husband. Media talking head after media talking head has claimed that this was a gaffe, carelessness, a serious mistake, etc., and some have even blamed her for distracting from the great health care debate.

Telling the truth — a serious mistake? Are we still mired in the mindset of 1890 where a woman’s opinion means less than her husband’s? Where, when a woman points out the blatantly obvious, it’s a gaffe and a mistake? Where a woman is not allowed to show a certain irritation with such a question? Where an honest response is immediately attributed to being “over-tired”?

The media reaction demonstrates, once again, that even the so-called liberal media, who flaunt their liberalism and their supposed lack of bias, are still imbued with a “liberal” amount of male chauvinism, and some of those who exhibit it are unfortunately women. Yes, the “liberal media” tended to champion Barack Obama in the last election, but looking at history reveals another story. Black men received the right to vote — however hemmed in that right was by widespread prejudice, narrow-minded custom, and outright lawlessness — before women did, and the supposedly more liberal political party of the United States just one year ago decided that a black man was preferable to a white woman as the party nominee for president. The amount of criticism faced by now-Supreme Court Justice Sotomayor in her confirmation process emphasized the fact that she was a woman as much as the fact that she was Latina, although the feminine aspect was clouded by the almost invariable linking of “Latina” and “woman.”

Should it be surprising that women may not reach the same decisions under law as do men, even when they have the same education? We are all products of our backgrounds, genders, educations, and experience. While I don’t agree with all the decisions rendered by Justice Sotomayor, frankly, I don’t agree with all the decisions made by other Justices, either. That divergence of opinion is exactly why the Founding Fathers created a Supreme Court with nine members, not one, or a lesser number, so that differing views could indeed be factored into interpreting the law. [Note: I stand corrected. The original number of justices was six, and the number then varied from five to ten until 1869, when it was fixed at nine, although Franklin Roosevelt later tried to add more justices.] And why, exactly, are the decisions made by men automatically assumed to be correct? After all, it was nine men who once affirmed the “constitutional legality” of segregation in Plessy v. Ferguson, a decision that stood in error for almost sixty years, handed down by a Court that has had exactly two black jurists and three women in its entire history.

Both history and the events of the past few weeks point out, once again, just how deeply male chauvinism remains embedded in even the supposedly most “liberal” institutions in this, the self-proclaimed land of the free. And the fact that I seem to be one of the few pointing it out is even more depressing.

Another Failure of the Market System?

As at least one result of last year’s and this year’s financial meltdown, economists and politicians are back to debating the relative merits of “free markets” versus “regulated markets,” and everyone has a different idea of how much, if any, regulation is required for a given sub-market, i.e., securities, mortgages, housing, health care, etc.

One of the problems with these kinds of debates is that often the debaters aren’t actually debating what they think they are. What do I mean by this? I’ll give you a very prosaic example. On a Tuesday, at mid-day, I went into a food retail giant — Walmart. Among the items I was seeking were a particular brand of non-allergenic shaving gel and a particular variety of cat food. I’m particular about the shaving gel because I have sensitive skin, not that the brand that works best for me is more or less expensive; it’s priced the same as the others by that company, presumably because the base is the same and all that differs is the additives, or the lack thereof. The cat food is also standard, neither more nor less expensive than the others, and since my cats prefer it to all others, many of which they turn their noses up at, I buy that brand.

When I got to the shaving gel shelf, there were no cans of my variety. Every other variety — except the one I wanted — was stacked to overflowing. This is far from the first time this has happened. It’s so frequent that I usually buy two, and often pick up some when I don’t even need any. Needless to say, the same was true of the cat food… and that was nothing new, either. I’ve seen the exact same thing happen year after year with other items, as well as these products, in other grocery chains. Now… in a truly “rational” market, why would a retail seller have the shelves filled with items that don’t sell and continually sell out of those that do without restocking more frequently? For two reasons. First, in most grocery chains, we’re not talking about the sale of product, but the “lease” of shelf space to the manufacturer, who clearly puts a higher premium on trying to sell a wider range of products than on maximizing profit from a best-selling item. Second, customer product preferences often vary from store to store, or region to region, and many manufacturers clearly believe that maximizing sales of a given consumer product on a store-by-store or even a regional basis is less profitable than adopting a standard shelf-stocking model.

This has been a problem for F&SF sales in the big-box stores because, depending on locale, F&SF can be the largest-selling fiction category in a store… or the worst, and sometimes that depends on as little as whether the section manager, or even one employee, is enthusiastic about a given genre. But again, the primary consideration for some booksellers isn’t necessarily maximizing sales, but minimizing costs. Of course, if you don’t sell enough books, or anything else, minimizing costs merely prolongs the time before you have to declare bankruptcy — which has been one of the problems, in my opinion, facing Borders.

In terms of health care, similar questions arise. One question that many, many women raise is why so many health plans stint on things like birth control and preventive care while paying for erectile dysfunction drugs and expensive heart procedures for older white males. Is it because health plans are run largely by men with those priorities, or because there’s a wealthy segment of the health-care marketplace willing and able to pay for those services, albeit through generous insurance plans? Or are there other economic reasons?

The biggest reason for the housing and financial-services meltdown lay in the fact that there was a far greater profit margin — short-term, to be sure — in selling houses and mortgages to borderline buyers than in servicing honest and reliable homeowners.

All of this leads back to one question: Rational and profitable economic behavior for whom… and at what cost to everyone else?

Free… Oh Really?

For the past several years, I’ve been running across a mantra, or slogan, along the lines of “knowledge wants to be free.” This is complete bullshit. Knowledge isn’t an entity; it’s a compilation of data, information, insights, and the like. What the simplistic slogan really means is that people want knowledge, information, and entertainment to be free, and many, if not most, of them will pirate songs, stories, e-books, and the like under the excuse that those who create such works are already making exorbitant profits… or that it’s somehow their right to have such “knowledge” without paying for it. Now… we have a rationalization of this in book form.

A gentleman by the name of Chris Anderson recently released a book entitled Free, which I have not read, but which, according to the interviews and commentary I have read, makes the point that the internet is the marketing model of the future, where content is free because that’s what people want. I’ll agree with half of that. People always want good things for less than they cost, but a great deal of what’s free really isn’t. In fact, most of it isn’t. It’s paid for in other ways.

Take this blog. Whoever reads it gets the contents without charge, but it didn’t come for nothing. Tor paid for the design and pays for the servers on which it is hosted, as well as for the technical people who put up the artwork and book covers. I write the text, questions, schedules, and news, and no one pays me. The hope is, of course, that both Tor and I will be repaid by readers who go out and buy more books. But free, in the sense of costing nothing, it’s not.

Mr. Anderson also apparently believes that whatever appears on the web should be free and that whoever creates it should profit, as some musical groups do, from ticket sales for live events and from merchandise. This may be fine if one has other merchandise to sell, but if one’s livelihood comes from people buying intellectual property, one has to limit what one provides for free. I can provide economic, political, and fiction-related insights here for free because I have fictional “merchandise” to sell through online and bricks-and-mortar bookstores. Other writers, I have to admit, are far better at this than I am. But what of editorial writers? What will happen to that profession if news goes entirely online for “free”? Or musicians and songwriters? We’re already seeing a dwindling of truly professional smaller musical groups, the kind that could actually grub out a living by touring small clubs across the nation. In fact, I recently read that some clubs are now actually charging the musicians, rather than paying them. Is this because something like 90% of the “recorded” music out there is either “free” or pirated? Or because the smaller groups can’t effectively use the “free” aspect of the internet to promote money-generating concerts that would repay the costs of providing those “free” services? In a related vein, my wife, the singer and opera professor, has noted that the cost of sheet music has skyrocketed because singers and students, who can copy it easily, are buying far less of it… and consequently the music for more and more songs and operas is going out of print, because sales of the less popular ones won’t cover even the printing costs.

In addition to these questions, there’s another one, and to me, it’s far more troubling. It’s the idea that worthwhile services — whether insights, music, or entertainment — should be marketed as “free,” because they’re not. They’re paid for indirectly and in other ways, either by advertisers or by the sale of other goods and services, and often the user/consumer has no way of knowing who or what is behind anything. Some “free” providers are very up-front, as am I in offering this blog to interest readers in my books. But how many people know how many hundreds of millions of dollars Google has poured into YouTube? Or even who all the other providers of “free” stuff happen to be, and what their agendas might be?

To me, the disguised “free” content idea is just another way in which social institutions end up separating responsibility and accountability from making money. The concept of “free” is also intellectually dishonest… but… all that “stuff” is free, and that excuses everything… doesn’t it?

The Popularity of “More of the Same”

The fact that I was once an economic market research analyst still plagues me, because it’s become clear that I ask questions about writing that probably are better left unasked, at least in public forums. But then, when I was an economic analyst, I also asked those questions, and they were part of the reason I didn’t remain an employed analyst. As the most junior economist in the company, you don’t question the vice-president of marketing’s brand-new and very expensive product, no matter how bad an idea it is, or express doubt about fancy economic models, not if you want to keep your job — because if you do, you won’t be working at most companies long enough for posterity to prove you correct. I wasn’t the first economic type to learn this firsthand, and I was far from the last. More than a few analysts and economists did in fact question the long-term effects of derivatives, and most of those who questioned were not exactly rewarded. A few were fortunate enough to be ignored; the rest fared worse.

With that as background, I’m going to observe that the vast majority of the most commercially successful authors write “more of the same.” By this I mean, for example, that while the events in subsequent books may change, the feel and structure of each “new” book tends to mirror closely the feel of previous books. I’m not saying that all authors do this, by any means, just that a large percentage of those who sell millions of copies of their books do. This practice, from what I can tell, emerged first in the mystery/thriller field, followed closely by what I’d call the “high glamour” novel from writers such as Danielle Steel, Judith Krantz, Sidney Sheldon, and others, but now it seems to be everywhere.

Some authors [or their agents] are so sensitive to the commercial aspects of “more of the same” that the author uses a different pen name when writing something even slightly different. So Nora Roberts also writes as J.D. Robb, and by noting that she is writing as J.D. Robb, she gets to cash in on her fame as Nora Roberts while announcing to readers that the J.D. Robb books are a different “more of the same.” In F&SF, Dave Wolverton became David Farland to write fantasy, and perhaps also to make clear that he wasn’t writing Star Wars books about Princess Leia, Jedi apprentices, and the like.

Who knows? Maybe I should have adopted a pen name, say Exton Land, for all my fantasies when I started writing them and saved the L.E. Modesitt, Jr., moniker for my science fiction. But then, which name would I have used for the “Ghosts of Columbia” books? And The Hammer of Darkness really isn’t either one. By that strict logic, to maximize commercial success, I shouldn’t have written any of those… or even The Lord-Protector’s Daughter, because it has a “different” feel.

And in some ways, I may be in the worst of both worlds, because the Recluce books have enough of a similar feel that I’m often criticized for being formulaic there, but I’m clearly not formulaic enough to replicate the success of Harry Potter or The Wheel of Time, etc.

At the same time, when I do something different, such as in Archform:Beauty or Haze, those readers who were expecting a faster book, such as Flash or The Parafaith War, feel that I haven’t met their expectations.

Then again, at least I’m not totally captive to “more of the same.” That would be almost as bad as having been successful as an industrial economist.

The Opening of Communications Technology and the Shrinking of Perspective

Over the past few years, there’s been a great deal of enthusiasm about the internet and how it’s likely to revolutionize the world, and almost all of the commentators express optimism.

The Economist recently reported on a study of the internet’s effects, which concluded that the extent and range of internet users’ contacts had become more limited, both geographically and culturally, as internet usage grew. This certainly parallels the growth of “niche” interest sites and the “Facebook” effect, where like gathers to like.

In effect, if these trends continue, and if the study is correct [the authors caution that it is only preliminary and a proxy for a far wider and more detailed effort], the internet is creating a voluntary form of self-segregation. What’s rather amusing, in a macabre way, is that when Huxley, in Brave New World, postulated the segregation of society by ability and by the programming of inclination, the government was the evil overlord pressing this societal division upon the population as a means of indirect and effective repression and social control. Now it appears that a significant percentage of internet users are enthusiastically and voluntarily doing the same thing to themselves.

A similar trend is also occurring as a result of the proliferation of satellite and cable television, where programming is broken into a multiplicity of “viewpoint-orientations,” to the point that viewers can even select the slant and orientation of the news they receive. This is having a growing impact as the numbers and percentages of Americans who read newspapers continue to decline.

At the same time, we’ve seen a growing polarization in the American political system, combined with a disturbing trend in the government away from political and practical compromise and toward increasingly strident ideological “purity,” along with the growth and vehemence of “public” and other interest groups.

Somehow, all this open communication doesn’t seem to be opening people’s viewpoints or their understanding of others, but rather allowing them greater choice in avoiding dealing with — and even attacking — the diversity in society and the world. Wasn’t it supposed to be the other way around?

"Reality" and Literary Quality in Mainstream and Genre Fiction

One of the canards about genre fiction, especially science fiction and fantasy, is that it’s not “real” or realistic. But what, exactly, is “real” or “reality”? Is the definition of “real” a setting or set of experiences that the reader would encounter in the normal course of his or her life? Is a “real” protagonist one who is similar to most people?

Even in mainstream fiction, the most memorable characters are anything but normal. Let’s face it: there’s nothing dramatic about the life of an honest, hard-working machinist, accountant, salesman, or retail clerk who does a good job and has a solid family life… and, consequently, no one writes that kind of story, except, perhaps rarely, as a dystopia. Most readers want to read about such people only when they’re faced with a great challenge or disaster that they can surmount, and by definition that makes the characters less “normal.” Readers generally don’t like to read about average people who fail; they do like to read about the failures of the “superior” people, the golden boys and girls. And just what percentage of readers actually live in multimillion-dollar houses or penthouses or drive Bentleys or the equivalent? That kind of life-style is as removed from most readers as the backdrop of most fantasy or science fiction, if not more so.

One of the great advantages of science fiction and fantasy is that it can explore what happens to more “average” or “normal” people when they’re faced with extraordinary circumstances. That’s certainly not all F&SF does, nor should it be, but what all too many of the American “literary” types fail to recognize is that a great deal of what is considered literary or mainstream verges on either the pedestrian or the English-speaking equivalent of watered-down “magic realism.”

After the issue of “realism” comes the question of how one defines “literary.” Compared to F&SF, exactly what is more “literary” about a psychiatrist who falls in love with his patient [Tender is the Night], dysfunctional Southern families [Faulkner], the idiocy of modern upscale New Yorkers [Bright Lights, Big City], or any number of other “mainstream” books?

When one asks American literary theorists that question, and I have, the immediate response is something along the lines of, “It’s the writing.” I don’t have any problem with that answer. It’s a good answer. The problem is that they don’t apply the same criterion to F&SF. Rather than looking at the genre — any genre, in fact — and picking out the outstanding examples, as they do with their own “genre” [and mainstream fiction is indeed a genre], they dismiss “genre” writers as a whole because of the stereotypes, instead of examining and accepting the best of each genre. Yet they’d be outraged if someone applied the stereotype of “parochial” or “limited” to mainstream fiction.

Interestingly enough, the permanent secretary of the Swedish Academy — which awards the Nobel Prize for literature — just last year issued what amounted to that sort of dismissal of American mainstream fiction, essentially calling it parochial and narcissistically self-referential. Since then, I haven’t seen a word of response or refutation from American literary types, but perhaps that’s because all the refutations are just being circulated within the American “literary” community.

Maybe I’m just as parochial in looking at F&SF, but I see a considerable range of literary styles, themes, and approaches within the field, and, intriguingly enough, I also see more and more “mainstream” writers “borrowing” [if not outright stealing] themes and approaches. That does tend to suggest that some of the artificial “genre” barriers are weakening, even while some of the very same writers doing the borrowing insist that they don’t write SF.

Of course, the remaining problem is that the book publishing and selling industry really loves those genre labels as a marketing tool… and so do some readers… but that’s another issue that I’ve addressed before and probably will again. In the meantime, we need to realize that F&SF is not “an ineluctably minor genre,” as one too-self-important, if noted, writer put it, but a vital component of literature [yes, literature]. Eventually, everyone else will realize it, too, at least those who can actually think.

Romances and F&SF

Last week, a reader made the comment that “Most literature professors would dismiss Mr. Modesitt’s novels with the same contempt he probably reserves for Harlequin romances.” While I can’t argue with his evaluation of “most literature professors,” even though I spent several years teaching literature at the undergraduate level, I can and do dispute the assessment of my views on romances, Harlequin or otherwise. Having survived the adolescence and maturing of six daughters, who now tend to prefer F&SF, I have seen more than a handful of romances around the house over the years. I’ve even read a few of them, and I’m no stranger to including romance in at least some of my books.

Because of my own contempt for those literary types, whether professors or writers, who sniff down their noses at all forms of “genre” fiction, I’m not about to do the same to romances… or thrillers, or mysteries. I do allow myself some disgust at splatter-punk and at the pornography of violence and/or human plumbing [otherwise known as ultra-graphic sex], but that doesn’t mean some of it might not be technically well-written. Snobbery and blanket exclusion under the guise of “excellence” or “literary value” are just another form of bias, usually on the part of people who haven’t bothered to look deeply into other genres or forms.

While more than a few “sophisticates” and others dismiss romances as formulaic, that’s just a cop-out. Just about every novel ever published is formulaic. If novels weren’t, they’d be unreadable. The only “formulaic” question about a work of fiction is which formula it follows.

Romances happen to have some redeeming features, features often lacking in mainstream “literary” fiction: a belief in love and romance, optimistic endings, and often some sort of retribution for evil. There’s often a theme of self-improvement as well. Are these “realistic” in our world today? No, unhappily, they’re probably not, but, to paraphrase one of the grumpy old uncles in Secondhand Lions, there are some things, which may not even be true, that people are better off for believing in, such as love, honor, duty… And if romances get readers to believe in the value of such traits, they’re doing a lot more for readers and society than “realistic” novels about greed on Wall Street or the narcissism of the wealthy or the depths of violence and degradation in the drug and criminal cultures.

From a practical point of view as an author, I also can’t help but note that romances are the largest-selling category of fiction by a wide margin. Nothing else comes close. As in every other form of writing, there are exceedingly well-written and even “literary quality” romances, and there are abysmal examples, but as Theodore Sturgeon said decades ago, “ninety percent of everything written is crap.” That includes F&SF, romances, and even, or especially, mainstream “literary” fiction.

So… no, I don’t dismiss romances. Far from it. And I just write my romances as part of my science fiction and fantasy.