The Weaker Sex… Revisited

Years ago, James Tiptree, Jr. [Alice Sheldon] wrote a novella entitled “Houston, Houston, Do You Read?” in which present-day male astronauts were catapulted into a future where there are no men. The implication of the story is that, despite their greater physical strength, men were indeed the weaker sex and perished, albeit with a little “help.” Some recent analyses of educational achievement by gender suggest that Sheldon just might have been on to something.

Over the past few years, sociologists, psychologists, teachers, and even politicians have been raising an increasing number of questions about the gender and educational implications of a high-tech society. Three generations ago, women students were a clear minority in higher education and in almost all “professional” occupations. Today, female students comprise the majority of undergraduate college students and graduates, with a nationwide ratio running in excess of 56% to 44%, a ratio that is becoming more unbalanced every year. So many more women are attending college that on many campuses, particularly at elite universities and liberal arts colleges, they’re being subjected to another form of discrimination. In order to keep a gender balance, many schools effectively require female students to meet higher academic standards than male students.

A recent report [Gender Equity in Higher Education: 2006] found that in college men spent more time watching television, playing video games, and partying, while women had better grades, held more leadership posts, and claimed far more honors and awards.

The trend of women seeking professional education is also continuing in many graduate fields, such as law and medicine, where women outnumber men at many institutions.

In more and more state universities, women outnumber men, in some cases composing sixty percent of the student body. Even at Harvard, the latest freshman class has more women than men. The only areas where men are numerically dominant are in the hard sciences and in engineering, but even there, a greater and greater percentage of science and engineering students are female. Not only that, but in the vast majority of institutions, the majority of awards and honors are going to women. Now, admittedly, this female expertise hasn’t yet managed to reach and shatter the “glass ceiling” prevalent in the upper reaches of the faculty in higher education or in corporate America, but it’s coming, one way or another. There are female CEOs, but many women are simply choosing to create their own businesses, rather than play the “good old boy game.” Others become consultants.

Another factor that I’ve noted in my occasional university teaching, and one also noted by various family members, three of whom are professors at different universities, is that a decreasing percentage of college-age men are inclined to apply themselves to anything that requires more than minimal effort, whether physical or intellectual. This is a disturbing trend for society in a world where education and intellectual ability have become increasingly important, both to hold society together and to achieve a comfortable lifestyle suited to personal satisfaction and raising children. Even more disturbing is that this gender-based educational disparity becomes greater and greater as the income of the parents decreases. In short, men from disadvantaged homes are often only half as likely to get a college degree as women from the same cultural and economic background.

Both my wife [who is a full professor] and I have watched male students turn down full-tuition scholarships because the requirements were “too hard,” while talented women literally had to claw their way through the same program.

Do these trends represent a failure of our educational system… or is it that too many men can’t really compete with women once the playing field starts to get more level? Or that men need special treatment to achieve on an equal basis? After all, the real hunters are the lionesses, not the “noble” lion.

Reading… Again

In the headlines recently have been more stories about how little Americans read. According to an AP-IPSOS study, twenty-seven percent of all adults in the United States have not read a book in the past year. The remainder — those who claimed to have read at least one book over the past 12 months — averaged seven books. According to another earlier Gallup poll, some 57% of Americans had not read a book [besides the Bible] in the previous year.

I’m not troubled by the fact that there are those who haven’t read any books. In any society, there are people who just aren’t readers. But I am troubled by the numbers and the way they fall out.

I wasn’t surprised that the readers of fantasy and science fiction comprised less than 5% of all readers. Nor was I exactly astounded to discover that, over the past 15 years, book-reading percentages are down for the 18-25 age group, from close to 90% to less than 60%. More than half of all frequent readers are over age 50, and more than 55% of all books are purchased by those over 50. The highest concentrations of readers are among those who are older and college-educated.

Yet book sales are up. Exactly what does that mean? I’m reminded of a study done for the National Opera Association several years ago. Sales of opera tickets were up, and everyone was pleased until they looked closely at the numbers — which showed that while the number of tickets sold was up, the actual number of patrons was down, and that the average age of patrons was increasing.

The statistics on book reading seem to be following a similar pattern, and for years now, various pundits and social scientists have been worried that Americans are losing their reading skills – and that a smaller and smaller percentage of Americans are highly literate. Yet the U.S. economy still dominates the world stage, and, despite the difficulties in the Middle East, our military machine has no equal — even in situations where we have to deal with sectarian civil wars. So what’s the problem?

The problem is information processing. To make intelligent decisions, human beings need information. They can obtain that information in one of three ways: direct personal experience, listening and watching, or reading. The first two means, while often necessary, share one basic problem. They’re slow, and the information flow is very restricted. Even slow readers generally can process written information several times faster than auditory information, and they can always refer back to it. That’s one reason, often forgotten, why stable civilizations did not emerge until written languages developed. The invention of the printing press in Europe provided a tremendous informational advantage to western European civilization, which, until that time, had lagged behind the Asiatic cultures, particularly the Chinese. The Chinese culture effectively used an elaborate written pictograph-based character language to restrict social and political access to a comparatively minute fraction of the population, which resulted in a tremendous information gap once western cultures combined alphabet-based languages with widespread use of the printing press, and, in time, in the comparative decline of Chinese power and influence.

In its own fashion, an auditory-visual media culture limits and shapes information flow, first by selectively choosing what information to promulgate and second by tying that information to visual images. Now, immediately, someone will question this by pointing out the multiplicity of media outlets and the different media channels. There are hundreds of cable and satellite channels; there are thousands of blogs and web-sites. How can I claim this is limiting? First, on every single cable and satellite station, the information flow is effectively limited to less than one hundred words a minute. That’s the top rate at which most people can process auditory input, and most facts have to be put in words. Second, while the internet remains primarily text-based, the vast majority of internet users are limited to what best might be called “common access” — and that is extremely limited in factual content. If you don’t believe me, just search for Mozart or Einstein or anything. In most cases, you’ll find hundreds, if not thousands, of references, but… you’ll seldom find anything beyond a certain “depth.” Oh… I’m not saying it’s not there. If you’re a university student, or a professor using a university library computer, or if you want to pay hundreds or thousands of dollars in access fees, or if you live in a very affluent community with integrated library data-bases, you can find a great deal… but Joe or Josephine on the home computer can’t.

In reality, the vast majority of internet users circulate and re-circulate a relatively small working data-base… and one which contains far less “real” information than a very small college library, if that.

Then add to that the fact that close to 60% of college graduates, according to a Department of Education study published last year, are at best marginally literate in dealing with and understanding information written at the level of a standard newspaper editorial.

That leads to the next question: Why does all this matter?

I’d submit that it matters because we live in the most highly technical age in human history, where no issue is simple, and where understanding and in-depth knowledge are the keys to our future… and possibly whether we as a species have a future. Yet the proliferation of an auditory-visual media culture is effectively limiting the ability of people, particularly the younger generations, to obtain and process the amount of information necessary for good decision-making and replacing those necessary reading and researching skills with simplistic audio-visuals and an extremely limited common informational data-base. This makes for a great profit for all the media outlets, but not necessarily for a well-informed citizenry.

Like it or not, there isn’t a substitute for reading widely and well, not if we wish what we have developed as western semi-representative governments to continue. Oh… some form of “civilization” will continue, but it’s far more likely to resemble a westernized version of the pre-printing press Chinese society, with a comparatively small elite trained in true thought and mental information processing, all the while with the media and communications systems types enabling “sound-bite” politicians with simplistic slogans while trumpeting freedom of expression and banking greater and greater profits.

Come to think of it… I wrote a story about that. It’s entitled “News Clips from the NYC Ruins.” If you didn’t read it in The Leading Edge, you can read it in my forthcoming story collection — Viewpoints Critical — out next March from Tor.

The Anecdotal Species?

A recent article in the Economist suggested that the U.S. space program was headed further astray by concentrating on manned missions when so much more knowledge could be obtained at a lower cost from instrumented unmanned missions. After reading that, my first reaction was to disagree — on the grounds that unmanned missions keep getting their funding cut because they’re “just” research, and research always tends to get cut first in any consensus budgeting process, either in the corporate world or in government. In addition, I had the feeling that most people don’t identify with space cameras, comet probes, asteroid probes, near-earth orbit surveys, and the like, despite the cries and protests from many in the scientific community that “science missions,” as opposed to astronaut missions, are being under-funded, even though they provide far more information per dollar spent.

But then, ever since the Soviet space program collapsed, much of the impetus for U.S. space development seems to have collapsed as well, whether for manned or unmanned projects. Only when Americans perceive an immediate and real threat do we seem able to press for research in technological or military areas.

As I considered these points, I began also to reflect upon the time I spent at the U.S. EPA, when there was a great furor over the possible contamination from leaking Superfund sites. Now, there was no question that a considerable number of abandoned waste sites were in fact leaking and contaminating the environment near those locations, and public opinion was clear and decisive. Fixing Superfund sites was top priority. Somewhat later on, the Agency looked further into environmental priorities, and issued several reports. The gist of the findings was that, in general, the public’s priorities for environmental improvement were almost inversely related to the real dangers to people and health. The actual illnesses and deaths from leaking Superfund sites were far lower than those from at least five other major environmental issues. How could this be? It happened, I believe, because we are an anecdotal and egocentric species. Those dangers we see and hear personally, those we can understand easily, and those which can be related to us personally by those we know and trust — these are the dangers we believe in. When chemically-caused cancer occurs in a local community because of groundwater contamination, we react. But when the EPA or a state health agency notes that fatalities are rising from exposure to natural radon or skin cancer caused by the thinning of the ozone layer, we don’t. When health agencies point out that smoking is a far greater health hazard than any of the environmental concerns, such notice has a comparatively small effect. Even when the massive damage claims arrive from increased hurricane activities, we tend not to put as much personal priority on looking into why hurricanes seem to be more of a problem — and we just want someone else to pay for the repairs.

We all want our problems solved first. Then, and only then, will we devote resources to other people’s difficulties. This is a practical and natural approach from a Darwinian and historical point of view. If we don’t solve our problems first, we and our children may not be around to solve anyone else’s problems. But what happens when a non-immediate problem could become a far-larger problem threatening us and everyone else?

This difficulty isn’t a new problem in American society, and it’s not a problem confined to the U.S. Prior to roughly 1938, no one wanted to consider the implications of what Stalin was doing in the USSR, Hitler in Germany, or Mussolini in Italy. No one in the “western” world paid all that much attention to the Japanese “rape of Nanking” and what it foreshadowed. Today, because the area has no oil and no strategic import, and because few Americans have seen or experienced the brutality and continual death, most Americans don’t pay much real attention to the genocide in Darfur.

This same mindset applies to the exploration of space — or the issues surrounding global warming. If something doesn’t pose an imminent danger or have an immediate entertainment or economic value… one that can be exploited quickly, why bother?

Then… add one more complicating factor. In neither space exploration nor global warming do we truly have a certain solution. While we’ve reached the point where it appears that a majority of the knowledgeable scientific community believes that there is some form of global warming occurring, there is no strong consensus on what might be called a workable strategy. What one group calls workable is called punitive by another. Reducing carbon emissions is one possibility, but that will penalize third world and developing nations disproportionately, if carried out. Unilateral action by industrial nations means their citizens bear higher costs. Reducing greenhouse gases is another possible approach, but that cost falls more heavily on the high-tech economies. Requiring more fuel-efficient cars increases costs and decreases choices more for those who require cars to get to their jobs… And so it goes.

The bottom-line question might well be: Can a species that’s been hard-wired over a million years to be short-term, personally/familially-oriented, and anecdotal cope with problems that require long-term planning and wide-spread consensus?

Belief?

Believing in something does not make it true. Disbelieving in something does not mean that it cannot exist. Admittedly, on the quantum level, the act of observing often changes or fixes what is, but so far, at least, the question is not whether a particle or wave or photon exists, but in what form and exactly where.

The problem in human behavior is that belief has consequences, often deadly ones. I cannot imagine that a Supreme Being, should one exist, could possibly care whether the correct prophet happened to be the son or the nephew, or whatever, of the old prophet. Nor do I think that it is at all rational that rigid belief in one set of rituals about a God will give one worshipper eternal favor while rigid belief in another set of rituals about that same God will damn a different worshipper eternally. And I have great difficulty in thinking that any deity will grant one eternal and blissful life for slaughtering those who believe differently, particularly those who have done nothing to offend one except not to share the same beliefs.

All that said, in human affairs, it doesn’t matter much whether I or others have difficulty understanding why people would care about such differences passionately enough to kill in an attempt to force their beliefs on those who would choose to believe differently — or not to believe in a deity at all. The fact is that, both now and throughout history, millions upon millions of people have been killed over beliefs, not just religious beliefs, but political beliefs, cultural beliefs, and even economic beliefs.

Yet there is no true proof behind these beliefs, especially religious beliefs. Oh, there are books, and testimonies, and prophets, and visions, and unexplained phenomena, but true proof, in the scientific sense, is missing. Even for some well-accepted political beliefs, solid verifiable proof of their efficacy is scant or lacking altogether.

Science, at least in theory, is supposed to test propositions and to verify them. We apply such methodology to every physical aspect of life in modern society, yet there is no comparable test for “belief.” All one has to do is to say, “I believe.”

And so, despite astronomical, atomic, chemical, and geologic evidence that the universe is nearly 14 billion years old, there are those who choose to believe that it was created far more recently than that. Despite a long fossil record documenting evolution, creationists cite gaps and faults in the fossil chronology, yet dismiss the counter-argument that there is no physical record at all suggesting “instant” divine creation. Nor is there any true physical evidence suggesting an afterlife.

So… what’s the problem with belief? Everyone has some belief or another.

Beliefs have consequences, and not just the obvious ones. Take the widely held belief in some form of an afterlife, a belief held by close to seventy percent of all Americans and eighty percent of Americans over 50, according to recent surveys. What does that mean? One of the greatest dangers of this commonly held belief is that it allows cruelty in the name of all manner of rationales. After all, if there is a supreme deity, if there is an afterlife, well… all those folks who got slaughtered have another chance to repent and redeem themselves. It’s not like it’s forever.

But… what if it is? What if one life is all anyone gets? There’s lots of belief about eternal life, but there’s no proof, not scientific proof. We want all sorts of tests about whether our food is safe to eat, whether our cars are safe to drive, whether our water is pure, whether our air is clean. Yet, we believe in an afterlife without a single shred of scientific proof. Are there two standards in life? Those for the physical world, where everything must be proven and proven again, where lawsuits abound over the tiniest discrepancies… and those for beliefs, where, regardless of the consequences, we accept something totally unproven?

Is that because we can’t face, and don’t want to face, the truly toxic aspect of belief in an afterlife — that it allows us all sorts of justifications for cruelty, for oppression, for suppression? If the life we have now is the only one we will ever have, and if we accept that, could we live with all that we have done to others?

Then, too, the truly terrifying possibility is that we could, and that the results would be worse, far worse. Does that mean that belief in unproven deities is preferable to the alternative? If so, what does that say about us as a species?

Thoughts on Reader Commentaries

Like many authors, I do read at least some of the posts and commentaries about my work, not so much for ego-surfing, because one nasty comment wounds more than a score of positive ones heal, but to see what the reactions [if any] to what I wrote have been. After many years, certain patterns have become obvious.

First, a number of readers believe that whatever my protagonists say and do is what I believe. So do I believe in pre-emptive action [as do Jimjoy Wright, Nathaniel Firstborne Whaler, and Gerswin], in semi-preemptive action [a la Lerris, Lorn, Trystin, or Van Albert], or in reaction in massive force [Ecktor deJanes, Anna, or Secca]?

Because different protagonists react in different fashions, I find that this occasionally engenders one of two reactions from readers. The first reaction is that I am being inconsistent. The second reaction, which is far more common, comes when the reader fixates on a particular type of hero or behavior and ignores all the others. For example, many readers believe that I only write coming-of-age stories about young heroes. But even in the Recluce Saga, of the fourteen books published [or about to be published], exactly half deal with “coming of age.” None of the Spellsong Cycle novels use that approach, and only one of the Corean Chronicles is really a coming-of-age tale. Almost none of my science fiction novels deal with “coming of age” themes. By these figures, less than twenty percent of my work is true “coming of age” work.

Then there is the charge that I write the “same” book, over and over. To this charge, I plead “partly guilty,” in that there are common sub-themes in every book I write: the hero or heroine learns something and accomplishes something and there’s some form of romantic interest. I’m not terribly interested in writing books where the protagonist learns nothing and/or accomplishes nothing. In practice, a protagonist either learns or doesn’t learn, accomplishes something or doesn’t. Now, in the James Bond books, and in many of the endless series with the same cast of characters, a great deal of action takes place, but when it’s all over, what exactly has happened? Isn’t the norm that one set of disposable characters has been killed or jailed, or been made love to and discarded, only to be replaced by another set for the next episode? Has the real structure of the world or the political system changed — or has the scenery just been replaced, so to speak, and made ready for another series of adrenaline-filled [or lust-filled or whatever-filled] adventures?

Nor am I interested in writing nihilistic or “black” fiction. Years ago, in my closest approach to the dark side, I did write one classical tragedy in three volumes, and sales of the third volume plummeted. Interestingly enough, now that The Forever Hero has been reprinted in a single fat trade paperback, it has continued to sell modestly… but reader reaction has been more than a little mixed. Even so, I seldom write books with unabashedly “everything is well” endings. Most of what I write has what I’d call “bittersweet” endings, those where the protagonists achieve their goals, but end up paying more, if not far more, than they dreamed possible. I’ve also discovered that, because I often don’t make that explicit, a number of readers don’t always catch the darkness veiled within the ending.

In a larger sense, however, ALL writers write the same books over and over. As Heinlein pointed out over 35 years ago, there are only a handful of plots, presented in many guises, but limited in “formula,” if you will, to those basic plots.

Oh… and then there’s the reader reaction to the food. More than a few times, there have been questions and comments about why my books have so many scenes where the characters eat. With those comments and questions have come observations about the food, ranging from why it’s so simple in some books to why it’s so elaborate in others. Why the meal scenes? Because, especially in low-tech societies, meals are about the only opportunity for conversations and decisions involving more than two people. As for the fare served, I try to make it appropriate to the time and culture, as well as to the economic status of those at the table.

Finally, as exemplified by the reaction of some few readers to my comments and amplifications on why most readers don’t like or aren’t interested in F&SF, there are always those who believe that, by what I have written, I am attacking their most cherished beliefs, and that because I am, I’m clearly an idiot. By this standard, I suspect all of us are idiots to someone, and writers more so because writers who continue to be published have to say something, and something will always offend someone. My personal belief is that a writer who offends no one usually has little to offer.

Most professional writers do offend someone, and in that sense, you as readers can judge authors not only by their supporters and friends, but also by those who dislike them, although I would also suggest, based on experience, that most readers who dislike an author cannot be impartial in evaluating their dislike. Why? Most writers published by major publishing houses produce an acceptable technical product [even if editors must proof and correct it in some cases]. So when someone claims to dislike a writer because his work is “badly written,” “excessively verbose,” “so spare you can’t follow the action,” “filled with cliches,” and the like, all too often this sort of criticism veils a deeper dislike within the reader, one based more upon conflicting values than upon the writer’s technical deficiencies. Now, I am far from claiming that we as writers do not make technical mistakes or that we do not occasionally manifest such deficiencies, but any writer who has glaring technical deficiencies, as cited by some readers, will not get book after book published. In the end, most criticism reflects as much, if not more, about the critic as about the author.