Archive for the ‘General’ Category

The Illusion of Knowledge

Recently, I’ve read more and more on both sides of the “debate” about whether the internet/world-wide-web is a “good” thing.  One ardent advocate dragged out the old “Greek” argument that even writing was “bad” because memory would atrophy… and, of course, look how far we’ve come from the time of the Greeks, how much knowledge we’ve amassed since then.

And… in a cultural and societal sense, that accumulation of knowledge has, in fact, occurred, but I’m not so certain that we don’t now stand at the edge of a precipice, where, if we choose incorrectly as a society, we will slide down the slippery slope into ignorance and anarchy, if not worse. Some people already believe we’ve started to slide so far that we’ll never recover.  While I’m not that pessimistic, not yet, at least, I would like to point out a fatal flaw in the idea that technology results in a more knowledgeable society.

To begin with, let us consider the very meaning of “knowledge.” Various dictionary definitions begin with: (1) a product of understanding acquired through experience, practical ability, or skill, and (2) deep and extensive learning.  The key terms here are understanding and learning.  The problem with the web and electronic technology in general is that most users fail to understand that access to information or facts is not at all the same as understanding those facts, their use, or, especially, their significance.  True understanding is impossible without a personally learned internal database.  Being able to net-search things is not the same as knowing them, and very few individuals can retain facts they look up unless they have a personal internal knowledge base to which they can relate such facts.

All too many educational “reformers” either tend to equate the learning of specific, often unrelated facts, processes, and discrete skills with education or knowledge, or, at the other extreme, they emphasize “process” and inter-relations without ever requiring students to learn basic structures and facts.  Put another way, information access is not knowing or knowledge, nor is the learning of processes and systems ungrounded in hard facts. Both the understanding of processes and systems and a personal integrated factual “database” are necessary for an individual to be educated and knowledgeable, and far too few graduates today possess both.

The often-too-maligned educational system of the early and mid-twentieth century had a laudable objective:  to give students the basic knowledge of their society and the basic skills needed to survive and prosper in that society.  Did it often fail?  It did, and in many places, and far too frequently.  But that didn’t mean that the objective was wrong; it meant that all too often the techniques and means used were not suited to various types of students.

What followed that system is certainly no better, and possibly much worse. When something like 40% of high school graduates cannot explain against whom the American Revolution was fought and why it was important, those students cannot be classed as knowledgeable.  Nor can the 60% who cannot write coherent complex sentences or understand them be considered educated.

A culture that exalts the ability to use technology over the ability to understand it, and over the ability to explain even what society is, why it exists, and what forms of government benefit whom and why, is in deep trouble.  So is one where the process of accessing information is elevated over understanding what that information means and how to use it. That, by the way, is also known as thinking.

And yet, every day, and in every way, our society is encouraging an ever-increasing percentage of our young people to communicate, communicate, communicate with less and less real knowledge… and without even truly understanding how little they know about the basis and structure of the world in which they live.

And concerning knowledge… that is the greatest illusion of all.

They Did It All by Themselves [Part II]

Several weeks ago, an article appeared in the local newspaper, an interview with the new artistic director of the Utah Shakespeare Festival.  He’s a product of the local university, where he learned his craft from, among others, Fred Adams, the legendary professor who established the Festival and ran it for decades [the Festival has won, among other honors, a Tony for being one of the best regional theatres in the United States].  The new director is an accomplished and effective actor, and there’s no doubt about that.  But what bothered me about the interview was that not a single word appeared about those who mentored, taught, inspired, and hired him, including Fred Adams.  Everything was about the new director, his talents, and his aspirations.  I can’t honestly say whether this was because he never mentioned those who had helped him every step of the way or because the interviewer left any such remarks out of the final story.

In some ways, it doesn’t matter, because, as the story ran, it’s all too symbolic of American culture today.  No one owes anything to anyone.  In fact, it’s even worse than that. Part of this change lies in an attitude that everything important exists only in the here and now, a change in what was once a core American value.  Southern Utah University, for example, exists only because, more than a century ago, a handful of local citizens mortgaged everything they had to come up with the funds to build the first building of the school – the building being required by the state legislature.  They did so because they felt it would offer a better future to their children and their community.  None of them ever received any financial reward, and their act is largely buried in history… remembered only by a few older residents of the town and some university faculty.

Another symptom of this change in public attitudes is reflected in the content of those largely useless student evaluations.  As a senior faculty member, my wife serves on the committee that reviews tenure and promotion applications for faculty. Since faculty members are now required to include all student evaluations and comments, she sees the comments from students across all disciplines in the university, and what is so incredibly disheartening is that there is virtually no real appreciation for professors at any level. The overwhelming majority of the comments – even those about professors who have demonstrated incredible teaching effectiveness and who have gone out of their way to help students for years – deal with complaints, often insanely petty.

Part of this trend may be because all too many students don’t seem to know what’s important.  One student praised a professor because he once brought in soft drinks for the class!  Another faculty member was praised for bringing donuts. Exactly what does this have to do with education? Over the years, my wife and other members of her department have done such quiet deeds as paying student medical bills out of their own pockets, creating student scholarships with their own funds, personally helping students financially, and offering hundreds of unpaid hours of additional instruction – the list is endless.  Once, say fifteen years ago, students seemed to appreciate such efforts.  Today, they complain if faculty members don’t smile when the students perform [yes… this actually happened.  Twice!].

In the interests of full disclosure, as the saying goes, I probably haven’t offered enough gratitude to those who helped me – but I have offered it, in speeches, in book dedications, and in interviews… and I didn’t forget them, visiting and writing them over the years.  And certainly there are notable exceptions, some very public.  One noted Broadway singer and actress, in giving a concert last week, on several occasions paid clearly heartfelt tribute to her undergraduate singing teacher.  The problem is that these are exceptions… and they’re becoming more and more infrequent every year.

Isaac Newton famously said that he had accomplished so much because he stood “on the shoulders of Giants,” but all of us owe debts to those who preceded us.  We didn’t do it alone, and far too many people who should know this fail, time and time again, to recognize it, to appreciate it, and to acknowledge it, both privately and publicly.

No… I’m Not Theologically Challenged… Just Directionally Impaired

A little over a week ago, I did something – unintentionally – that I truly wish I could undo, and for which I’m very sorry. I was taking my morning walk with the over-energetic Aussie-Saluki when a car pulled over, and a strange man with a delightful and precise English accent asked me for directions to the Catholic Church.  I was glad to oblige, and promptly stated, “Just go to the end of the road; turn right, and it’s two blocks up.”

Simple enough.

Except… I’m one of those people who have no innate sense of left and right.  I’m not directionally impaired in the sense of getting lost; I almost always know where I am, and have enough sense [acquired painfully from my wife] to ask directions when I don’t.  I’m also very good at providing written directions to others.  But when I’m caught off-guard, as I was that morning, with my thoughts more on other matters, from the plotting of the newest book to what a beautiful morning it was, I often speak before fully considering my words.

The road used to end where I meant for him to turn, but it hasn’t for more than a year, since the city extended it a half-mile downhill through a winding canyon to meet up with the Cross Hollows Parkway.  Instead, there’s only a stop sign there now.  And… as a result of my left-right confusion, I told the English gentleman to turn right, rather than the correct direction, which was left.

But I didn’t catch my mistake in time.  About twenty seconds after he drove off, I realized what I had done and started waving and running after the car, the Aussie-Saluki delighted that we were running.  Alas, he never looked back… and by the time I got home and went looking for him in the car… he was nowhere to be found.  I just hope he found the right church.

Now… the other side of the story is that, equally inadvertently, the directions I had given him were precisely correct in taking him to the nearest LDS Stake [church], if almost three-quarters of a mile away, rather than the four blocks to the Catholic Church.

So… either way, I’m regarded as a directional idiot, as theologically challenged in not even knowing which church was which, or as determined to steer the poor man away from his church of choice to another faith [even though I’m not a member of either faith].

As so many people have probably said at one time or another, I just wish I’d thought through what I said a little more carefully… And because I never knew who he was, this is the only apology I can offer.

You Don’t Get What You Don’t Pay For

The other day I came across a series of articles, seemingly unrelated – except they weren’t.  The first was about why Vietnam is now producing perhaps the majority of great young chess players in the world.  The second was a news report on the Gina Bachauer International Artists Piano Competition in Salt Lake City, and the third was a table of the average salaries of U.S. university professors by area of specialty.

The Vietnamese are producing chess champions and prodigies, it seems, because [gasp!] they pay them.  Gifted young players are paid from $300 to $500 a month to learn and play chess, and the best get all expenses paid to play in tournaments world-wide.  These are substantial incentives in a country where the average monthly family earnings are around $100.  Of course, American teenagers spend more than that monthly on what the Vietnamese would likely consider luxuries, and in the United States young chess players must count on the support of family or charitable organizations… and despite being one of the largest and most prosperous nations in the world, we have comparatively few international-class chess masters.

The finals of the Gina Bachauer Piano Competition were held in Salt Lake City last week, and of the eight finalists, one was Russian, one was Ukrainian, and the other six were Asian. This pattern has been ongoing for close to a decade, if not longer.  We haven’t produced a true giant in piano performance in decades, but then, the top prize is a mere $30,000, hardly worth it for Americans, apparently, not when it takes 15-plus years of study and hours upon hours of daily practice – all for a career in which the top-flight pianists generally make less money than whoever is 150th on the PGA money list.

All this might just tie in to the salaries of university professors.  The three areas in which university professors’ salaries are the lowest are, respectively, from the bottom: theology/religion; performing and visual arts; and English.

I’m cynical, I know, but I don’t think that this is coincidental.  In the United States, mainstream religions [which generally require intensive theological training of their clergy] are losing members left and right.  The highest-paid performing and visual artists are those who can provide the most spectacular show, not the most technically sound performance, and most “professional” pop singers could not even match the training or technical ability of the average graduate student in voice, but technical ability doesn’t matter, just popularity, as witness American Idol.  As for English, when 60% of all college graduates aren’t fully technically competent in their own language, this does suggest a lack of interest.

The other factor common to these areas is that the average semi-educated American believes that he or she knows as much as anyone about religion, singing, dancing, acting, and English.  And that’s reflected both in what professors are paid and in what experts in those fields are paid. The problem is that popular perceptions aren’t always right, regardless of all the mantras about the “wisdom of the crowd.”  The highest paid professors – and professionals – in the United States today are in the field of business and finance.  That’s right – those quant geniuses who brought us all the greatest financial melt-down since the Great Depression, not to mention the “Flash Crash” of a month or so ago, when technical glitches resulted in the largest and fastest one-day decline in the market ever.  Oh… and just as a matter of national pride, if you will, why do professors of foreign languages get paid 8-10% more than professors of English? Especially when the mastery of English is at a decades-low point?

More to the point, it’s not just about singers, writers, and English professors, but about all of society.  We may complain about the financiers and their excesses, but we still allow those excesses.  We may talk about the importance of teachers, police, firefighters, and others who hold society together, but we don’t truly support them where it counts.

As a society, we may not always get what we pay for, but you can bet we won’t get what we don’t pay for.

Everyone’s Wonderful! [Part II and Counting]

I noted some time back that the scholar Jacques Barzun had documented in his book From Dawn to Decadence what he believed was the decline of western culture and civilization and predicted its eventual fall.  One of his key indicators was the elevation of credentials and the devaluation of achievement. Along these lines, the June 27th edition of The New York Times [brought to my attention by an alert reader] carried an article noting the emergence and recognition of multiple high school valedictorians. One high school had 94, and another even had 100!

While many factors have contributed to this kind of absurdity, two factors stand out: (1) rampant grade inflation based on an unwillingness of educators and parents to apply stringent standards that measure true achievement and (2) a society-wide unwillingness to recognize that true excellence is rare – except perhaps in professional sports.

So many problems arise from this tendency to over-praise and over-reward the younger generation that I can’t possibly go into all of them in a blog.  But I do want to address some of those of greater import, not necessarily in order of societal impact, but as I see them.  The first is that, beyond high school and certainly beyond college, there can’t be multiple “winners.”  There will be only one position at the hospital for a new surgeon, one or two vacancies for new teachers each year at the local school, or a handful at most.  Graduate schools take only a limited number of applicants from the overall pool, and they do make choices.  Sometimes, the choices or the grounds on which they’re made may not be fair, just as a bad grade in freshman PE may keep a high school student from becoming valedictorian [if only one is chosen, the way it used to be], but the plain fact is that, in life, economics and need limit what is available, and students need to learn that not everyone gets to be top dog, even if the differences between the contenders seem minuscule.

Second, by recognizing multiple students as “valedictorians,” schools and parents are both devaluing the honor and simultaneously over-emphasizing it as a credential.  As a result, more and more colleges are ignoring whether students are “valedictorians” and relying on other factors, such as, perhaps regrettably, standardized test scores.

Third, like it or not, as former President Jimmy Carter once stated [and for which he was roundly criticized], “Life isn’t fair.”  It may not be “fair” that one teacher somewhere in the past didn’t like this or that student’s performance and gave them an A- rather than an A, and that kept them from being valedictorian.  It’s not “fair” that Ivy League schools now require better grades from their female applicants than from their male applicants because more female students work harder and the schools don’t want to overbalance their student bodies with women.  Unfortunately, what society can do in “legislating” fairness is limited; it cannot produce anything close to absolute fairness in real terms.  All society can do is set legal parameters to prohibit the worst cases.  We, as individuals, then have to do our best to act fairly and learn to work around or live with the instances where “life isn’t fair,” because it isn’t and never will be.

Fourth, frankly, in cases of similar or identical grades, other factors should be weighed.  They certainly are in all other occupational situations in life, because they have to be. When there are limited spaces, decisions will be made to determine who gets the position.  Not observing this practical factor in high school is just another aspect of giving students an inflated view of their own “specialness,” or, if you will, the continuation of the “trophies for everyone” philosophy.

But… is anyone listening?  Apparently not, because there’s more and more grade inflation, more and more valedictorians, and more and more emphasis on how “wonderful” every student is.

Marketing F&SF

Recently, Brad Torgersen made a lengthy comment about why he believed that F&SF, and particularly science fiction, needs to “popularize” itself, because the older “target market” is… well… old and getting older, and the younger readers tend to come to SF through such venues as media tie-in novels, graphic novels, and “popular” fiction.  While he’s absolutely correct in the sense that any vital genre has to attract new readers in order to continue, he unfortunately is under one major misapprehension – that publishers can “market” fiction the way Harley-Davidson marketed motorcycles.  Like it or not, publishing – and readers – don’t work quite that way.

Don’t get me wrong.  There’s quite a bit of successful marketing in the field, but one reason why there are always opportunities for new authors is that it’s very rare that a publisher can actually “create” a successful book or author.  I know of one such case, and it was enabled by a smart publisher and a fluke set of circumstances that occurred exactly once in the last two decades.  Historically, and practically, what happens far more often is that, of all the new authors published, one or two, if that, each year appeal widely or, if you will, popularly.  Once that happens, a savvy publisher immediately brings all possible marketing tools and expertise to publicize and expand that reading base and highlight what makes that author’s work popular.

In short, there has to be a larger than “usual” reader base to begin with, and the work in question has to be “popularizable.”  I do have, I think it’s fair to say, such a reader base, but, barring some strange circumstances, that reader base isn’t likely to expand wildly into the millions, because what and how I write require a certain amount of thought for the fullest appreciation, and the readers who flock to each new multi-million-selling novelistic sensation are looking primarily for (1) entertainment, (2) a world with the same characters that they can identify with for years, or (3) a “fast” read – preferably all three, but certainly two out of three.

All this doesn’t mean that publishers can’t do more to expand their readership, but it does mean that such expansion has to begin by considering and publishing books that are likely to appeal to readers beyond the traditional audience, without alienating the majority of those traditional readers. In fact, one way that publishers have been trying to reach beyond the existing audience is by putting out more and more “supernatural” fantasy dealing with vampires, werewolves, and more explicit sexual content.  The problem with this approach is that, first, such books tend not to appeal to those who like science fiction and/or tech-oriented publications, and they also tend to alienate a significant percentage of older readers – as opposed to, as Brad pointed out, media tie-in novels, which appeal across a wider range of ages and backgrounds. Another problem is that writing science fiction, as opposed to fantasy, takes more and more technical experience and education, and fewer and fewer writers have that background.  That’s one reason why SF media tie-in novels are easier to write – most of the technical trappings have been worked out, one way or another.

I don’t have an easy answer, except to say that trying to expand readership by extending the series of authors with “popular” appeal or by copying or trying to latch on to the current fads has limited effectiveness. Personally, I tend to believe that just looking for good books, whether or not they fit into current popularity fads, is the best remedy, but that may just be a reflection of my views and mark me as “dated.”

In any case, Brad has pointed out a real problem facing science fiction, in particular, and one that needs more insight and investigation by editors and publishers in the field.

The Vanishing/Vanished Midlist?

Several weeks ago, I attended a science fiction convention where the guest of honor was a writer who spent some 20 years as what one might call a “high mid-list author,” someone able to work full-time as a writer and pay the bills.  Except… several years ago, this came to an end for the writer.  Oh… the writer in question still publishes two books a year, but they aren’t selling as well as earlier books, although those who read the books claim they’re as good, if not better, than the earlier work, and making ends meet now requires additional outside work as a consultant and educator.  To make matters worse, at least from my point of view, this writer produces work that is more than mere entertainment and mental cotton-candy.

Interestingly enough, more and more of the books cited by “critical” reviewers in the F&SF field [with whom I have, as most know, certain “concerns”] seem to come from smaller presses.  This is creating, I believe, an almost vicious cycle in F&SF publishing. The more the books praised by reviewers come from small presses, the more larger publishers get the message that “good” or “edgy” or “thoughtful” books don’t sell as well, and the greater the almost subconscious pressure to opt for “fiction-fun” or “fiction-light.”  To their credit, certain publishers, including mine, thankfully, are resisting this trend, but I’m still seeing more of those novels that are gaming and media tie-ins or endless series.  And yes, the Recluce Saga is long, but… as I keep pointing out, no character has more than two books.  I don’t have eight or ten or fifteen books endlessly spinning improbable stories and extensions about the same character or characters.

With the drastic changes in wholesale distribution over the past decade or so, virtually no mid-list books receive such distribution, except perhaps lower-selling titles of big-name authors.  As a result of these trends, the midlists of at least some large publishers that were once the home of “thoughtful” books are shrinking. Some such midlist writers have found homes with the smaller presses, but small press distribution systems often are not as extensive. That has resulted in lower sales for the authors who wrote those books, and lower sales mean lower incomes, and either cutting back on writing or holding down other jobs… or… trying to reinvent oneself with another form of “fiction-light.”

I’ve heard many who believe that e-book sales can help here, but the sales figures I’ve seen suggest that e-books do more for those books that have high sales levels and wide distribution in hardcover and paperback – and those aren’t the midlist books.

It almost appears that midlist F&SF titles are going to become a ghetto within the genre… and that concerns me.  It’s certainly affecting all authors, but particularly those who once wrote good midlist books and made a living at it… and now can’t.

Electronic Free-Loading… and Worse

Even with spam “protection,” the amount of junk email that my wife and I receive is astronomical – less than one in fifty emails is legitimate.  The rest are spam and solicitations.  Now I’m getting close to a hundred attempted “spam” comments on the website daily, all of them with embedded links to sell or promote something. That’s just one facet of the problem.  Another facet is the continual proliferation of attempts at phishing and identity theft.  It makes one want to ask – have there always been so many people trying to make a buck, rupee, ruble, Euro, or whatever by freeloading or preying on others?

I know that con artists have been around since the beginning of history, but never have such numbers been so obvious and so intrusive to so many.  Is this the inevitable result of an electronic technology that makes theft, fraud, and blatant self-promotion at the expense and effort of others a matter of keyboarding at a distance?  At one time, these types of offenses had to be carried out in person and embodied a certain amount of risk and a probability of detection and usually criminal punishment.  Now that they can be accomplished via virtually untraceable [for practical purposes] computer/internet access, they’ve proliferated to the point where virtually every computer connected to the net runs the risk of some sort of loss or damage – a form of computer Russian roulette.

But what I find most disheartening about this is that so many people, once the risk and criminal penalty factors were so dramatically reduced by technology, set out to exploit and fleece others.  Even those of us not yet fleeced or exploited have to expend time, effort, and money on additional software to deal with these intrusions.  I have to sort through the potential comments quarantined by the system several times a day, because a few are legitimate and deserve to be posted, and I still have to take time to delete all the unwanted email.  I have to pay for protective software, and so forth.  In effect, every computer user is being taxed in terms of time, money, and risk by this radical expansion of the unscrupulous.

Now… those who are extreme technophiles will claim that the downsides of our technologically based communications/computing systems are negligible… or at least that the benefits far outweigh the downsides.  But the problem here is that most of the benefits, especially in terms of costs, go to large institutions and the unscrupulous, while the downsides fall on the rest of us.  I don’t see, for example, that the internet enables more good writers; it enables writers who are better self-promoters, and some good writers are, and a great many aren’t.  In trying to evaluate honestly what I do on the net, I suspect that my internet presence is similar to treading water.  I’m not losing much ground to the blatant self-promoters, but for all the effort it requires, I’m not gaining either, and it’s time spent when I can’t be writing.  Yet if I don’t do it, my sales will suffer – especially given, I have to admit after looking at recent sales figures [and yes, some of you were right], the recent spurt in the growth of e-books.

I don’t see that the internet is that useful in enabling small businesses, because there are so many, and the effort and ingenuity required to attract customers are considerable, but it certainly allows large ones to contact everyone.  And it certainly allows every variety of cyber-criminal potential access to a huge variety of victims with almost no chance of being detected, let alone prosecuted and punished.  The idea of privacy has become almost laughable, even for those of us who don’t patronize social networking sites.

Cynical as I may be, my hopes have always been that technology would be employed to enable the best to be better, and the rest to improve who and what they are.  Yet… I have this nagging feeling that, more and more, technology, particularly communications technology, is dragging down far more people than it is improving, especially ethically… and, even if it isn’t, it’s creating a tremendous diversion of time from actual productive work.  That diversion may be worthwhile in manufacturing-based industries, but it’s a definite negative force in areas such as writing and other creative efforts.  In a society that is becoming ever more dependent on technology, unless matters change, this foreshadows a future in which marketing and hype become ever more present and dominant, even as the technophiles are claiming communications technology makes life better and better.

Better and better for whom?  And what?

Fantasy… Should Be Fun?

The other day, when reading a blogger’s review of The Soprano Sorceress, I came across an interesting question, clearly meant to be rhetorical – what point was there to reading a fantasy if the reader didn’t like the fantasy world created by the author?  It’s a good question, but not necessarily in the way the reviewer meant, because his attitude was more one of wanting to avoid reading about worlds he didn’t like, particularly since he also asked what fun there was in reading about such a world.

Yet… I have to confess that there are authors I probably won’t read again because I don’t care that much for their worlds, just as there are authors I won’t read again because I don’t care for their characters.  In particular, I don’t care for characters who make mistakes and errors that would prove fatal in any “realistic” world situation, yet who survive for book after book [I presume, because the series continues, even if I’m no longer reading them].  Obviously, those kinds of books have great appeal, because millions upon millions of them sell, and maybe that’s the “fun” in reading them.

But there’s a distinction between “good” and “fun,” and often one between “entertaining” and “thought-provoking,” and there are readers who prefer each type, although sales figures suggest that “fun” and “entertaining” are the categories that tend to outsell others significantly, often by orders of magnitude.

The question the blogger reviewer asked, however, holds within it an assumption that all too many of us have – that “our” view is the only reasonable way of looking at a particular book… and that, I think, is why I tend to be reluctant in reading reviews, either those considered “professional” or those less so, because the vast majority of reviewers start from the unconscious presupposition that theirs is the only “reasonable” way of looking at a given book.  The more “professional” the reviewer is, the less likely this presupposition is to occur, but there are still well-known reviewers and review publications that fall regularly into this mind-set.  The problem lies not only in the expectations of the reviewer, but also in the knowledge base – or the lack of knowledge – that the reviewer possesses.  A novel that uses allusions heavily to disclose character will seem shallow to the reader or reviewer who does not understand those referents.  A reader unfamiliar with various “sub-cultures,” such as the corporate or legal worlds, politics, the military, or academia, is likely to miss many subtleties of the type where explanation would destroy the effect.  Because of this “sub-culture” blindness, certain books, or parts of certain books, tend to be less entertaining – or even boring – to those unfamiliar with the subculture, whereas a reader who understands those subcultures may be smiling or even howling with laughter.

As a side note, despite the impression that some bloggers have apparently gained from this site, I do read blog reviews of my work and that of other authors on a continuing basis, if sometimes reluctantly.  Why reluctantly?  Because it’s more often painful than not.  As a writer, for me such blogs often raise the question of why the reader didn’t understand certain matters that appear so obvious to me.  Could I have done something better, or was the matter presented well and the reader didn’t get it?  Half and half?  Such questions and second-guessing, I feel, are necessary if any writer wants to improve, no matter how long he or she has been writing… but I suspect any author who claims the process is enjoyable or entertaining is either lying or a closet masochist.  As part of being a professional, an author should know, I personally believe, the range of reactions to his or her work, as well as the reasons behind those reactions, but, please, let’s not have commentators suggest that we’re somehow outdated, out of touch, or unreasonable when we suggest that the process isn’t always as pleasurable to us as it apparently is to those who take great delight in complaining about what they perceive as deficiencies in what we write.  Sometimes, indeed, the deficiencies are the writer’s, but many times the deficiencies lie in the reviewer, and where the deficiencies may lie, or even if there are such deficiencies, isn’t always obvious to most readers of either blog or professional reviews… or even of professional blog reviews.

Sometimes… Just Sometimes… We Get It Right

Way back in 1958, in the so-called “Golden Age” of science fiction, Jack Vance wrote a book called The Languages of Pao, in which he postulated that language drastically affects human thought patterns and, thus, the entire structure of a culture or civilization.  A more scholarly statement of this is the linguistic relativity principle, otherwise known as the Sapir-Whorf hypothesis, of which there are two versions.  The strong version states that language limits and determines cognitive categories; the weaker version merely suggests that language influences thought and certain non-linguistic behaviors.  The Sapir-Whorf hypothesis was thought to be discredited by color-related experiments in the 1960s, because researchers found that language differentials did not seem to affect color perception or usage.

Recent studies of human brain patterns and linguistic development, reported in the June 1st edition of New Scientist, strongly suggest, first, that there is not, as previously thought, a genetically-determined “universal” human instinct/hard-wired pattern for language common to all human beings, but that languages are in fact learned and used in often totally different ways by those speaking different tongues.  Thus, as speculated by Vance, languages do in fact shape not only the way we think, but the very way in which we see the world.  And, as occasionally happens, though not so often as we science fiction writers would like to think or claim, one of us has actually anticipated a fundamental discovery, and one that has profound implications for human civilization, implications that I don’t think most people have fully considered.

If this research is accurate, then, for example, intractable cultural differences may well lie in the linguistic patterns of a culture.  A language that offers many ways in which to accurately express the same concept or thought would likely promote more openness of thought than a language in which there is literally only one correct way in which that thought can be expressed.  A language/culture that allows rapid linguistic innovation may promote change and development… but it might well have the downside of undermining standards, because standards, as represented by language, are not seen as fixed or immutable.  We already know that words expressing concepts such as “freedom” or “equality” do not “translate” into exactly the same meanings in different cultures, and this research offers insights into why the differences go beyond mere semantics.

These possibilities have certainly been considered in human history, if only instinctively or subconsciously.  For centuries, the Roman Catholic church resisted the translation of the Bible into any other language, insisting it be read and taught only in Latin.  Since 1635, with a few years in abeyance during the French Revolution, L’Academie Francaise has policed usage and linguistic development in France, attempting to restrict or eliminate the use of Frenchified Anglicisms.  And languages do affect other aspects of human behavior.  Recent studies have shown that speakers of tonally-inflected languages have far, far higher rates of perfect pitch than do speakers of languages that are not tonally inflected.  Not entirely coincidentally, it seems to me, cultures speaking such languages also appear to produce more successful classical musicians.

A more disturbing aspect of the research is the possibility that linguistic differences may well create cultural “understanding” divides that are difficult, if not impossible, to bridge, simply because the languages create antithetical patterns of thought, so that a speaker of one language literally cannot comprehend emotionally the concepts and values behind the words of a speaker of another language.  The initial research suggests that the magnitude of variances in linguistic learning patterns ranges from very slight to quite significant… and it will be interesting to see if such differences can ever be quantified.  But it does appear that speaking another language goes far beyond the words.

And a science fiction writer pointed out the cultural implications and ramifications for societies first.

Pressing the Limits

As both individuals and as a species, human beings have always had a tendency to press the limits, both of their societies and their technologies.  This tendency has good points and bad points… good because without it we as a species wouldn’t have developed, and life would still be in the “natural state,” or “nasty, brutish, and short,” the pithy observation made by the philosopher Thomas Hobbes in Leviathan.  The “bad” side of pressing the limits has been minimized, because the advantages have been so much greater over time than the drawbacks.

Except… the costs and the consequences of pushing technology to the limit may now in some cases be reaching the point where they outweigh the overall benefits, and not just in military areas.

The latest and most dramatic evidence of this change is, of course, the current Gulf of Mexico oil rig explosion and the subsequent oil blowout.  Deep-sea drilling and production platforms are required to have in place redundant blow-out protectors… as did the BP rig.  But the blow-out protector failed.  Such failures are exceedingly rare; repeated tests show that the devices work over 99% of the time, although something like 60 have failed in tests of the equipment.  The Gulf oil disaster just happens to be one of the few times it’s happened in actuality, and it represents the largest such failure in terms of crude oil released.  What’s being overlooked, except by the environmentalists, who, so far as I can tell, are operating more on a dislike of off-shore drilling than on a reasoned technical analysis, is the fact that the number of offshore drilling platforms in service world-wide in some form or another is around 6,000, and increasing.  That number will increase whether the U.S. bans more offshore drilling or not.  From 1992 to 2006, the Interior Department reported 39 blow-outs at platforms in the Gulf of Mexico, and although none were as serious as the latest, that’s more than two a year, yet that represents a safety record of 99.93%.  In short, there’s not a lot of margin for error.  What makes the issue more pressing is that drilling technology is able to drill deeper and deeper – and the pressures involved at ever greater depths put increasing stress on the equipment, to the point where, as is apparent with the BP disaster, stopping the flow of oil in the case of a failure becomes extraordinarily difficult and exceedingly expensive, as well as time-consuming.  Because crude oil is devastating to the environment, the follow-on damage to the ecosystems and the economy of the surrounding area will create far greater costs than capping the well.
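
A quick back-of-the-envelope check of those figures – purely illustrative, and assuming the 99.93% “safety record” is computed as one minus the ratio of blow-outs to wells drilled: 39 blow-outs over the fifteen years from 1992 to 2006 works out to about 2.6 a year, and 39 / (1 – 0.9993) = 39 / 0.0007 ≈ 56,000, implying something on the order of 56,000 wells drilled in the Gulf over that period.  At those volumes, even a 0.07% failure rate makes a couple of blow-outs a year a statistical near-certainty rather than a fluke.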

Pushing technology beyond safe limits is nothing new to human beings.  When steam engines were first introduced, the desire for power and speed led to scores, if not hundreds, of boiler explosions.  Occasionally, disasters led to changes, such as the phasing out of hydrogen dirigibles after the Hindenburg fire and crash, but that change was also made easier by the improvements in aircraft, which were far faster than dirigibles. The costs of other disasters are still with us – and we tend to overlook them.  The town of Centralia, Pennsylvania, has largely been abandoned because the coal seams in the mostly worked-out mines beneath the town caught fire and have been smoldering away for more than forty years, causing the ground above to collapse and continually releasing toxic gases.  In Pennsylvania alone, there are more than 30 such subterranean fires.  World-wide there are more than 3,000, some of which release more greenhouse gases and other toxic fumes than some coal-fired power plants.  Yet few of these fires are more than watched, because no technology exists that can extinguish them in any fashion close to cost-efficient – and in some cases they cannot be extinguished at all, because the fires burn so deep.

Pushing electronic technology to the limits, without regard for the implications, costs, and other downsides, has resulted in a world linked together in such a haphazard fashion that a massive solar flare – or a determined set of professional hackers – could conceivably bring down an entire nation’s communications and power distribution networks – and that doesn’t even take into account the vast increase in the types and amounts of exceedingly toxic wastes created on a world-wide scale, most of which are still not handled as they should be.  Another area where technology is being pressed to the limits is bio-tech, where scientists have reported creating the first synthetic cell.  While they engineered in considerable safeguards, once that technology is more widely available, will everyone who uses it do the same?

As illustrated by the BP disaster, when we, as a society, push technology to its limits on a large scale, for whatever reason, the implications of a technological or systems failure are getting to the point where we require absolute safety in the operation of those systems – and obtaining such assurance is never inexpensive… and sometimes not even possible.

But then again… if we tweaked existing technology just a bit more so that we could get even more out of it… get more oil, more bandwidth, make more profit…

When to Stop Writing… [With Some “Spoilers”]

The other day I ran across two comments on blogs about my books.  One said that he wished I’d “finish” more books about characters, that he just got into the characters and then the books ended.  The other said that I dragged out my series too long.  While the comments weren’t about quite the same thing, they did get me to thinking.  How much should I write about a given character?  How long should a series be?

The simple and easy answer is that I should write as long as the story and the series remain interesting.  The problem with that answer, however, is… interesting to whom?

Almost every protagonist I’ve created has resulted in a greater or lesser number of readers asking for more stories about that particular character, and every week I get requests or inquiries asking if I’ll write another story about a particular character.  That’s clearly because those readers identified with and/or greatly enjoyed that character… and that’s what every author likes to hear.  Unfortunately, just because a character is so memorable to readers doesn’t mean that there’s another good story there… or that another story about that character will be as memorable to all readers.

Take Lerris, from The Magic of Recluce.  By the end of the second book about him, he’s prematurely middle-aged as a result of his use of order and chaos to save Recluce from destruction by Hamor… and his actions have resulted in death and destruction all around him, not to mention that he’s effectively made the use of order/chaos magic impossible on a large or even moderate scale for generations to come.  What is left for him in the way of great or striking deeds?  Good and rewarding work as a skilled crafter, a happy family life? Absolutely… but there can’t be any more of the deeds, magic, and action of the first two books.  That’s why there won’t be any more books about Lerris.  If I wrote another book about Lorn… another popular character… for it to be a good book, it would have to be a tragedy, because the only force that could really thwart or even test him is Lorn himself.  After a book in which a favorite character died, if of old age after forty years of magic working – and all the flak I took from readers who loved her – I’m understandably reluctant to go the tragic route again.  So… for me, at least, I try to stop when the best story’s been told, and when creating an even greater peril or trial for the hero would be totally improbable for the world in which he or she lives.

For the same reason, because I’ve never written more than three books about a given main character, my “series” aren’t series in the sense of eight or ten books about the same characters, but groupings of novels in the same “world.”  Even so, I hear from readers who want more in that world, and I read about readers who think I’ve done enough [or too much] in that world.  Interestingly enough, very few of the complainers ever write me; they just complain to the rest of the world, and for me that’s just as well.  No matter what they say publicly, I don’t know a writer who wants to get letters or emails or tweets telling them to stop doing what they like to do… and I’m no different.

But those who complain about series being too long usually aren’t dealing with the characters or the stories. From what I’ve seen and read, they’re the readers who’ve “exhausted” the magic and the gimmicks.  They’re not there for characters and insights, but for the quicker “what’s new and nifty?”  And there’s nothing wrong with that, but it’s not necessarily a reason for an author to stop writing in that world; it’s a reason for readers who always want the “new” to move on.  There’s still “new” in the Recluce Saga; it’s just not new magic.  Sometimes, it’s stylistic.  I’ve written books in the first person, the third person past tense, and the third person present tense.  I’ve connected two books with an embedded book of poetry.  I’ve told the novels from both the side of order and the side of chaos, and from male and female points of view.  Despite comments to the contrary, I’ve written Recluce books with teenaged characters, and those in their twenties, thirties, forties, and older. That’s a fair amount of difference, but only if the reader is reading for what happens to the characters… and virtually all the critics and reviewers have noted that each book expands the world of Recluce.  I won’t write another Recluce book unless I can do that, and that’s why there’s often a gap of several years between books.  The same is true of books set in my other worlds.

So… I guess, for me, the answer is that I stop writing about a character or a world when I can’t show something new and different, although it may be quietly new or character-new.

Technology, Society, and Civilization

In today’s modern industrial states, most people tend to accept the proposition that the degree of “civilization” is fairly directly related to the level of technology employed by a society.  Whether as a result of that proposition or simply as a belief, each new technological gadget or invention is hailed as an advance. But… how valid is that correlation?

In my very first blog [no longer available in the archives, for reasons we won’t discuss], I made a number of observations about the Antikythera Device, essentially a clockwork-like mechanical computer dating to 100 B.C. that tracked and predicted the movements of the five known planets and the moon, lunar and solar eclipses, as well as the future dates of the Greek Olympics. Nothing this sophisticated was ever developed by the Roman Empire, or anywhere else in the world until more than 1,500 years later.  Other extremely sophisticated devices were developed in Ptolemaic Egypt, including remote-controlled steam engines that opened temple doors and magnetically levitated statues in those temples.  Yet both Greece and Egypt fell to the more “practical” Roman Empire, whose most “advanced” technologies were likely the invention of concrete, particularly concrete that hardened under water, and military organization.

The Chinese had ceramics, the iron blast furnace, gunpowder, and rockets a millennium before Europe, yet they failed to combine their metal-working skill with gunpowder to develop and continue developing firearms and cannon.  They had the largest and most advanced naval technology in the world at one point… and burned their fleet.  Effectively, they turned their backs on developing and implementing higher technology, but for centuries, without doubt, they were the most “civilized” society on earth.

Hindsight is always so much more accurate than foresight, but often it can reveal and illuminate the possible paths to the future, particularly the ones best avoided. The highest level of technology used in Ptolemaic Egypt was employed in support of religion, most likely to reinforce the existing social structure, and was never developed in ways that could be used by any sizable fraction of the society for societally productive goals.  The highest levels of Greek technology and thought were occasionally used in warfare, but were generally reserved for the use of a comparatively small elite.  For example, records suggest that only a handful of Antikythera devices were ever created.  The widest-scale use of gunpowder by the early Chinese was for fireworks – not weapons or blasting powder.

Today, particularly in western industrial cultures, more and more technology is concentrated on entertainment, often marketed as communications, but when one considers the time spent and the number of applications on such devices, the majority are effectively entertainment-related.  In real terms, the amount spent on basic research and immediate follow-up in the United States has declined gradually, but significantly, over the past 30 years.  As an example, NASA’s budget is less than half of what it was in 1965, and in 2010, its expenditures will constitute the smallest fraction of the U.S. budget in more than 50 years.  For the past few years, the annual budget of NASA has been running around $20 billion.  By comparison, sales of Apple’s iPhone over nine months exceeded the annual NASA budget, and Apple is just one producer of such devices.  U.S. video game software sales alone exceed $10 billion annually.

By comparison, the early Roman Empire concentrated on using less “advanced” technology for economic and military purposes.  Interestingly enough, when technology began to be employed primarily for such purposes as building the Colosseum, flooding it with water, and staging naval battles with gladiators, subsidized by the government, Roman power, culture, and civilization began to decline.

More high-tech entertainment, anyone?

Sacred? To Whom?

I’ll admit right off the top that I have a problem with the concept that “life is sacred” – not that I don’t feel that my life, and those of my wife, children, and grandchildren, are sacred to me.  But various religions justify various positions on social issues on the grounds that human life is “sacred.”  I have to ask why human life, as opposed to other kinds of life, is particularly special – except to us.

Once upon a time, scientists and others claimed that Homo sapiens were qualitatively different and superior to other forms of life.  No other form of life made tools, it was said.  No other form of life could plan logically, or think rationally.  No other form of life could communicate.  And, based on these assertions, most people agreed that humans were special and their life was “sacred.”

The only problem is that, the more we learn about life on our planet, the more every one of these assertions has proved to be wrong.  Certain primates use tools; even New Caledonian crows do.  A number of species do think and plan ahead, if not in the depth and variety that human beings do.  And research has shown, and is continuing to show, that other species do communicate, from primates to gray parrots.  Research also shows that some species have a “theory of mind,” again a capability once thought to be restricted to human beings. But even if one considers just Homo sapiens, the most recent genetic research shows that a small but significant fraction of our DNA actually comes from Neandertal ancestors, and that genetic research also indicates that Neandertals had the capability for abstract thought and speech.  That same research shows that, on average, both Neandertals and earlier Homo sapiens had slightly larger brains than do people today.  Does that make us less “sacred”?

One of the basic economic principles is that goods that are scarce are more valuable, and we as human beings follow that principle, one might say, religiously – except in the case of religion.  Human beings are the most common large species on the planet earth, six billion plus and growing.  Tigers and pandas number in the thousands, if that.  By the very principles we follow every day, shouldn’t a tiger or a panda be more valuable than a human?  Yet most people put their convenience above the survival of an endangered species, even while they value scarce goods, such as gems and gold, more than common goods.

Is there somehow a dividing line between species – between those that might be considered “sacred” and those that are not?  Perhaps… but where might one draw that line?  A human infant possesses none of the characteristics of a mature adult.  Does that make the infant less sacred?  A two-year-old chimpanzee has more cognitive ability than does a human child of the same age, and far more than a human infant.  Does that make the chimp more sacred?  Even if we limit the assessment of species to fully functioning adults, is an impaired adult less sacred than one who is not?  And why is a primate who can think, feel, and plan less sacred than a human being?  Just because we have power… and say so?

Then, there’s another small problem.  Nothing on the earth that is living can survive without eating, in some form or another, something else that is or was living.  Human beings do have a singular distinction there – we’re the species that has managed to get eaten by other species less than any other.  Yes… that’s our primary distinction… but is that adequate grounds for claiming that our lives, compared to the lives of other thinking and feeling species, are particularly special and “sacred”?

Or is a theological dictum that human life is sacred a convenient way of avoiding the questions raised above, and elsewhere?

Making the Wrong Assumption

There are many reasons why people, projects, initiatives, military campaigns, political campaigns, legislation, friendships, and marriages – among a host of others – fail, but I’m convinced that the largest and least recognized reason for such failures is that those involved make incorrect assumptions.

One incorrect assumption that has bedeviled U.S. foreign policy for generations is that other societies share our fundamental values about liberty and democracy.  Most don’t.  They may want the same degree of power and material success, but they don’t endorse the values that make our kind of success possible.  Among other things, democracy is based on sharing power and on compromise – a fact, unfortunately, that all too many U.S. ideologues fail to recognize, and a failure that may yet destroy the U.S. political system as envisioned by the Founding Fathers and as developed by their successors… until the last generation.  Theocratically based societies neither accept nor recognize compromise or power-sharing – except as a last resort, to be abandoned as soon as possible.  A related assumption is that peoples can act and vote in terms of the greater good.  While this is dubious even in the United States, it’s an insane assumption in a land where allegiance to the family or clan is paramount and where children are taught to distrust anyone outside the clan.

On a smaller scale, year after year, educational “reformers” in the United States assume, if tacitly and by their actions, that the decline in student achievement can be reversed solely by testing and by improving the quality of teachers.  This assumption is fatally flawed because student learning requires two key factors – teachers who can and will do the work of teaching, and students who can and will do the work of learning.  Placing all the emphasis on teachers and testing assumes that a single teacher in a classroom can and must overcome all the pressures of society, the media, the peer pressure to do anything but learn, the notion that learning should be fun, and all the other societal pressures that are antithetical to the work required to learn.  There are a comparative handful of teachers who can work such miracles, but basing educational policy and reform on those who are truly exceptional is both poor policy and doomed to failure.  Those who endorse more testing as a way to ensure that teachers teach the “right stuff” assume that the testing itself will support the standards – which it won’t, if the students aren’t motivated – not to mention the fact that more testing leaves less time for teaching and learning.  So, by de facto assumption, not only does the burden of teaching fall upon educators, but so do the burdens of motivating the unmotivated and disciplining the undisciplined, at a time when society has effectively removed the traditional forms of discipline without providing any effective replacements.  Yet the complaints mount, and American education keeps failing, even as the “reformers” keep assuming that teachers and testing alone can stem the tide.

For years, economists used what can loosely be termed the “rational person” model to analyze how various markets operate.  That assumption has proved horribly wrong, as recent studies – and economic developments – have shown, because in all too many key areas individuals do not behave rationally.  Most people refuse to cut their losses, even at the risk of losing everything, and most persist in uneconomic behaviors that run against their own interests, even when they perceive the same behaviors in others as irrational and unsound.  Those who distrust the market system assume that regulation, if only applied correctly, can solve the problems, and those who believe that markets are self-correcting assume that deregulation will solve everything.  History and experience suggest both assumptions are wrong.

In more than a few military conflicts over recent centuries, military leaders have assumed that superior forces and weapons would always prevail.  And… if the command in question does indeed have such superiority and is willing to employ it efficiently to destroy everything that might possibly stand in its way, then “superiority” usually wins.  The assumption fails, however, whenever a combatant is unable or unwilling to carry out the requisite slaughter of the so-called civilian population, or when military objectives cannot be attained quickly, because in virtually every war of any length a larger and larger fraction of the civilian population becomes involved on one side or the other, and “superiority” shifts.  In this regard, people usually think of Vietnam or Afghanistan, but the same sort of shift occurred in World War II.  At the outbreak of war in 1939, the British armed forces had about a million men under arms, the U.S. 175,000, and the Russians 1.5 million.  Together, the Germans and Japanese had over five million trained troops and far more advanced tanks, aircraft, and ships.  By the end of the war, those ratios had changed markedly.

While failure can be ascribed to many causes, I find it both disturbing and amazing that the basic assumptions behind bad decisions are so seldom brought forward as causal factors… and I have to ask, “Why not?”  Is it because, even after abject failure – or a costly success that didn’t have to be so costly – no one wants to admit that their assumptions were at fault?

Ends or Means

By the time they reach their twenties, at least a few people have been confronted, in some form or another, with the question of whether the ends justify the means.  For students, that’s usually in the form of cheating – does cheating to get a high grade in order to get into a better college [hopefully] justify the lack of ethics?  In business, it’s often more along the lines of whether focusing on short-term success, which may result in a promotion or bonus [or merely keeping your job in some corporations], is justified if it creates long-term problems or injuries to others.

On the other hand, I’ve seldom seen the question raised in a slightly different context.  That is, are there situations where the emphasis should be on the means? For example, on vacation, shouldn’t the emphasis be on the vacation, not on getting to the end of it?  Likewise, in listening to your favorite music, shouldn’t the emphasis be on the listening and not getting to the end?

I suppose there must be some few situations where the end is so vital that the means don’t matter, but the older I get, the fewer such examples I’ve been able to find, because I’ve discovered that the means so affect the ends that one can seldom accomplish the ends without a disproportionate cost in collateral damage.

This leads to those situations where one needs to concentrate on perfection in executing the means, because, if you don’t, you won’t reach the end at all.  Such instances include piloting, downhill ski racing, Grand Prix driving [or driving in Los Angeles or Washington, D.C., rush-hour traffic], and all manner of professional tasks, such as brain or heart surgery, law enforcement, or firefighting.

The problem that many people, particularly students, have is a failure to understand that, in the vast majority of cases, learning the process is as critical [if not more so] as the result.  Education, for example, despite all the hype about tests and evaluations, is not about tests, grades, and credentials [degrees/certification].  Even if you get the degree or certification or other credential, unless you’ve learned enough in the process, you’re going to fail sooner or later – or you’ll have to learn all over again what you should have learned the first time.  Unfortunately, because many entry-level jobs don’t require the full skill set that educators were attempting to instill, that failure may not come for years… and when it does, the results will be far more catastrophic.  And, of course, some people will escape those results, because there are always those who do… and, unfortunately, those “evaders” are almost invariably the ones held up as examples by people who don’t want to do the work of learning the processes behind the skills.

Studies done on college graduates two generations ago “discovered” that such graduates earned far more income over their lifetimes than did those without a college degree.  Unfortunately, the message became that the degree was what mattered, not the skills it represented, and ever since, people have focused on the credential rather than the skills – a fact emphasized by rampant grade and degree inflation and documented by the noted scholar Jacques Barzun in From Dawn to Decadence: 500 Years of Western Cultural Life, 1500 to the Present, where he observed that one of the reasons for the present and continuing decline of Western civilization is that our culture now exalts credentials over skills and real accomplishments.

One of the most notable examples of this is the emphasis on monetary gain, as exemplified by developments in the stock and securities markets over the past two years.  The “credential” of the highest profit at any cost has so distorted the process of underwriting housing and business investment that the profit levels reaped by various sectors of the economy bear no relationship to their contribution to either the economy or the culture.  People whose decisions in pursuit of ever higher and unrealistic profit levels destroyed millions of jobs are rewarded with the “credential” of high incomes, while those who police our streets, fight our fires, protect our nation, and educate our children face salary freezes and layoffs – all because the ends justify any means.

Hypocrisy… Thy Name Is “Higher” Education

The semester is over, or nearly so, in colleges and universities across the United States, and in the majority of those universities another set of rituals will be acted out.  No… I’m not talking about graduation.  I’m talking about the return of “student evaluations” to professors and instructors.  The entire idea of student evaluations is a largely American phenomenon that caught hold sometime in the late 1970s, and it has become a monster that not only threatens the very concept of improving education but also serves as a poster child for the hypocrisy of most college and university administrations.

Now… before we go further, let me emphasize that I am not opposing the evaluation of faculty in higher education.  Far from it.  Such evaluation is necessary and a vital part of assuring the quality of faculty and teaching.  What I oppose is the use of student evaluations in any part of that process.

Take my wife’s music department.  In addition to holding advanced degrees, the vast majority of its faculty have professional experience outside academia.  My wife has sung professionally on three continents, played lead roles in regional opera, and directed operas for over twenty years.  The other voice professor left a banking career to become a successful tenor in national and regional opera before returning to school and obtaining a doctorate in voice.  The orchestra conductor is a violinist who has conducted in both the United States and China.  The band director spends his summers working with the Newport Jazz Festival.  The piano professor won the noted Tchaikovsky Award and continues to concertize worldwide.  The percussion professor performs professionally on the side and has several times been part of a group nominated for a Grammy.  This sort of expertise in a music department is not unusual but typical of many universities, and I could cite similar kinds of expertise in other university departments as well.

Yet… on student evaluations, students rate their professors on how effective the professors are at teaching, whether the curricula and content are relevant, whether the amount of work required in the course is excessive, and so on.  My question/point is simple:  exactly how can students of eighteen to twenty-four have any real idea about any of the above?  They have no relevant experience or knowledge – and obtaining both is presumably why they’re in college.

Studies have shown that the strongest correlation runs between ease and approval: the professors with the easiest courses and the highest percentage of As get the best evaluations.  And, since evaluations have become near-universal, college-level grades have undergone massive inflation.  In short, student evaluations are merely student Happiness Indices – HI!, for short.

So why have the vast majority of colleges and universities come to rely on HI! in evaluating professors for tenure, promotion, and retention?  It has little to do with teaching effectiveness or the quality of education provided by a given professor and everything to do with popularity.  In the elite schools, student happiness is necessary to keep retention rates up, because retention is one of the key factors used by U.S. News and World Report and other rating groups, and the higher the rating, the more attractive the college or university is to the most talented students – the students most likely to be successful and eventually boost alumni contributions and the school’s reputation.  For state universities, it’s a more direct numbers game.  Drop-outs and transfers represent lost funds and inquiries from the state legislatures that provide some of the funding.  And departments that are too rigorous in their attempts to maintain or [heaven forbid] upgrade the quality of education often either lose students or fail to grow as fast as other departments, which results in fewer resources.  Just as Amazon’s reader reviews greatly boosted Amazon’s book sales, HI! boost the economics of colleges and universities.  Professors who try to uphold or raise standards face an uphill and usually unsuccessful battle – as evidenced by the growing percentage of college graduates who lack basic skills in writing and logical understanding.

Yet, all the while, the administrations talk about the necessity of HI! [sanctimoniously disguised as thoughtful student evaluations] in improving education, when it’s really about economics and their bottom line.  And, by the way, in virtually every university and college across the country, the percentage growth in administration size over the past twenty years has dwarfed the growth in full-time, tenure-track, and tenured faculty.  But then, why would any administration want to point out that perceived student happiness trumps academic excellence every day and in every way, or that all those resources are going more and more to administrators, while faculties, especially at state universities, have fewer and fewer professors and more and more adjuncts and teaching assistants?

Newer… Not Always Better

Somehow people, especially students, don’t get it.  As the title above suggests, just because something is newer, it isn’t necessarily better – even in computers.  I have yet to find a commercial graphing program in existence today that comes anywhere close to the Boeing Graph program of some twenty-five years ago.  And, as techno-historians know, the Beta videotape system was far superior to the VHS system.

What’s interesting now, though, is that for some applications – such as reviewing and critiquing the work of student voice teachers – VHS tapes are far superior to DVDs.  Why?  Because a tape can be paused at any given second, or rewound to a precise point.  Commercial DVDs and equipment can’t.  When a voice professor is studying vocal dynamics, that precision matters.  Having to play through sections, even at high speed, takes time and often overshoots or undershoots the point in question.  Yet my wife’s pedagogy students complain that she uses “antiquated equipment” and makes them use old-fashioned tapes instead of new hip digital disks.  What they don’t seem to understand is that “new” isn’t better if it doesn’t do what you want it to, especially when “old” technology does.

This isn’t confined to the sometimes arcane area of vocal pedagogy; it applies across our techno-society.  Typewriters do a far better job of filling in forms – at least those not available on one’s own computer – than do computers.  Word Seven is a much faster word processor for plain text than the current version of Word [which I do have for the other applications], and the search capabilities of fifteen-year-old WordPerfect 6.0 still exceed those of any current version of Word.  As I noted in an earlier post, a keyed ignition is far more effective at turning off a runaway engine than a new high-tech keyless system, not to mention safer.  My “old” color ink-jet printer delivers a far cleaner and clearer image than the new and improved laser printer, even if the laser is faster.  In terms of overall medical effectiveness, there’s no solid proof that the newer NSAIDs offer any more benefit than good old aspirin; although aspirin has a slightly higher propensity to cause gastrointestinal bleeding, it also has many other benefits, such as reducing the risk of heart attacks and colon cancer – and it’s one of the oldest drugs around.  Certainly, the now-retired Concorde was far superior to any commercial aircraft now in service in getting passengers across the ocean quickly, and more than a few pilots still claim that the retired F-14 exceeds anything now flying in total air superiority.  And photographic film still provides a better image than comparable digital photography.

Going back to recording equipment, if you happen to have a phonograph with a working needle, you can still play vinyl and other old records nearly a century old.  You certainly can’t do that with tapes even half that old, and a single light scratch effectively destroys the usefulness of a CD.  That’s fine for entertainment products that aren’t meant to outlast the current fad, but is it acceptable for recording data or information with a longer lifespan?

So why aren’t newer products always better?  The plain fact is that superiority is often far down the list of product qualities, usually behind cost of production and operation, novelty appeal, style, ease of operation, and profitability.  Another factor is that, especially in computer and communications products, manufacturers try to cram in as many applications as possible so as to appeal to the widest possible range of consumers.  That multiplicity of applications generally degrades the capability of every function, but the degradation usually isn’t perceptible, or relevant, to most users.

This often results in cheaper products, but the downside is that such products frequently don’t suit the needs of professionals in specialized fields… and because it’s getting harder and harder to develop or produce products for users with particular needs – such as my professorial wife – those users have to make do with improvised or older equipment… and risk being termed dinosaurs and out of date.

In the end… newer isn’t always better; it’s always only newer.

Complete Piracy at Last

It’s now official.  According to my editor and Macmillan, the parent company of Tor Books, every single one of my titles has now appeared somewhere, in some form or another, as a pirated edition.  I’d almost like to claim this as a singular distinction.  I can’t.  Macmillan also believes that every single book it has published in recent years – something like the last three decades – has appeared in pirated editions of some sort.

I can’t say I’m surprised.  Every time I attempt to check on how my books are doing, I discover website after website offering free downloads of everything I’ve ever written, including versions of titles that were never issued in electronic format and even editions that haven’t been in print for more than twenty years.  I could spend every minute of every day trying to chase them down… without much success.  So I grit my teeth and bear it.

Ah… the wonders of the electronic age.

Coincidentally, and unsurprisingly, the sales of mass-market paperback fiction have also begun to decline.  Part of this is likely due to the collapse of a section of the wholesale distribution system, but that shrinkage doesn’t account for most of it, because the decline is also occurring with titles and authors that were never distributed widely on a wholesale basis and whose books were largely sold through bookstores.  This hasn’t been so obvious in the F&SF field, because, while the average paperback print run has decreased, the number of paperback titles has increased slightly; but according to knowledgeable editors, the decrease is happening pretty much across the board, and some very big-name authors – far bigger names than mine – have seen significant decreases in paperback sales… and that’s without a corresponding increase in e-book sales.  Obviously, this isn’t true for every single author, and it’s impossible to measure for newly published authors, since an author with no previous book offers no baseline against which to measure a falloff.

Despite all the talk, it appears that the popular mantra that information and entertainment need to be free remains in force for a small but significant fraction of former book buyers – even if such “free editions”  reduce authors’ incomes and result in publishers eliminating yet more mid-list authors because declining sales have made them unprofitable, or even money-losing.

The other day I came across an outraged comment about the price of an e-book version of my own Imager’s Challenge.  The would-be reader was incensed that the electronic version was “only” a few dollars less than the hardcover edition, especially since the paperback edition won’t be out for four months or so.  Somehow, it doesn’t seem to penetrate that while paper may be the single largest component of “physical” publishing costs, it still amounts to only something like 10-15% of the publisher’s cost of producing a book, i.e., a few dollars.  Even without paper, the other costs remain, and they’re substantial – and publishing remains, as I have written time and time again, a very low margin business.  That’s why publishers really don’t want to cannibalize their hardcover revenues by undercutting hardcover prices before the paperback version is on the shelves, especially given the decline in paperback sales.
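
To put rough numbers on it – and these are purely hypothetical figures for illustration, not Macmillan’s actual costs – consider a hardcover that costs a publisher, say, $20 a copy to edit, produce, market, and distribute.  If paper, printing, and binding are 10-15% of that, they come to $2 or $3.  Remove the paper entirely and $17 or more in costs remain, which is why an e-book priced only a few dollars below the hardcover isn’t the gouging it might appear to be.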

There are many problems with piracy, including the fact that authors essentially get screwed, but the biggest one for readers seems to be overlooked.  The more piracy there is, and the more widespread it becomes, the less choice readers will have in finding well-written, well-edited books, especially books that are not popular best-sellers.  The multi-million-selling popular books – the “popcorn books,” as my wife calls them – will survive piracy.  The well-written books for smaller audiences won’t.  So readers could very well be left with dwindling choices… scrambling through thousands of self-published e-volumes, most of which are and will be poorly written and unedited, in search of that rare “gem” – a good and different book that doesn’t appeal to everyone.

But… after all, information and entertainment want to be free.

The Instant Disaster Society?

Last Thursday, the stock market took its biggest intraday point drop in history – somewhere slightly over a thousand points, as measured by the Dow Jones Industrial Average.  While the market recovered sixty to seventy percent of that drop before Thursday’s close, the financial damage across the world was not inconsiderable.  Did this happen because Greece is still close to a financial meltdown, or because economic indicators were weak?  No… the precipitating factor may have been a typographical error – a trader reportedly entered a sell order for $16 BILLION of exchange futures instead of a mere $16 million – and there are a number of other possibilities, but the bottom line [literally] was that, whatever the cause, all the automated and computerized trading engines immediately reacted – and the market plummeted.  Later, NASDAQ canceled a number of trades, but that was long after the damage had been done.
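
To illustrate just how little it would take to blunt that kind of error – and this is purely a hypothetical sketch, not how any actual exchange or trading firm validates orders – even a few lines of code could hold an entry three orders of magnitude out of line before it ever reached the matching engines:

    # Hypothetical pre-trade sanity check [illustrative only; real
    # exchange and brokerage risk controls are far more elaborate].
    def screen_order(order_value, typical_daily_value):
        """Hold any order wildly out of line with a desk's history."""
        # An order a hundred times the desk's typical daily value is far
        # more likely a typo -- $16 billion keyed in for $16 million --
        # than a genuine instruction, so hold it for human confirmation.
        if order_value > 100 * typical_daily_value:
            return "HOLD for confirmation"
        return "ACCEPT"

    print(screen_order(16_000_000_000, 16_000_000))  # HOLD for confirmation
    print(screen_order(16_000_000, 16_000_000))      # ACCEPT

The particular threshold is beside the point; the point is that no computer doubts its inputs unless someone has programmed it to.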

From the Terminator movies onward, there have been horror stories about computers unleashing doomsday, but the vast majority of these have concerned nuclear and military scenarios – not world economic collapse.  While I don’t fall into the “watch out for those evil computers” camp, I have always been, and remain, greatly concerned about the growth and use of so-called “expert systems” in all areas of society, largely because computers are the perfect servants – they do exactly what their programming tells them to do, even if the result is disastrous.

For example, Toyota is now having all sorts of problems with runaway acceleration.  When this first occurred, my question was simple enough:  why didn’t the drivers either shift into neutral or turn off the ignition?  Apparently, at least some of them may not have been able to – not quickly – because they had keyless ignition systems.  Yet the automakers are talking about cars that will be not only keyless but totally electronic – that is, even the shifting will be electronic rather than mechanical.  And if the electronics malfunction, exactly how will a driver be able to quickly “kill” the system?  Let’s think that one over for a bit.

President Obama and the health care reformers want all medical records to be electronically available, both for cost-saving purposes and for ease of access.  The problem with that kind of ease of access is that it also offers greater ease of hacking and tampering, and, I’m sorry, no system that offers the kind of ease the “reformers” are proposing can be made hacker-proof.  The access and security requirements are mutually antithetical.  Years ago, Sandra Bullock starred in a movie called “The Net,” and while many of its computer references are outdated and almost laughable, one aspect of the movie was not and remains all too plausibly real.  At least two characters die because their medical records are hacked and changed.  In addition, national databases are manipulated and identities switched.  Now… the computer experts will say that these sorts of things can be guarded against… and they can be, but will they be?  Security costs money, and good security costs a lot of money, and people use computers to cut costs, not to increase them.

As far as economics go, now that an “accident” has shown just how vulnerable securities markets are to inadvertent manipulation, how long before some terrorist or other extremist group figures out how to duplicate the effect?  And then all the programmed trading computers will blindly execute their trades… and we’ll get an even bigger disaster.

Why?

Because we’ve become an instant-reaction society, and electronic systems magnify the effect of both system glitches and human error.  Those programmed securities-trading computers were designed to take advantage of market fluctuations on a micro- if not nanosecond basis.  For better or worse, they make decisions faster than any human trader possibly could – and they do so based on data that may or may not be accurate.

We’re seeing the same thing across society.  Today’s young people are being trained to react rather than to think.  Instead of letters or even email, they use Twitter.  Instead of bridge or old-fashioned board games like Risk or Diplomacy, they prefer instant-reaction videogames with a premium on speed.  More and more of the younger generation cannot form or express complex concepts, even as technology is taking us into an ever more complex world.  Business places greater and greater emphasis on short-term gains and profits.  People want instant satisfaction.

The societal response to that increase in speed is to use computers and electronic systems to an ever greater extent – but, as last Thursday demonstrated, what happens when one’s faithful and obedient electronic servants do exactly what their inputs dictate they’re supposed to do – and the result is disaster?

Do we really want – and can our society survive – a world where a few high-speed mistakes can destroy more than a trillion dollars’ worth of assets in seconds… or do even worse damage than that?  Not to mention one where thinking is passé… or left to the old fogies of an earlier generation… and where all that matters is instant [and shallow] communication and short-term results that may well lead to long-term disaster.