Archive for the ‘General’ Category

More on Poetry

As some readers may know, my wife is a professional singer who is also a professor of voice and opera. Among her many duties is that of teaching aspiring classical singers diction and literature. One notable type of song literature required of these students is “art song,” a significant percentage of which consists of poetry set to music by composers. Various forms of art song, though called by different names, have been composed in many languages, although classical singers usually begin by learning art songs in English, French, German, and Italian.

Earlier this semester, my wife was beginning the section on American and English art song, and out of a class of fifteen students, she found that what they had read in high school appeared to be limited to a bit of Chaucer and Shakespeare, along with Emily Dickinson, and perhaps T.S. Eliot.

None of the students had learned any poetry by such greats as John Milton, William Blake, Shelley, Byron, Keats, Yeats, Robert and Elizabeth Barrett Browning, Christina Rossetti, Robert Louis Stevenson, Thomas Hardy, W.H. Auden, A.E. Housman, Edna St. Vincent Millay, and Amy Lowell. In fact, none of them even appeared to have read Robert Frost. Moreover, none of them could actually read verse aloud, except in a halting monotone. This lack of background in poetry puts them at a severe disadvantage, because these are the poets whose words have been set to music in art song and even in choral works.

These were not disadvantaged students. They came out of high school with good grades and good standardized test scores. Yet they know very little about the historical written arts of their own native language. In turn, this lack shows up in their narrow range of word usage and metaphor and in their general weakness in both oral and written expression. Whether it’s related or not, there does appear to be a correlation between the loss of solid English instruction and the growth of such phrases as “you know”; “I mean”; “like…dude”; and scores of other meaningless fillers used to cover a lack of even semi-precise expressiveness.

Bring back the great old poets… all of them.

Bookstores, Literacy… and Economics

Although I was surrounded by books growing up, I can’t recall ever going to a bookstore to obtain a book until I was in college. I was a frequent visitor to the local library, and there were the paperback SF novels my mother picked up at the local drugstore, but bookstores weren’t really a part of my orbit, and their absence didn’t seem to affect my voracious reading habit. As an author, however, I’ve become very aware of bookstores, and over the past twenty years, I’ve entered over a thousand different bookstores in forty-two of the fifty states – more than 120 of them in the space of three weeks on one tour. And because I was once an economist, I kept track of the numbers and various other economics-related aspects of those bookstores.

The conclusion? Well… there are many, but the one that concerns me most is what the changes in bookselling – and in where books can be obtained – mean for the future functional literacy of the United States.

When I first became a published novelist thirty years ago, for example, the vast majority of malls had small bookstores, usually a Waldenbooks or a B. Dalton – often two of them, one at each end of the mall – or perhaps a Brentano’s or another chain. And I was very much aware of them, because I spent more time in malls than I really wanted to, which is something that occurs when one has pre-teen and teenaged daughters. According to the statistics, at that time there were over 1,500 Waldenbooks stores in malls nationwide, and hundreds of B. Daltons, not to mention all the other smaller bookstores. Today, the number of Waldenbooks stores totals fewer than 200, and most of those closures came about because Borders Books, the present parent company of Waldenbooks, did not wish to continue them once it acquired the chain, preferring to replace many small stores with larger Borders stores. Even so, Borders has somewhat fewer than five hundred superstores. The same pattern holds true for Barnes and Noble, the parent of the now essentially defunct B. Dalton stores. The actual number of bookstores operated by these two giant chains is roughly half what they operated twenty-five years ago. At the same time, the growth of the chain superstores has squeezed out hundreds of smaller independent bookstores.

Prior to 1990, there were somewhere in the neighborhood of 400 book wholesalers in the United States, and there were paperback book racks in all manner of small retail establishments. Today there are only a handful of wholesalers, and the neighborhood book rack is truly a thing of the past.

Add to this pattern the location of the book superstores. Virtually all of these stores are located in the most affluent sections of the areas they serve. In virtually every city I’ve visited in the last fifteen years, there are huge sections of the city, sometimes as much as 60 percent of the area, if not more, where there is no bookstore within miles, and often no convenient public transport. There are fewer and fewer small local bookstores, and most large bookstores are located in or near upscale super malls. Very few, if any, malls serving less affluent communities have bookstores. From a short-term economic standpoint, this makes sense for the mega-store chains. From a cultural standpoint, and from a long-term customer development standpoint, it’s a disaster, because it limits easy access to one of the principal sources of books largely to the most affluent segments of society.

What about the book sections in Wal-Marts? The racks and carrels in the average super Wal-Mart number roughly a third of those in the smallest of the Waldenbooks stores I used to visit, and the range of books is severely limited, effectively to the best-sellers of each genre.

Then, because of recent economic pressures, the local libraries are seeing their budgets cut and cut, as are school libraries – if the school even has a library.

Research done for publishing firms has shown that so-called impulse book purchasing – the kind once made possible by neighborhood book racks and ubiquitous small mall bookstores – accounted for a significant percentage of new readers… and the comic book racks that stood next to the book racks provided a transition from the graphic format to books.

Some have claimed that books will be replaced by the screen, the iPhone, and other screen “apps,” and that well may be… for those who can already read… but the statistics show that while fewer Americans are totally illiterate, an ever-increasing percentage is functionally illiterate.

Is that functional illiteracy any wonder… when it really does take a book to start learning to read and when books are becoming harder and harder to come by for those who need them the most?

Voting Influence

Decades ago, the late science fiction writer Mack Reynolds wrote a novel depicting a future United States in which citizens received one “basic” vote, and then could “earn” additional votes for various accomplishments, such as earning advanced degrees, completing a period of military and/or public service, etc.  At the time of the book, Reynolds received a great deal of flak for that concept, and I suspect, were anyone to advance such an idea today, the outcry would likely be even greater.

But why? In point of fact, those with great sums of money already exert a disproportionate amount of influence over the electoral process, especially now that the U.S. Supreme Court has granted corporations and wealthy individuals access to the media limited only by the amount of their resources, in effect granting such entities the impact of millions of votes. The rationale for the Court’s decision is that restricting the use of money to advertise one’s political views and goals amounts to a restriction on First Amendment free-speech rights. The practical problem with this reasoning is that, in a culture dominated by pervasive mass media, it multiplies the effect of exercising free-speech rights many times over for those who have large amounts of wealth. Since, given the costs of effectively using mass media, only the top one or two tenths of one percent of the population can exercise such media-enhanced rights, the result of the decision is to give disproportionate influence to a tiny fraction of the population. Moreover, as a result of the decision, in most cases donors to the groups and corporations availing themselves of this “right” do not even have to disclose their donations or spending.

The Court’s decision essentially grants greater weight in determining who governs us strictly on the basis of income and wealth.  Are not other qualities and accomplishments also of equal or greater value to civilization?  And if so, why should they not be granted greater weight as well? That was really the question Reynolds was addressing in postulating such a change in American society, and it’s a good question.

Before you dismiss the idea out of hand, consider that the way our current system operates grants greater governmental influence to a small group of people whose principal talent is making money. It does not grant such influence to those who teach, who create, or who perform unheralded and often dangerous military and public service. And, as the revelations about Iraq have shown, at times such money-making operations have in fact been based on taking advantage of American soldiers deployed abroad, so that those with great sums of money not only gained electoral influence, but did so at the expense of those who served their country… many of whom died doing so.

Then… tell me again why we don’t need an electoral or regulatory counterbalance to unbridled use of wealth in trying to influence elections.

Boring?

The other day, someone commented on the blog that, unfortunately, Imager’s Intrigue and Haze were boring and major disappointments. I replied directly, something I usually avoid doing, at least immediately, because the comment punched several of my buttons. As many of my readers well know, my first fantasy, The Magic of Recluce, features Lerris, a young man who, at the beginning of the novel, finds virtually everything in his life boring, and who, by the end, finds everything he railed against far less so… yet the world in which he lives has changed very little.

I have no problem with readers saying that they personally found a book of mine – or anyone else’s – boring… or whatever. I have great problems when they claim the book is boring, without qualification. A book, in itself, is neither exciting nor boring. It simply is. When a reader picks up a book and reads it, there is an interaction between what the reader reads and what the writer wrote. What a reader finds interesting depends at least as much on the reader as on the writer. There are some books that have been widely and greatly acclaimed that I do not find interesting or enjoyable, and the same is true of every reader. In general, however, books that are well-written, well-thought-out, and well-plotted tend to last and to draw in a greater percentage of readers than those that are not. The fact that books with overwhelmingly positive reader and critical reviews, books that also sell in large numbers, still receive comments like “dull,” “boring,” and “slow” suggests that no book can please everyone. That’s not a problem.

The problem, as I see it, is that there are more and more such unthinking comments, and those comments reflect an underlying attitude that the writer must write to please that particular reader, and that the author has somehow failed if he or she has not done so. This even goes beyond the content of the books. A number of my books – and those of many other authors – are now receiving “one-star” or negative reviews, not because of faults in the books, but because they were not available in cheaper e-book versions at the time the hardcover was published. Exactly how many people in any job would think it fair to receive an unsatisfactory performance review because they didn’t offer their services at a lower rate? Yet that’s exactly what the “one-star” reviewers are essentially saying – that they have the right to demand when, in what format, and at what price a book should be released.

It took poor Lerris exile and years to understand that Wandernaught was not boring, but that he was bored because he didn’t want to understand.  But that sort of insight seems lacking in those whose motto appears to be: Extremism in the pursuit of entertainment (preferably cheap) is no vice, and moderation in the criticism of those who provide it is no virtue.

The Failure of Imagination

On my way to and back from the World Fantasy Convention, I managed to squeeze in reading several books – and a bit of writing. One of the books I read, some three-hundred-plus pages long, takes place in one evening. While I may be a bit off in my page count, after reading the book I concluded that of the more than three hundred pages, perhaps fifty – the prologue and the interspersed recollections and flashbacks – provided the background for the incredibly detailed action, consisting of sorcery, battles, fights, and more fights, resulting in… what? An ending that promised yet another book. To me, at least, it read more like a novelized computer game [and no, it’s not, at least not yet]. If I hadn’t been on an airplane, and if the book hadn’t come highly recommended, I doubt I would have finished it.

The more I’ve thought about this, the more it has bothered me, until I realized that what the book presented, in essence, was violence in the same format as pornography, with detailed descriptions of mayhem in both the physical and the ghostly realms, and with just enough background to “justify” the violence. While I haven’t read enough of the genre recently – 30-40 books in the field annually, as opposed to the 300-plus I once read – to offer a valid statistical analysis, it seems to me that this is a trend that is increasing… possibly because publishers and writers are trying to draw in more of the violence-oriented gaming crowd. Then again, perhaps I’ve just picked the wrong books, based on the recommendations of reviewers who like that sort of thing.

And certainly, this trend isn’t limited to books. In movies, we’re being treated – or assaulted, depending on one’s viewpoint – to more and more detailed depictions of everything, but especially of mayhem, murder, and sexually explicit scenes. The same is true across a great percentage of what is classified as entertainment, and I’m definitely not the first commentator to notice it.

Yet… all this explicitness, at least to me, comes off as false. Older books, movies, and the like that merely hint at sex, violence, and terror – leaving the reader or viewer in the shadows, so to speak, imagining the details – have a “reality” that feels far more real than entertainment that leaves nothing to the imagination.

This lack of reader/viewer imagination and mental exploration also results in another problem: lack of reader understanding. I’m getting two classes of reader reviews on books such as Haze in particular – those from readers who appear truly baffled and those who find the book masterful. The “baffled” comments appear to come largely from readers who cannot imagine, let alone understand, the implications and pressures of a society different from their own experience and preconceptions… and who blame their failure to understand on the writer. The fact that many readers do understand suggests that the failure is not the writer’s.

All this brings up another set of questions. Between the detailed computer graphics of games, the growth of anime, manga, and graphic novels, and the CGI effects in cinema, whatever happened to books, movies, and games that rely on the imagination? A generation ago, children and young adults used their imagination in entertainment and reading to a far greater extent. The immediate question is to what degree the proliferation of graphic everything minimizes the development of imagination. And what are the ramifications for the future of both society and culture?

The Technology Trap

Recently, I read some reader book reviews of a science fiction novel and came across a thread that surfaced in several of the reviews, usually in a critical context.  I realized, if belatedly, that what I had read was an underlying assumption behind much science fiction and something that many SF readers really want.  The only problem, I also realized, is that what they want is something that, in historical and practical contexts, is as often missing as present.

What am I talking about?  The impact of technology, of course.

Because we in the United States live in a largely technology-driven, or at least highly technologically supported, society, there is an underlying assumption that technology will have a tremendous impact on society, and that every new gadget somehow offers an improvement to society. I have grave doubts about the second, but it’s the first I want to address – the idea that in any society, technology will triumph. I’d be the first to agree that one can define, to some degree, a culture or society by the way in which it develops and uses technology, but I’d have to disagree on the point that developing technology is always a societal priority.

Imperial China used technology, but there certainly wasn’t a priority on developing it past a certain point, and in fact, one Chinese emperor burned the most technologically advanced fleet in the world at that time.  The Chinese developed gunpowder and rockets, but never developed them to anywhere close to their potential.  As I’ve noted in a far earlier blog, the Greeks developed geared astronomical computers thousands of years in advance of anyone else… and never applied the technology to anything else.  Even the British Empire wasn’t interested in Babbage’s mechanical computer.  And, for the present, at least, western civilization has turned its back on supersonic passenger air transport, even though it’s proved to be technically feasible.

Yet, perhaps because many SF readers are enamored of technology, there seems to be an assumption among a significant fraction of readers that when an author does not explore or exploit the technology of a society and give it a significant role, at least as societal background, he or she has somehow failed in maximizing the potential of the world depicted in the novel in question.

Technology is only part of any society, and, at times, and in some places, it’s a very tiny part.  Even when it underpins a society, as in the case of western European-derived societies in our world, it often doesn’t change the societal structure, but amplifies the impact of already existing trends.  Transportation technology improves and expands the existing trade networks, but doesn’t create a new function in society.  When technology does change things, it usually does so by changing the importance of an existing structure, as in the case of instant communications.  And at times, as I noted above, a society may turn its back on better technology, for various reasons… and this is a facet of human societies seldom explored in F&SF and especially in science fiction, perhaps because of the myth — or the wish — that technology always triumphs, despite the historical suggestions that it doesn’t.

Just because a writer doesn’t carry technology as far as it might go theoretically doesn’t mean the writer failed.  It could be that the writer has seen that, in that society, technology won’t triumph to that degree.

Election Day… and the Polarization of Everything?

The vast majority of political observers and “experts” – if pressed, and sometimes even when not – will generally admit that the American political climate is becoming ever more polarized, with the far right and the far left refusing to compromise on much of anything.  For months now, the Republican party in the U.S. Senate has said “No!” to anything of substance proposed by the Democratic leadership, and in the health care legislation, for example, the Democrats effectively avoided dealing with any of the issues of interest to the Republicans, some of which, such as medical malpractice claims reform, have considerable merit.

Yet, if one looks at public opinion polls, most Americans aren’t nearly so radical as the parties that supposedly represent them, although recently that has begun to change, not surprisingly, given the continual public pressure created by the tendency of media news outlets to simplify all issues to black and white… and then to generate conflict, presumably to increase ratings.

Add to this the extreme media pressure placed on any politician who seeks a compromise or another approach outside of either party positions or his or her own past pronouncements, and we have a predictable outcome – polarization and stalemate.

There are times when stalemate may be preferable to ill-considered political action, but at present, there are a number of areas affecting the United States where some sort of action is and has been necessary.  A relative of mine just got her latest health insurance bill – over $1,000 a month for single-party coverage – and this wasn’t a gold-plated health plan by any means.  For two people, the premium would have been over $1,600 monthly, or over $19,000 a year.  Now… the median family income in the United States runs around $50,000 at present, and a $1,600 a month health insurance bill is over 35% of that – and doesn’t include deductibles and co-payments.  Single parent households have a median family income of  roughly $35,000, and $1,000 a month is more than a third of before-tax income.  These figures do tend to suggest that some sort of action on health care insurance was necessary, but the vast majority of one party effectively declared that they weren’t interested in anything proposed by the majority party, and the majority party effectively refused to consider any major issues brought to the table by the minority.  By parliamentary maneuvers, the majority slid through legislation thoroughly opposed by the overwhelming majority of the minority – and further increased the political polarization in Washington.
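For anyone who wants to check that arithmetic, here is a minimal sketch in Python, using only the approximate premium and income figures cited above rather than precise data:

    # Rough check of the premium-to-income arithmetic above.
    # All dollar figures are the approximations cited in the text, not official statistics.
    monthly_premium_two_person = 1600      # two-person coverage, per month
    monthly_premium_single = 1000          # single coverage, per month
    median_family_income = 50000           # approximate U.S. median family income
    median_single_parent_income = 35000    # approximate single-parent median income

    two_person_share = 12 * monthly_premium_two_person / median_family_income
    single_share = 12 * monthly_premium_single / median_single_parent_income

    print(f"Two-person premiums: {two_person_share:.0%} of median family income")    # roughly 38%
    print(f"Single premiums: {single_share:.0%} of single-parent median income")     # roughly 34%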

Similar polarization can be seen on other major issues, from immigration to energy policy and climate change legislation, and, of course, taxation.   One party wants to soak those who have any income of substance, and the other wants to reduce taxes so much that we’ll never dig our way out of the deficit.  Those who would suffer the greatest taxation don’t have enough to cover the deficit, and cutting or eliminating taxes, as some have proposed, would destroy us as a nation.

Tell me… exactly how does this polarization resolve anything?

Transformational… Reflective…?

In response to one comment on a recent blog, I noted that vocal music had changed over the last forty years, and another commenter made the point that languages evolve… both of which raised in my mind the question of the role art plays in societal evolution. Put bluntly, does art lead such transformations, or does it merely reflect them?  Or is it the usual mix of a little leading, and a great deal of reflection?

While I’m no art historian, it does appear to me that changes in the predominant or critically acclaimed styles of painting do not follow a pattern of gradual change, but occur irregularly and, at times at least, have preceded significant societal changes, as in the case of the rise of the impressionists or the modern art movement of the 1950s.

Music historians have placed classical music into periods, but how does one analyze the changes from one period to the next? Were giants such as Bach and Mozart so dominant in their mastery that they forced the composers who followed them to innovate? Beethoven’s great Ninth Symphony, which is unlike any other work of its time and, for that matter, unlike any of comparable quality for some time thereafter, was composed at a time when the “old order” had been restored. Was he reacting to the currents of past revolution, or anticipating the changes to come? It’s easy enough to say that such questions were irrelevant to Beethoven, except that it’s unlikely that any creative soul is impervious to the environment – particularly in Beethoven’s case, since the currents of politics swirled around Vienna throughout the period after 1800, when his most daring works were composed.

Popular music, especially in the United States, underwent radical changes in the 1960s, and significant societal changes also occurred.  Did they occur in tandem, or did the music reinforce the impetus for change?  Can anyone truly say?

Science fiction aficionados often like to claim that SF leads the way into the future, but does it?  Isaac Asimov did foresee the pocket calculator, but the success record of the genre is pretty weak, either in predicting or inspiring social and technological changes.  Almost 40 years ago, in my very first story, I predicted computer analysis and economic modeling, somewhat accurately, as it turned out, and cybercrime as well, and while cybercrime has indeed become a feature of current society, I never predicted the most predominant type.  I did predict institutional cybercrime of the general type that caused the last economic meltdown, and, so far as I can tell, that story was one of the first, if not the first, to suggest that type of crime, but… somehow… I don’t think my little story inspired it.  I just saw where technology and trends might lead.

But, of course, that leaves open the question… how much do the arts influence the future?

The Resurgence of Rampant Tribalism

Several pieces of historical and archeological “trivia” clicked together for me the other day. First was an event in the early history of the United States, when the Indians had had enough and decided to push the English out of New England – a conflict known as King Philip’s War, named for the young chief of the Wampanoag Indian tribe. Despite differing religious beliefs, the English colonists were united, while the Indians were fragmented into more than half a dozen local tribes, two of which, the Pequot and the Mohegan, supported the English. On top of that, at a point when the English colonists were having great difficulty, the neighboring Mohawk tribe, rather than support King Philip, attacked the Wampanoag.

The second piece of informational trivia was the recollection that one of the contributing reasons for the Spanish success against the Aztecs was that tribes conquered by the Aztecs united with the Spaniards. The third was an article in Archaeology revealing recent discoveries about the ancient Etruscans, one of which was that, despite their initial control of the central Italian peninsula and a higher level of technology than the Romans, in the end Rome triumphed, largely because the Etruscan cities could never form a truly unified nation. Greece is another example. The ancient Greek city-states never could form a unified nation – except briefly in short-lived alliances and then under the iron fist of Alexander – and, despite their comparatively advanced technology and civilization, ended up dominated by the Romans.

The largest single difference between a nation and a collection of tribes is that a nation is held together by an overriding set of common beliefs.  The United States began as a “tribal” confederation, but succeeded in unifying what amounted to regional tribes through the idea and principles of a federal republic… for a period of little more than sixty years before the beliefs of the southern “tribes” resulted in rebellion.  One of the contributing factors to the defeat of the South was the lack of cohesion between the “tribal states” of the southern confederacy, a lack exemplified by the fact that some southern railways had different gauge track systems from others – and it does get hard to move supplies when you have fewer railways and they don’t interconnect.

While history does not repeat itself in any exact fashion, patterns and “echoes” do, and one of the patterns of history is that large and unified countries almost always triumph over nations that are or resemble tribal confederations or over smaller nations.  Another pattern is that confederations or unions seldom endure.  They either merge into a nation of shared values, as did the United States, or they fragment, as did the former USSR.

The problem facing the United States, and the world, today is that tribalism is again becoming rampant, if more in the form of values, largely religious, that are increasingly intolerant of those with other values.  This tribalism, instead of seeking common ethical and practical grounds, manifests itself in demanding that those with other beliefs be repudiated, if not exiled or exterminated, and often demonizes those with comparatively minor differences in beliefs.

More than a few political scientists have theorized that this trend could conceivably, if unchecked, result in the political fragmentation of the United States into several nations. While I’m not that pessimistic, I do see that this tribalization has resulted in a growing failure of society and government and an increasing inability to deal with critical national problems, ranging from failing infrastructure to financial overcommitment and endless wars around the globe.

And… as another symptom… is it that surprising that one of the top-rated media shows is the “tribally-based” Survivor series?  More tribalism, anyone?

Beliefs… and the Future

Superficially, human beings differ to some degree, with variations in hair, eye, and skin color, as well as moderately differing musculature and size, but those external differences are as nothing compared to the differences in what we believe. Here, too, there are degrees of variation, generally, but not always, based on culture. That is, for example and for the most part, belief structures within Anglo-American culture fall along a certain spectrum, those in Middle-Eastern Islamic cultures fall along another, and belief structures in East Asian cultures follow yet another general spectrum. Obviously, the beliefs of any given individual may be wildly at variance with the cultural spectrum or norms of where that person lives, but by definition, as a result of cultural development, in most cases either a majority or, where no majority culture exists, the largest or most powerful minority tends to dictate cultural norms and beliefs.

One area of belief in which there is little variation among human beings is the belief that “what I believe is the ‘right’ belief, and everyone else should believe as I do.”  There is little variation in this internal dictum because thinking organisms who do not innately have such a guide tend to die out quickly.  The difference among humans does not lie in the first part of that dictum, but in the second, in how much tolerance an individual or a culture has for the beliefs of others.

Now… obviously, for any society to survive, there has to be a shared set of values… or chaos and societal dissolution, or revolt and disaster, will soon follow. But the question facing any society is which values absolutely must be shared and how that sharing will be enforced. Historically, such “belief” domination and values sharing has been established not just through cultural and religious pressure, but through force, including, but not limited to, war, genocide, and economic, political, legal, and social discrimination.

In addition, groups who see their values threatened have a tendency to protest and oppose the loss of those values, often with great violence. Today, much of the Islamic Middle East feels enormously threatened by the secular, less gender-role-driven, and materialistic western European value structure. In the United States, in particular, fundamentalist Christian faiths clearly feel threatened and angered by beliefs that run counter to their views on such issues as abortion and marriage, and one well-known writer has gone so far as to suggest that he will oppose any U.S. government that creates a legal definition of marriage counter to the “traditional” one of a man and a woman.

This “need” for values domination has often been carried to extremes by individuals, groups, and even governments who have happened to believe that the world only belongs to the “chosen people” or  “the master race” or “those who can afford it” or some other exclusive definition… almost always with disastrous results and extremely high loss of life.

Enter technology.  Technology requires certain shared values.  It also creates great dissemination of knowledge, as well as being an extremely effective tool for indoctrination and communication.  These factors, as well as a number of others, threaten many “traditional” values.  At the same time, the higher the level of technology, the greater the need for certain core shared values, that is, if one wants to keep that technology operational in a world that is getting smaller and smaller.

The additional problem today is that, like it or not, small groups, even individuals, and certainly governments all have the power to create large-scale disasters, with violent societal and physical disruptions, either to impose their values or to rebel against the imposition of other beliefs and values. Moreover, as recent studies have begun to indicate, as such disruptions escalate, another group of individuals enters the dynamic – what one might best call “opportunistic terrorists,” who use similar tactics either for commercial profit, as the Latin and South American drug cartels do, or for personal fame, or simply because they enjoy acts of terrorism.

In my view, and it is only my opinion, human society as a whole faces three possible futures:  (1) technological collapse because the values conflicts cannot be resolved; (2) the gradual imposition of  shared values through indoctrination and commercial and political pressure, as is happening in China today, and to a lesser degree in western cultures; or (3) greater understanding and cooperation in working out a “core values” framework that will allow a range of differing beliefs around the world.

The way matters are going right now, it appears options one and two are fighting it out, because no one wants to compromise enough to give option three a chance.

In Praise of Poetry – True Poetry

The other day I was reading a well-known “literary” periodical with large circulation… and I noticed something… and then I read another periodical of the same ilk – and I noticed the same thing.  So I went back, both through the various magazines, as well as my memory, and realized that, no indeed, my memory was not playing tricks on me.

And what was it that I noticed?  I’ll get to that… in a moment.

But first… poetry.  According to A Handbook to Literature, “The first characteristic of poetry, from the standpoint of form, is rhythm…”  The rather lengthy definition also notes that poetry is “characterized by compactness, intense unity, and a climactic order,” expressed with the vital element of concreteness and noting that one of the strengths of Shakespeare’s poetry is that almost every line “presents a concrete image.”

Many years ago, both when I studied poetry and later published some in long-vanished small magazines, there were still poets who believed and worked along those lines, who regularly wrote sonnets, sestinas, villanelles, and other strict poetic forms and who understood and could work with a range of metric forms and rhythms.  And because I tend to appreciate the beauty of language and form, those are the poets whom I read and praise… and the kind that I still seek and seldom find.

Most of what is published as poetry today, even by many publications with literary credentials and pretensions, is what one critic [whose name I can’t recall, or I’d cite him or her] called “greeting card free verse,” devoid of strict (or even loose) metrics. And much of the popularity of current so-called poetry rests on the spoken presentation of the work, rather than upon the structure and the words themselves. Great poetry should not require a great speaker; it should sound great and shake the mind when recited by anyone of average intelligence and speech.

This trend toward greeting card verse and the emphasis on presentation rather than substance is certainly why I take out my well-worn copies of William Butler Yeats, T.S. Eliot, Wallace Stevens, W.H. Auden, Dylan Thomas, and William Shakespeare, among others, when I wish to read poetry. And yes, when I go to various bookstores, I do browse through the “current” poetry sections… and carefully replace the books I’ve perused on the shelves. Now… I won’t claim that there’s no one out there who’s actually writing full-fledged poetry, but I will claim, based on a fairly wide reading habit, that there certainly aren’t many “poets” published today who merit the title by the standards of the past.

As for what I noticed in those “literary” publications… it was that none of what was published as poetry in the issues I read or could find in recent months would have been called poetry until the last half century or so.  Robert Frost once made the observation that writing so-called free verse was like playing tennis with the net down.  Almost anyone could do it and call themselves a poet.

And that is why I praise the great poets who could and can encapsulate vivid images and meaning in rhythmic, rhymed forms without sounding stilted or forced and with words whose sounds, allusions, and connotations stir the mind and soul.

Those who can do all that… they are true poets.

English… Please

There’s a growing, if underground, backlash against bilingualism in the United States, against the proliferation of directions and instructions in languages other than English, against ballots printed in Spanish, against ATMs with foreign language options.  Yet, from what I’ve observed, while I do believe that the legal language of the United States is and should remain English, so many of those who demand action or legislation to reinforce this are missing the linguistic boat.

The United States is indeed a nation of immigrants, and all too many youngsters today seem to have lost some of the skills of their parents. For example, fewer and fewer of them can adequately write the language written and spoken by their parents. This wouldn’t be such a loss… except that the language I’m talking about is English, American English in particular. And I’m also not talking about young people from disadvantaged backgrounds. I’m referring to the vast majority of white high school graduates from “good” urban or suburban high schools.

This linguistically disadvantaged majority – and actual tests of proficiency in English reading, writing, and comprehension clearly show this lack of ability – does not know basic grammar, basic spelling, or the construction and use of its native tongue. This spills over into everything, from essays to business correspondence, from newspaper and magazine articles even to headlines, not to mention blogs and advertisements. The number and percentage of grammatical and spelling errors in publications have increased dramatically. I’ve gone back and checked older publications, and such lack of skill and care either didn’t exist or was caught by editors and proofreaders.

The same lack of precision in language permeates popular music – assuming one can even decipher the abysmal diction of most singers in order to suffer through grammatical inaccuracies and debasement of a once-proud language.  In point of fact, it’s amazing to realize that the music once considered almost degraded and backwoods-derived – country music, to be exact – is perhaps the only form of current popular vocal music where the majority of the lyrics can actually be understood.

Yes… a small percentage of Americans continue to write well and skillfully, but that proportion is declining every year, paradoxically at a time when recent studies show that the mastery of language equates directly to the mastery of thought and ideas.  Might it just be possible… just possibly… that the decline in the ability of Americans to articulate and understand the complexities of our society lies in the decline of their linguistic abilities?  Mastery of language is not merely the knowledge of vocabulary, but the ability to construct sentences that are clear and logical, and to understand those that are logically complex.  In short, clear thinking requires a good command of language, and there’s definitely a shortage of clear thinking today.

Why are simplistic political or commercial sound-bites so successful?  Is it because the euphony of simplicity appeals so much more readily to those who are linguistically disadvantaged?  Or because those whose language skills have atrophied or were never developed have difficulty in understanding anything more complex?

Whatever the reason, the English-only partisans seem unwilling and unable to understand that they’re well on the way to losing their battle… and they’re losing it from within.

While citing history is usually doomed to failure, because so few understand its parallels, or want to, I will point out that Latin was once the language that ruled the world.  As it became debased, so did Rome… to the point where Latin is a dead language, and Italian bears but a passing resemblance to the language it replaced… and… oh… Italy couldn’t even reunite itself until more than 1,900 years after the death of Julius Caesar.

The Cult of Self and the Decline of Manners

Last weekend, we went to a party, one that marked a significant set of dates in the lives of some friends and one to which we were invited with a large engraved invitation. I did note a phrase at the bottom which read, “Cocktail Attire.” Now, it may be that I come from a very conservative background, but to me that suggested a coat and tie at the very least, and apparel of a similar nature for my wife – which indeed we did wear. The event was catered and featured an array of excellent foods, from appetizers to desserts, and a range of beverages from water to expensive liquors and champagnes. Each couple, or individual, was given a set of wineglasses with the dates and the symbols in gold lettering.

But frankly, I was appalled at what many of the guests wore – faded jeans and polo shirts, women in beach capris.  I will admit I didn’t see any tee-shirts and short-shorts, but that was more likely due to the fact that the temperature was in the high 60s than to the taste, or lack of it, on the part of some of those attending. At one point, a famed and world-class pianist performed… and almost no one listened or moderated their conversations, even after the host asked for quiet.

What was even more surprising to me was that none of those attending would have been considered less than substantial members of the community.  The guests included doctors, lawyers, accountants, university officers and professors, prosperous ranchers, business professionals, and the like.  Exactly what did perhaps a quarter of those attending fail to understand about “cocktail attire”?  And if they did not wish to dress for the occasion, there was no need to attend.  It certainly wasn’t even an indirectly compulsory event.

This sort of behavior isn’t limited to events such as these.  Even after warnings that cell phones, cameras, texting, and the like are prohibited at local concerts, there are always those who still persist in electronic disruptions – or other disruptions – of  performances, and despite stated policies against bringing infants to performances, there are still would-be patrons who protest.

All of these instances, and many more, reflect a lack of courtesy and manners. Dressing appropriately for an event equates not only to manners, but also to respect for those giving the event. Being quiet in an audience is a mark of respect for the performers.

So… why are so many people – especially those who, from their levels of education and their professions, should know better – so ill-mannered and often disrespectful? Part of it may be that, frankly, their parents failed to teach them manners. Mostly, however, I think it is the growth of the cult of self – the idea that each person is the center of his or her universe and can wear what he or she wants whenever he or she wants to, and say what he or she likes whenever he or she pleases. Yet these same individuals can become extremely bellicose if anyone ever suggests that their behavior infringes on someone else’s freedom to speak, etc. The parents who insist that their children be respected by a teacher are all too often totally disrespectful of the teacher. Then there are the citizens who demand that law enforcement officers be civil and respectful under the most trying of circumstances, but who are anything but that when stopped for traffic or other infractions. Or the customers who would bridle at the slightest hint of frustration from a sales clerk, but who have no hesitation about berating those clerks over matters beyond the salespeople’s control.

This goes beyond personal interactions as well, so that we have a political arena filled with name-calling, misrepresentation, and hatred.  I’m not saying that we should all agree, because we never will on all matters, but we might well have a more livable world if we remembered that not a single one of us is the center of the world and that shouting at someone is only going to make them want to shout back.  Manners were developed in order to reduce unnecessary conflict and anger, and it’s too bad that all too many people seem to have forgotten that.

The Arrogance of Religious Leaders

On Sunday, Boyd K. Packer, the President of the Quorum of the Twelve Apostles of the Church of Jesus Christ of Latter-day Saints, thundered forth against the “immorality” of same-sex attraction and declared that the only marriage was that of a man and a woman and that such marriage was one of “God’s laws.” Packer went on to equate this “law” with the “law of gravity” by stating, “A law against nature would be impossible to enforce. Do you think a vote to repeal the law of gravity would do any good?” While some members of Congress might well try that if they thought it would get them re-elected, I find Packer’s statements not only chilling in their arrogance, but also typical of the ignorance manifested by so many high-profile religious figures.

Like it or not, same-sex attraction has been around so long as there have been human beings.  The same behavior pattern exists in numerous other species of mammals and birds.  What Packer fails to grasp, or willfully ignores, is that laws of nature aren’t violated.  The universe does not have large and significant locations where gravity [or Einstein’s version of it] doesn’t exist, and there certainly haven’t been any such locations discovered on Earth.  Were the heterosexual behavior that Packer extols actually a “law of nature,” there would be no homosexual behavior, no lesbian behavior.  It couldn’t happen.  It does.  Therefore, the heterosexual patterns demanded and praised by Mormon church authorities are not God’s inflexible laws; they’re codes of behavior created by men [and except for Christian Science, pretty much every major religious code has been created by men] attempting to discern a divine will in a world where there is absolutely no proof, in the scientific sense [regardless of the creationist hodgepodge], that there even is such a supreme deity. God may exist, or God may not, but actual proof is lacking.  That’s why religious systems are called “beliefs” or “faiths.”

Thus, to assert that a particular code of human behavior is “God’s law” is arrogance writ large.  For a Mormon church authority to do so, in particular, is not only arrogant, but hypocritical.  Little more than a century ago, the Mormon culture and beliefs sanctioned polygamous relationships as “God’s law.”  Well… if God’s laws are immutable, then why did the LDS Church change them?  If the LDS Church authorities recognized that they were wrong in the past, how can they claim that today’s “truth” is so assuredly God’s law?  What will that “truth” be in a century?

While Newton’s “law of gravity” has been modified since its promulgation centuries ago, it still operates as it always did, not as men would have it operate, unlike so many of the so-called laws of God promulgated by men.  Since time immemorial [human time, anyway], humans have exhibited a range of sexual attractions and practices.  Like it or not, those suggest that the laws of nature, and presumably of God, for those who believe in a supreme deity, not only allow, but require for at least some people, differing sexual attraction.  Societies may in fact need to, and should, prohibit cruel and depraved practices, such as those involving unwilling participants or children… but to declare that one set of sexual customs is the only acceptable one, under the guise that it is God’s law, remains arrogant, ignorant, and hypocritical.

The Leadership Problem

Political, organizational, and corporate leaders are  either outsiders or insiders.  Insiders who rise to leadership positions almost always do so by mastering the existing structures and ways of doing things.  In short, the best of them do what has always been done, hopefully better, while the worst cling to the most comfortable ways of the past, often rigidly enforcing certain rules and procedures, whether or not they’re the best for the present times.

On the other hand, outsiders who become leaders of established organizations or institutions are generally far more open to change.  In addition, such leaders carry with them ideas and practices that have worked in other settings.  As a result, as I’ve observed over the years, both in government and business, “outsider” leaders all too often impose changes without any understanding of the history and processes that created the practices and procedures that worked in the past for the organization…and that still do, even if not so well as the leader and those the organization serves would like.

Like it or not, there are reasons why institutions behave the way they do, and a leader needs to understand those reasons and the conditions that created them before attempting to make changes.  Also, at times, the environment changes, and the impact of those changes affects behavior.  One of the greatest changes in the political environment in the last century has been the combination of the electronic information revolution with the pervasiveness of the media.  The end result has been to make almost any sort of political compromise impossible, as witness the recent electoral defeats of politicians who have attempted or supported compromise.  While “purists” attack and condemn any politician who even attempts a compromise political solution, governing is difficult, if not impossible, without compromise, since most nations, especially the United States, are composed of people with differing interests.

Thus, a political leader who wishes to hold on to power cannot compromise, at least not in any way that the media can discover, but since actual change requires at least partial support from those with other views, any leader who manages change effectively destroys his own power base.

In the corporate world similar factors play out, with the major exception that a corporate leader is under enormous pressure to maintain/increase market share and profits.  So is every division head under that leader, and, as I’ve observed, time after time, subordinates are all too willing to implement changes that benefit their bottom line but increase the burdens and costs on every other division/part of the organization.  Likewise, I’ve seen so-called efficiency/streamlining measures imposed from the top end up costing far more than the previous “inefficiencies” because all too many organizational leaders failed to understand that different divisions and/or subsidiaries had truly different cost structures and needs and that “one size does not fit all.”

In the end, a great deal of the “leadership” problem boils down to two factors: lack of understanding on the part of both leaders and followers and the unwillingness/inability to compromise.  Without understanding and compromise, organizations…and nations… eventually fragment and fail.

Corruption [Part III]

According to recent news reports, a significant amount of the damage caused by the flooding in Pakistan may well be the result of pressure on officials not to breach certain dams in order to release the flood waters into a designated flood plain – because individuals and families of the elite who were well-connected were using the flood plain to grow cash crops and didn’t want to lose their investment.  In short, these individuals pressured an official to do something to their benefit and to the detriment of millions of small farmers who had no such influence.

Corruption?  Certainly, at least one news story played it that way.

But what is corruption exactly?  Is it the use of money or influence to gain special favors from officials that others cannot obtain?  Is it using such influence to avoid the restrictions placed on others by law?

Are such practices “corruption” if they are widely practiced in a society and if anyone can bribe or influence an office-holder or law enforcement official, provided they have enough money?  What is the ethical difference between a campaign contribution and a direct bribe to an elected official?  While one is legal under U.S. law, is there any ethical difference between the two?  Aren’t both seeking to influence the official to gain an advantage not open to others?

And what is the ethical difference between hiring a high-priced attorney to escape the consequences of the law and bribing a police officer to have the charges dismissed… or never brought?  In the USA, such bribes are illegal and considered corrupt, but those with fame and fortune hire legal champions to effect the same end… with means that are legal.  So Paris Hilton and Lindsay Lohan and others escape the legal consequences of their actions – or get off with wrist slaps – while those without resources serve time.

In legal and “official” terms, Northern European derived societies generally have the least “permissive” definition or outlook on what they term corruption. But are these societies necessarily more ethical – or do they just have more rules… and perhaps rules that restrict how money and influence can be used to accomplish personal ends?  Rules that limit what most individuals can do… but not all individuals?

Under the current law – at least until or unless Congress finds a way to change it – corporations now have the right to spend essentially unlimited funds to campaign for legislative changes during an election. As I read the Supreme Court decision, corporations can’t directly say that Candidate “X” is bad because he or she supports or opposes certain legislation, but they can say that any candidate who does is “bad.”  In effect, then, U.S. law allows unlimited funding to influence public policy through the electoral process, but strictly forbids the smallest of direct payments to office holders.  One could conclude from this that the law allows only the largest corporations to influence politicians.  If corruption is defined as giving one group an unfair advantage, isn’t that a form of legalized corruption?

But could it just be that, in ethical terms, corruption exists in all societies, and only the definition of corruption varies? And could it also be that a society that outlaws direct bribery of officials, but then legalizes it in an indirect form for those with massive resources, is being somewhat hypocritical? In the USA, we can talk about being a society of laws, but we’ve set up the system so that the laws operate differently for those with resources and those without. While I’m no fan of the Tea Party movement, this disparity in the way the “system” operates is another factor behind that movement, one that, so far, has not been widely verbalized. Yet… who can blame those in the movement for feeling that the system operates differently for them?

Double Standards

Recently, there was a sizable public outcry in the great state of Oklahoma.  The reason?  A billboard.  It was just a standard oversized highway billboard that asked a question and provided a website address.  But the question was: “Don’t believe in God?”  Following that was the statement, “Join the club,” with a website for atheists listed. The outcry was substantial, and that probably wasn’t surprising, since surveys show that something like 80% of Oklahomans are Christians of some variety.

There is another side to the issue, of course.  You can’t drive anywhere, it seems to me, without seeing billboards or other signs that tout religion.  And there are certainly hundreds, if not thousands, of religious programs on television, cable/satellite, and radio.  Why should so many people get upset about atheists advertising their “belief” and reaching out to others who believe there is no supreme deity?  Yet many religious people were calling for the removal of the message, claiming it was unChristian and unAmerican.  UnChristian, certainly, and, I suppose, unIslamic, unHindu, etc…. but unAmerican?  Not on your life, not while we live under a Constitution that provides us with a guarantee of the freedom to believe what we wish, or not to believe.

The double standard lies in the protesters’ belief that it’s all right for them to champion their beliefs publicly and to seek converts through the public airwaves and billboards, but not to allow the same to those who disavow a supreme deity.

Unhappily, we live in the age of double standards.  Those who champion subsidies and “incentives” for business, but who oppose earned income tax credits or welfare, practice a double standard as well.  For all the rhetoric about corporate incentives creating jobs, income supports for the poor create jobs too, and neither is as effective at doing so as its respective supporters claim.  But… arguing for one taxpayer-funded subsidy and against another on so-called ethical or moral grounds is yet another double standard.

Here in Utah, the governor has claimed that he’s all for better education, but when his opponent for the office suggested a plan to toughen high school graduation requirements, the governor opposed it because it would limit the “release time” during the school day that allows LDS students to leave school grounds and attend religious classes at adjoining LDS seminaries – and then blasted his opponent for sending his children to parochial schools.  Wait a minute.  Using the schedules of taxpayer-funded schools to essentially promote religion is fine, but spending your own money (and saving the taxpayers money to boot) to send a child to a religious school is somehow wrong?  Talk about a double standard.

Another double standard is the legal distinction between crack and powdered cocaine: the penalties for the powdered form are far less stringent than those for crack, yet the powdered form is favored by celebrities such as Paris Hilton, while crack is more the province of minorities and the residents of poorer areas.  I may be misguided, but it seems to me that cocaine is cocaine.

I’ve also noted another interesting trend in the local and state newspapers.  Crimes committed by individuals with Latino names seem to get more coverage, and more prominent positioning in the same issue of the paper, than what appear to be identical crimes committed by those with more “Anglo” surnames. Coincidence?  I doubt it.  While it may be more “newsworthy,” in the sense that reporting that way increases sales, it’s another example of a double standard.

Demanding responsibility from teachers, but not from students, a practice I’ve noted before, is also a double standard.  So is the increasing practice among colleges and universities of requiring better grades and test scores from women than from men in order to “balance” the numbers of incoming young men and women.  Whatever the rationale, it’s still a double standard.

Going into Iraq theoretically to remove an evil dictator and to improve human rights, but largely ignoring human rights violations elsewhere, might be considered a double standard – or perhaps merely a hypocritical use of that rationale to cover strategic interests… but why don’t we have the courage to say, “Oil matters to us more than human rights violations in places that don’t produce goods vital to us”?

Double standards have been a feature of human societies since the first humans gathered together, but it seems to me that the creativity used in justifying them increases with each passing year.  Why is it that we can’t call a spade a spade… or a double standard just that?

The Coming Decline and Fall of American Higher Education?

The September 4th edition of The Economist included an article/commentary entitled “Declining by Degree” that effectively forecasts the collapse of U.S. higher education, citing a number of facts and trends I’ve already mentioned in previous blogs and adding a few others.  For example, an American Enterprise Institute study found that in 1961 U.S. students at four-year colleges studied, on average, 24 hours a week; today they study only 14.  While U.S. household income has grown by a factor of 6.5 since then, the cost of attending an in-state public college or university has increased fifteen times, and the cost of private universities, pricey even in 1960, has increased thirteen times – which means that, relative to household income, the cost of a public college education has more than doubled.  Yet educational outcomes are no better, and fewer than 40% of all students graduate in four years.

While the commentary identifies many of the causal factors I’ve mentioned, such as incredible administrative bloat and the building of elaborate facilities not directly related to academics – football stadiums and lavish student centers, for example – it attributes the faculty problems of “indifference to student welfare” and inflated grades to faculty preoccupation with personal research and scholarship.  I’d agree that there is considerable institutional indifference to student welfare, despite all the inflated claims and protestations to the contrary.  Based on my own years of teaching and more than twenty years of observing my wife and several offspring who teach at the university level, however, that indifference generally does not come from the individual faculty member, but from the combination of administrative, parental, and student pressures that most faculty – especially non-tenured, tenure-track junior professors – are unable to withstand if they wish to keep their positions.

Like it or not, grades have become the sine qua non for entry into graduate programs or jobs, and, also like it or not, virtually all university professors are judged in large part on how good their student evaluations are – and, according to studies, the higher the grades a professor gives, and the less demanding the student workload, the better the student evaluations.  The other principal aspect of gaining and retaining tenure – especially now that more and more universities are instituting post-tenure review – is the faculty member’s scholarship and/or research.  In addition, to cope with the incredible increase in tuition and fees, more and more students are working part-time or even full-time and/or taking out significant student loans, which they intend to pay back by landing a high-paying job after they finish their education – and they see that pay as determined, at least in part, by their grades and class ranking.  As an illustration, an incoming student at my wife’s university inquired about the percentage of As granted in each class for which he was registered – and immediately dropped the hardest class after the first week.  He wasn’t the only one; it’s a pattern that faculty members recognize and note year after year.

The combination of these pressures effectively tells faculty that their own welfare is determined by their popularity and by their scholarship and research, not by how well they prepare students.  For at least ten years, the vast majority of professors I’ve known who require in-depth preparation and learning on the part of students have had to resist enormous pressure from their superiors, and sometimes even from colleagues, not to be “too hard” on the students.  Under these circumstances, it’s not hard to see why American college graduates are, as a whole, less prepared than their predecessors, why more than half of graduating college seniors are effectively only marginally literate… or why The Economist foresees the coming decline of U.S. universities.  You can’t put professors in a situation where, to improve student performance, they effectively have to destroy their own futures, and then expect the vast majority to be that self-sacrificing.

The problems and trends are indeed real, but, as in so many cases that I discuss, few want to look at the root causes.

Let’s Try This Again

A while back, I commented on the fact that one of the problems with all the education “reformers” was that virtually all the rhetoric and effort were concentrated on teachers and schools, but primarily on teachers.  In recent weeks, there have been new programs, press interviews with the Secretary of Education, Arne Duncan, and with the national head of the teachers’ union, not to mention all sorts of other commentary timed to coincide with the beginning of the new school year.  And what do we continue to hear?  It’s all about how getting better and more inspiring teachers will improve education.

Who can disagree with that?

Except… it’s only focusing on half the problem.  It’s like saying that a good coach will always have a good team, no matter what sort of players the coach has, no matter what their background and motivation are.  That is, pardon me, bullshit.  Good teams require good coaches and good players.  Likewise, good education requires good teachers and good students, and unlike coaches, teachers don’t have the luxury of selecting and educating only the best students.  Putting all the focus on teachers, especially at a time when teachers have less and less respect from students and parents and, frankly, fewer and fewer tools to maintain discipline in a culture that has multiplied manifold the possible distractions and student problems, is not only unrealistic, but short-sighted.  Placing all the responsibility on the teachers is, however, far more politically and personally attractive than addressing the “student problem.”

What almost all of these “reformers” overlook are some of the key reasons why private schools and the best charter schools have better records in improving student performance.  Beyond having better teachers, those schools have parents who are more involved and who play a far greater role in demanding more of their children.  Moreover, disruptive and uninterested students can be dealt with, and removed if they don’t improve their behavior.  In short, such schools address student motivation and aspiration, and provide a supportive and disciplined structure for learning.

The other problem with focusing on teachers is that the growing emphasis is on test scores and their improvement.  Teachers tend to oppose this focus – and for very good reasons.  No matter how good the teacher, a classroom composed of inner-city students with poor educational backgrounds and difficult personal situations will not progress as fast as one composed of the best and most highly motivated students in the school.  What amount of progress marks a “good” teacher?  It’s easy enough to identify a terrible teacher, but an excellent teacher may put more effort and skill into producing a modest improvement with a difficult class, while a merely competent teacher may show greater improvement with a less educationally challenged class.

In addition, excessive test-oriented teacher evaluation creates pressure to “teach to the tests,” rather than pressure to teach students how to learn.  That places still more emphasis on teacher behavior and short-term test results, rather than on the long-term needs and requirements of the students.

So… when are we as a society, especially the educational reformers, going to address the entire spectrum of problems with education, rather than placing the entire responsibility on the teachers?

Communications Technology – The Path to Devolution?

One of the key elements in human society and human relations is the capacity for person-to-person communication.  People who have trouble reading emotions and responding appropriately to them – whether because of a genetic factor, such as Asperger’s Syndrome or autism, or because of brain injury – are severely disadvantaged.  Humans are social beings.  In interacting with others, we learn to read people’s body language, their tone of voice, the minute expressions in their eyes, and scores of other subtle signals.  These skills are increasingly vital in a complex society because, frankly, the majority of people don’t understand the technology and the institutions around them.  What most people are left with is their ability to read other people.  In addition, one of the factors that reduces hatred and conflict is empathy with others, and that empathy is generated through face-to-face experience.  Electronic technologies, particularly cellphones and hand-held texting devices, are expanding to the point where they’re largely replacing face-to-face and even voice communications.  Texting, in particular, removes all personal interaction from communication, leaving only a written shorthand.

High school and college students walk around with earbuds in all the time, ignoring those around them – sometimes fatally, as when they walk in front of light-rail trains, cars, and buses.  But that’s not the only danger.  The excessive volume at which such devices are played, perhaps boosted to isolate the listener from everyone else, has resulted in permanent hearing loss in roughly 20 percent of the teenaged population of the United States.  In addition, the self-selecting effect of electronic communications removes or limits interaction with people who are different – at a time when, in the United States in particular, cultural homogeneity is giving way to a multicultural society.  Perhaps some of the impetus for electronic isolation or segregation is a reaction to that trend, because a less homogeneous society represents unpleasant change for some… but ignoring that change behind the filter of self-selecting electronic social networking does nothing to address a growing cultural and communications gap.

The vast majority of users of Facebook and MySpace and other social networking sites reveal all sorts of personal information that can prove incredibly helpful to identity thieves, information that most people would balk at telling to casual acquaintances – yet they post it on networks for other users – and hackers across the world – to see and use.

Likewise, for all the rhetoric about multi-tasking, study after study has shown that multi-taskers are less efficient than “serial-taskers” and that, in many cases, such as texting while driving or operating machinery, multi-tasking can prove fatal.  Equally important, but more often overlooked, is the fact that electronic multi-tasking erodes the ability to concentrate and to undertake and complete tasks that require sustained, continuous effort.  In essence, it can effectively create attention-deficit disorder.

Add to that the fact that even email is becoming a drag on productivity because all too many supervisors use it to demand more and more reports – and those reports only detract from more productive efforts.

So again… why do we as a society tout, rush to buy, and gleefully employ electronic equipment that is ruining our hearing, reducing our ability to assess others – and thus handicapping our decision-making while amplifying tendencies such as negative stereotyping – seducing us into often dangerous patterns of behavior, increasing the chances of costly identity theft, and reducing the productivity of millions of Americans?  Or, put another way, why are we as a society actively promoting and advocating technology that will effectively replicate the effects of handicaps such as Asperger’s Syndrome or attention-deficit disorder?

If Islamic terrorists released a virus that accomplished these ends, we’d consider it an act of war… but we seem to be doing it all on our own – and, at the same time, denouncing anyone who suggests that all this personal and social-networking high-tech communication isn’t in our best interests as a technophobe, a “dinosaur”… or someone “not with the times.”

But then, thoughtful consideration seems to be one of the first casualties of extreme technophilia.