Archive for the ‘General’ Category

When Elites Fail…

Like it or not, every enduring human civilization has had an elite of some sort. By elite, I mean the relatively small group – compared to the size of the society – that directs and controls the use of that society’s resources and sets that society’s goals and the mechanisms for achieving or attempting to achieve those goals.

Historically, and even at present, different countries have different elites, based on military power, economic power, political power, or religious power, or combinations of various kinds of power, and as time passes the composition of those elites tends to change, usually slowly, except in the cases of violent revolution. In general, the larger the country, the smaller the elite in proportion to the total population. In addition, the work of the French economist Thomas Piketty also suggests that economic inequality is the historical norm for most countries most of the time.

Since elites are a small percentage of the population, the members of the elite need a means of control. In the United States, that means has been largely economic from the very beginning. Initially, only white males could vote, and effectively, only white males of propertied status could afford to run for office, where they associated with others of similar standing. What tends to get overlooked by many about the Civil War is that, for the southern elite, the war was entirely economic. Slaves were a major form of wealth, and without that slave “property” many of the great southern plantations were essentially bankrupt. Thus, the southern elites were fighting for the preservation of their unchallenged status as elites.

The rapid industrialization of the United States changed the economic and social structure: the number of small farmers was gradually but inexorably reduced, with a concomitant growth in factory workers, who were initially, in practice, little more than wage slaves, especially child and female workers. The growing concentration of wealth and power in the “robber barons,” such as Astor, Vanderbilt, Carnegie, Gould, Mellon, and others, without a corresponding increase in the worth and income of the workers, was one of the factors behind the candidacy of William Jennings Bryan for the presidency in 1896, as exemplified by his statement to the Democratic National Convention that “The man who is employed for wages is as much a businessman as his employer…” From there Bryan went on to suggest that the Republican candidate [McKinley] was basically the tool of the monied interests, concluding with the famous line, “You shall not crucify mankind upon a cross of gold.” But Bryan lost the election by 600,000 votes after industrialist Mark Hanna raised huge contributions from industry.

With McKinley’s assassination in 1901, Theodore Roosevelt became president, and over nearly eight years he pushed through a host of reform measures that improved public health and working conditions and that restricted, and sometimes eliminated, monopoly powers; his successor, William Howard Taft, continued those efforts. In 1907, when a financial panic threatened to bring down the entire U.S. financial system, Roosevelt and his Treasury Secretary worked with financier J.P. Morgan to stave off the crisis. These efforts, and an improved economy, defused much of the working and lower-middle-class anger.

Roosevelt, however, wasn’t so much a supporter of the working class as what might be called a member of the “responsible elite,” a man who felt that business and power had gone too far.

In contrast is what happened in Russia. People tend to forget that in the early 1900s Russia was the fifth most powerful economy in the world, but unlike Roosevelt and Taft, Czar Nicholas II and the Russian aristocracy continued to bleed the small middle class, the workers, and the serfs, resulting in continued revolts and unrest. Nicholas agreed to the creation of a parliament [the Duma] and then did his best to eliminate or minimize what few powers it had. And, in the end, the old elite lost everything to the new elite, whose power was based on sheer force rather than a mixture of money and force.

There are more than a few other examples, but what they tend to show is that all societies have elites, and that those elites control society until they become incompetent… and another elite takes power.

From what I’ve observed, it appears that an increasing percentage of the American people is anything but pleased with all too many members of the current American elite, especially with business executives, the media, and politicians, and that most of those visible elites seem almost dismissive of or oblivious to that displeasure… and, more important, unwilling to deal with the root causes of that displeasure, except with words and, so far, empty promises.

Supporting the Short Stories…

Most of my readers, I suspect, associate my name with books that are, shall we say, substantial in length and scope. Some may know that I occasionally have written shorter works, and a few may recall that a long, long time ago, for the first ten years of my writing career, I only wrote short fiction.

At present, I’ve written and had published forty-five short works of fiction, mostly short stories, but including two novellas, and that total doesn’t include the novella I later expanded into a novel. By comparison I just turned in the manuscript for my seventy-fourth novel [Endgames, the sequel to Assassin’s Price].

Back in 1972, when I’d just sold my very first story to ANALOG, I had no idea of ever writing a novel, and I might never have written one if I hadn’t essentially been forced to by Ben Bova, the then-editor of ANALOG, who rejected another story of mine (one of many that were rejected) with the note that he wouldn’t consider another story of mine until I wrote a novel, because he felt I was primarily a novelist, rather than a short story writer. That was an incredibly perceptive observation because he’d never seen any work of mine in excess of a few thousand words.

I took his advice, and as the cliché goes, the rest was history… and lots of novels. But I never lost the love of short fiction, and occasionally wrote a story here and there, usually, but not always, by request for anthologies. But stories, even brilliant ones, cannot sustain a writer in this day and age, as they could in the 1920s and even into the 1940s. I did a rough calculation, and all of my earnings from short fiction, including the two book collections, total roughly half of what I now receive for a single fantasy novel.

This is an example of why, so far as I’ve been able to determine, there are essentially no full-time F&SF short-story writers making a living wage. So I was very fortunate to have gotten Ben’s advice and just smart enough to have taken it… and equally fortunate that readers have liked the books I’ve written.

All of which brings me to another point. As I mentioned earlier, I’ve agreed to write a story for a Kickstarter anthology from the small press Zombies Need Brains, entitled The Razor’s Edge. The neat thing about the anthology is that half the stories are written by name authors and the other half are selected from open submissions. I’ve finished the first draft of the story, and that’s good, because it takes me much longer to write short fiction, but it won’t see print unless the Kickstarter is funded, which it isn’t at present. If it isn’t funded, you also won’t see new stories from other favorite authors, and, even more important, new authors won’t be given a chance.

Yes, I’ll be paid, but it’s not much, and I wrote the story for the story, not for the very modest sum – and that’s definitely true for pretty much all the name authors. So… if The Razor’s Edge is something you might like, or if you want to give some up-and-coming authors a chance, pledge something at the Kickstarter [ The Razor’s Edge Kickstarter ]. I’ll appreciate your efforts, and so will a few new authors, some of whom might graduate to writing big thick books that you might also like in the future.

Preconceptions

There’s the old saying that goes “it isn’t what you don’t know that gets you in trouble, but what you know that isn’t so.” All too often what we know that isn’t so lies in the preconceptions that we have. Because erroneous preconceptions are usually feelings and/or beliefs that we seldom examine, we run far greater risks with them than with what we know we don’t know.

Of course, one of the greatest erroneous preconceptions is that we know something that we really don’t, as recently demonstrated by Donald Trump’s statements about how easy it would be to fix healthcare and taxes, neither of which is amenable to a simple “fix,” at least not without totally screwing tens of millions of people.

Erroneous preconceptions by U.S. military leaders about how the Vietnamese would react to U.S. forces were one of the major factors in why the U.S. became mired in one of its longest drawn-out conflicts, yet military figures seem to have had the same problem in Afghanistan. It appears that this is also a problem with U.S. views on both China and North Korea, because too many U.S. leaders have the preconception that people from other cultures think of things in the same way – or they look down on others and draw simplistic conclusions based on arrogant assumptions.

On a lighter note and in a slight digression, I’ve gotten several reader comments about Assassin’s Price to the effect that those readers were upset that an imager wasn’t the main character, and several said that they couldn’t get into the book because of that. I can understand a certain disappointment, if you’ve been looking forward to a book about imagers, but… every synopsis about the book mentions Charyn, and Charyn is definitely not an imager in the previous two books, and he’s much older than the age when imagers manifest their talents. In addition, the book is still an adventure, and it still has imagers… if not as the main character. These readers had such preconceptions about the book that they couldn’t really read and enjoy what was written.

The older I get, the more I’ve seen how preconceptions permeate all societies, but it seems to me that in the U.S., erroneous preconceptions are on the increase, most likely because the internet and social media allow rapid and easy confirmation bias. What tends to get overlooked is that human beings are social animals and most people have a strong, and sometimes overpowering, desire to belong. Social media allows people, to a greater extent than ever before, to find others with the same mindset and preconceptions. This allows and often even requires them to reinforce those beliefs, rather than to question them, because in most groups, questioners are marginalized, if not ostracized… and that practice goes much farther back than the time of Socrates.

Trump’s hard-core supporters truly seem to believe that he can bring back manufacturing jobs and that the U.S. would be better off if all eleven million illegal immigrants were gone. Neither belief holds up to the facts. Far-left environmentalists believe that the world can be totally and effectively powered by renewable energy. Not in the foreseeable future if we want to remain at the current levels of technology and prosperity. Pretty much every group holds some erroneous preconceptions, and pretty much every group is good at pointing out every other group’s errors, while refusing to examine their own.

And, at present, we’re all using communications technology to avoid self-examination and to blame someone else, rather than using it to figure out how to bridge the gaps and recognize the real problems, because you can’t fix a problem you refuse to acknowledge, nor can you fix a problem that only exists in your preconceptions. Nor, it appears, at least for some people, can they even get into a book in a series that they like because the main character doesn’t fit their preconceptions.

Research

Over the past several years, I’ve heard a number of variations on the theme that the younger generation doesn’t need to learn facts, that they just need to learn methods. I have to disagree – vehemently!

The younger generations need to learn, if anything, MORE facts, and those facts in their proper context, than any previous generation. Those who disagree often ask why this is necessary when computers and cloud databases have far more “storage” than the obviously limited human brain.

In fact, the very size of computer databases is what makes the need for humans to learn facts all the greater. That’s because of a simple point that all too often tends to get overlooked… or disregarded. To ask an intelligent question and to get an answer that is meaningful and useful, you have to know enough facts to frame the question. You also have to have an idea of what terms mean and the conditions under which they’re applicable.

While the computer is a great help for “simple” research, the computerization of research sources has often made finding more detailed information more difficult, particularly since algorithms often rank search results by popularity, which can make answering out-of-the-way queries difficult, if not impossible, if the searcher doesn’t know the precise terms and keywords necessary.

Already, there are too many young people who don’t know enough arithmetic to determine whether the numbers generated by a point-of-sale terminal or shown on a computer screen are even in the right ballpark. And from what I’ve seen, grammar checkers are often inaccurate, creating grammatical errors more often than they correct them.

Then there’s also the problem of trying to use computers when they shouldn’t be used. Trying to get directions from Siri while actively driving qualifies as distracted driving. It’s fine if a passenger is arguing with Siri, but anything but that if the driver is.

Then there’s the problem that surfaced in the last election. When people don’t have a long-established in-depth personal store of knowledge and facts, they’re at the mercy of the latest “information” that pops up on the internet and of whatever appeals to their existing prejudices and preconceptions. And that doesn’t serve them — or the rest of us — well at all.

Literary Pitches… and Timing

I’m committed to do a story for The Razor’s Edge, an anthology from the small press Zombies Need Brains. The theme of the anthology is just how little difference there is between the freedom fighter and the insurgent, and the question of when fighting for a cause slips from right to wrong… or whether that’s just a matter of perspective.

As part of the PR for the anthology, the editors asked the contributing “anchor” writers if they’d be willing to write a blog post on one or all of the topics of creating an elevator pitch, a query, or a plot synopsis for one of their projects.

This posed a problem for me. Strange as it may sound in this day and age, I’ve never done any one of those things in order to sell a book or a story. I will admit that I’ve often managed to develop a plot summary or an “elevator pitch” for at least some of my books – after they’ve been bought… and I’ve hated doing either, and still do.

Why? Well… some of you who read my books might have a glimmering of an idea, but my personal problem is that any “short” treatment of a book – whether it’s an elevator pitch, a query, or a plot synopsis – has to focus on a single element. For what I write and how I write it, this is a bit of a problem, because focusing on a single element tends to create massive distortion of what I write.

Sometimes, questions help, or so I’ve been told. And some of those questions might be: What’s the most important facet of the book? What’s the hero’s journey? To what kind of reader does it appeal? The problem, for me, is that such questions make what I write come off as one-dimensional.

One of my most popular books is Imager, the first book in the Imager Portfolio. It features Rhennthyl – or Rhenn – who at the beginning of the book is a journeyman portrait artist in a culture vaguely similar to that of 1840s France, except with later steam power. Rhenn is a good artist, good enough to be a master, but it’s likely he never will be, for a number of reasons, especially after the master painter for whom he works (under a guild system) dies in an accident that may have been caused by Rhenn’s latent magical imaging abilities.

Now, the book could be pitched as “young artist develops magical abilities and gets trained by a mysterious group to use magical imaging powers.” And if it had been pitched that way, it would likely have flopped as a YA imaging-magic version of Harry Potter, because Rhenn is far more deliberate, not to mention older, than Harry Potter. Also, the Collegium Imago makes Hogwarts look like junior high school.

Imager could also have been pitched as “a magic version of Starship Troopers,” since it does show the growth and education of a young man into a very capable and deadly operative, but Rhennthyl is operating in a far more complex culture and society, and one that’s far more indirect than what Heinlein postulated.

Then too, Imager could be pitched as a bildungsroman of a young man in a world where imaging magic is possible. And that, too, contains a partial truth, but ignores the fact that Rhenn’s basic character is already largely formed and many of his problems arise from that fact. Such a description also ignores the culture.

Because I never could find a short way to describe any book I wrote, not one that wasn’t more deceptive than accurate, I never did pitch anything I wrote that way. I just sent out the entire manuscript to a lot of people, and, of course, it took something like three years before someone finally bought my first book.

And… for some kinds of books, as it was in my case, letting the book sell itself may be better than trying to shoehorn it into a description or pitch that distorts what the book is all about. Now, authors aren’t always the best at describing their own work, but over time, I discovered that even my editors had trouble coming up with short pitches. So… if those who read your work also can’t boil it down into a pitch… then forcing one just might not be a good idea.

Free speech?

The extremes of free speech on both the left and the right, as exemplified by Middlebury and Berkeley and then Charlottesville, bring home a point that no one in the United States seems comfortable discussing.

In a working society there can be NO absolute freedoms. With regard to “free speech” in particular, the issue has come up time and time again, only for its lessons to be forgotten for a generation or two, until some extremist, or extremists, push the limits of “freedom” beyond what a working free society can permit.

Sometimes, society overreacts, as in the Schenck case in 1919, when the Court disallowed the use of the First Amendment as a defense for a socialist peacefully opposing the draft in the First World War, and sometimes, as in 1969, it reacts in a more moderate fashion, when the Supreme Court’s decision in Brandenburg v. Ohio effectively overturned Schenck by holding that inflammatory speech – and even speech advocating violence by members of the Ku Klux Klan – is protected under the First Amendment, unless the speech “is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”

One could certainly argue that the neo-Nazi protesters in Charlottesville, who not only chanted vile and racist slogans, but many of whom also carried weapons, were using speech and those weapons to incite lawless action. By the same token, the armed protesters opposing the Bureau of Land Management at the Bundy ranch weren’t just relying on words but on weapons. But what about the numerous speakers on college campuses who have been shouted down or who have had their appearances canceled because the protesters didn’t like what they might have said?

The First Amendment states: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.”

It seems to me that the neo-Nazis, the Bundys, and all too many of the campus protesters weren’t exactly in accord with the right “peaceably to assemble.”

Back in 1945, the political philosopher Karl Popper published The Open Society and Its Enemies, in which he laid out what he called “the paradox of tolerance.” Popper argued that unlimited tolerance carries the seeds of its own destruction: if a tolerant society isn’t prepared to defend itself against intolerant groups, that society will be destroyed – and tolerance with it.

Extremist groups, by definition and by their very nature, are intolerant. The real question for any society is to what degree their intolerance can be tolerated and at what point it must be limited. The simplest bottom line might well be what the Supreme Court laid down in the Brandenburg decision – that speech directed at inciting lawless or violent action is not permissible, and that includes the violence of protesters which denies those they oppose the right to speak… provided, of course, that the speakers aren’t inciting lawless or violent action.

Do You See What I See?

That phrase comes from a Christmas carol (not Dickens’s A Christmas Carol), but it’s also an appropriate question for both readers and authors.

Over the years I’ve been writing, I’ve been pummeled and praised from all sides about the philosophic underpinnings of what I write, and called, if sometimes indirectly and gently, “every name in the book.” At times, it may have been merited, but most times, it’s because the reader and I don’t see the same thing.

There’s another old saying – where you stand depends on where you sit. And where you sit depends also on where you’ve been, what you’ve done, and what you’ve seen, really seen.

I now live a comfortable life. I admit it, but there were more than a few times when the money ran out before the month, so to speak, and there were a few times when there was no money and no job, and months of pounding the pavement and sending out resumes and following up leads. I’ve been hired, and I’ve also been fired. For all that, I always had a roof over my head, and one that didn’t leak, or at least not much. I’ve been married, and divorced, a single custodial parent with four small children, again married and divorced, and, thankfully, for the past twenty-five years, very happily married.

From my time in politics and in the business and consulting world, I’ve been close enough to the gilded world of the very rich and very powerful, briefly passing through it on assignment, as it were, but I’ve also been in mines, factories, refineries, in worn-down farms deep in Appalachia, and in the near dust-bowl plains in parts of Colorado and Kansas. I was an anti-protest protester during the Vietnam War, and then I was first an enlisted man and then an officer in the Navy… and a search-and-rescue pilot. I’ve seen grinding poverty off the beaten track in South America and Southeast Asia, and I’ve seen incredible showplaces of now-vanished British nobility and the Irish ascendancy.

I started at the bottom in grass-roots politics and ended up as a fairly senior political staffer in Washington, D.C. I’ve run my own businesses, not always as successfully as I should have, from the first one doing fairly demanding physical labor to white-collar regulatory consulting. Along the way, there were stints as a lifeguard, a radio DJ, and several years as a college lecturer.

That’s why what I see may not be what some of my readers see, but all good writers write from what they know and where they’ve been, and if you read closely, you can tell where an author’s been… and often where they haven’t.

The Time-Saving Waste

Recently, a certain university insisted that tenured and tenure-track faculty turn in their annual required faculty activity reports in electronic format in order to save time. This particular university requires extensive documentation as proof of faculty activities and teaching skills, but set out a helpful format, theoretically supported by a template, as well as a tutorial on how to comply with the new requirement.

The result was a disaster, at least in the College of Performing and Visual Arts. The template did not work as designed, so that faculty couldn’t place the documentation in the proper places. Even the two faculty members with past programming experience couldn’t make the system work properly. The supposed tutorial didn’t match the actual system. In addition, much of the documentation required by the administration existed only in paper format, which required hours of scanning, and to top it off, the links set up by the administration arbitrarily rejected some documentation. None of these problems has yet been resolved, but the time spent by individual faculty members is already more than double that required to submit activity reports in hard copy, and more time will doubtless be required.

Yet, this is considered time-saving. To begin with, the system was poorly designed, most likely because the administration didn’t want to spend the resources to do it properly. Second, to save a few administrators time, a far larger number of faculty members were required to spend extra time on paperwork that has little to do with teaching and more to do with justifying their continuation as faculty members, despite the fact that even tenured faculty are reviewed periodically.

Over the years, I’ve seen this in organization after organization, where the upper levels come up with “time-saving” or “efficiency” requirements that are actually counterproductive, because the few minutes they “save” for executives create hours of extra work for everyone else.

This tendency is reinforced by a growing emphasis on data analysis, but data analysis doesn’t work without data. This means that administrators create systems to quantify work, even work, such as teaching, that is inherently unquantifiable, especially in the short term. When such data-gathering doesn’t result in meaningful benchmarks, instead of realizing that some work isn’t realistically quantifiable in hard numbers, they press for more and more detailed data, which not only wastes more time, but inevitably rewards those who can best manipulate the meaningless data, rather than those who are doing the best work.

Output data for a factory producing quantifiable products or components is one thing. Output data for services is almost always counterproductive because the best it can do is show how many bodies moved where and how fast, not how well or effectively the services were provided. Quantification works, to a degree, for a fast-food restaurant, but not for education, medicine, law, and a host of other activities. Yet forms and surveys proliferate as the “business model” invades everywhere, with the result of wasted time and meaningless or misleading “data.”

And yet the pressure for analysis and quantification continues to increase yearly, with administrators and executives failing to realize that their search for data to improve productivity is in so many cases actually reducing that very productivity. Why can’t they grasp when enough is enough?

The Decline of the Non-Imperial Empire?

In her book, Notes on a Foreign Country, Suzy Hansen points out that the United States has created an empire that Americans, for the most part, refuse to believe exists. From the beginning, she writes, “Americans were in active denial of their empire even as they laid its foundations.”

An empire? Surely, you jest?

Except… the United States still maintains nearly 800 military bases in more than 70 countries and territories abroad, while Britain, France, and Russia, in comparison, have about 30 foreign bases combined. More than 300,000 U.S. troops are deployed not only in those 70 countries, but in 80 others as well. In effect, the U.S. dollar is the default currency of the world, and English is either the primary language or the back-up language of world commerce.

So just what is the difference between an undeclared and unacknowledged empire and one that declares its imperial status, as did the British Empire or the Roman Empire?

There are doubtless a number of similarities and some differences, but I’d say that the principal difference is that, in denying its status as an empire, the United States is minimizing, if not denying, its responsibilities to its territories and dependencies. Over the last two and possibly three decades, in pursuit of perceived American “interests,” the United States has effectively destroyed country after country, as opposed to the two decades after World War II, when the primary interest was rebuilding nations, if only in order to create an economically and militarily strong coalition against the USSR.

Exactly how has either the United States or the world benefited from the chaos in Iraq, Afghanistan, Syria, Libya, and Somalia, in all of which we’ve had troops fighting and resolving nothing? We intervened… and then decided we couldn’t afford the cost of putting those countries back together again. We didn’t behave responsibly, and we haven’t been exactly all that responsible for the care and needs of the veterans we sent there.

Have these interventions been good for either the U.S. or the world? The list of fragmented countries across the world is growing, not declining, and now the American president seems to be picking fights with neighbors and allies alike.

In the last election, in a sense, we had a choice that I’d caricature as one between “Big Momma” and “Caligula.” The American electorate chose Caligula as the lesser of two evils. Now, before everyone jumps on that, I’d like to point out that when Caligula became the Roman Emperor, everyone was initially pleased. He was a change from the severe, dour, and often cruel Tiberius. He was outspoken and outgoing, but he had no sense of morals, propriety, or responsibility, and he definitely couldn’t manage money, lavishing it on pleasure palace after pleasure palace, some of which would have made Trump’s Mar-a-Lago seem small and even tawdry.

Now, we have a government that’s abandoning its responsibilities to its citizens, not only in terms of health care, but in terms of basic fiscal responsibility, just as the Roman Senate abandoned its responsibilities. After that, the Praetorian Guard assassinated Caligula, the last vestiges of a government responsible to the people dissipated, and the Empire began its long, slow decline, although that wasn’t immediately visible, since conquered territory continued to expand for a time, just as the number of countries in which our soldiers serve continues to expand.

Just how much of that history might we see repeated… or at least rhyme, as Mark Twain put it?

The Razor’s Edge

As mentioned elsewhere, I’ve agreed to write a story for a military science fiction and fantasy anthology entitled The Razor’s Edge, one of three anthologies to be published by the small press Zombies Need Brains and funded by a Kickstarter.

The Razor’s Edge explores the thin line between being a rebel and an insurgent in military SF&F, while Guilds & Glaives features slashing blades and dark magic. The third anthology – Second Round — allows readers to travel through time with Gilgamesh in a time-traveling bar.

If you’d like to help bring these themes to life, you can back the Kickstarter at www.tinyurl.com/insurgenturbar and find out more about the small press at http://www.zombiesneedbrains.com!

Does It Make Sense?

“Does it make sense?” That sounds like a simple enough question that can be applied to a business proposition, an invention, a novel or story, or even a proposed law. Then… why do we see so many impractical business ideas, inventions that never pan out, stories that are ludicrous, and laws that seem to us to make the situation worse?

At the same time, I’ve seen ideas that I’ve thought were preposterous result in millions of dollars in sales of one sort or another. Back when I was a teenager, there was the hula-hoop craze. Why would anyone want to gyrate around so that they could keep a plastic ring some three feet in diameter continuously whirling around their mid-section?

And then there were – and still are – lava lamps, in which a glob of gloop in a sealed and lighted glass container gets heated, expands and rises, then cools and falls. There must have been thousands of different combinations of colored liquid and differently colored gloop, all so people could either sit and watch gloop or not watch gloop but have it for background visuals. Exactly why has never made sense to me.

I even question the popularity of golf. Why would any sane individual really want to whack a round hard ball across 7,000-odd yards of grass, sand, and water… merely to see who wins by whacking it the fewest times into eighteen holes in the ground? Now… being somewhat commercial, I can see why professional golfers do it. There’s a LOT of money there when you’re whacking for money, but three to four hours of solid masochism for pleasure?

I also can’t say I understand the spectator side of NASCAR racing. Sitting in the sun or rain or whatever watching cars go around in a circle for hours on end, while drinking too much beer [but then, maybe that’s part of the “enjoyment”] makes little sense to me.

But that’s not really the question. The better question is not whether something makes sense, but to whom it makes sense, or to whom it appeals.

A law requiring sloped curb cuts makes little sense to a healthy individual, but to someone in a wheelchair a four-inch curb is as much of a barrier as a ten-foot fence is to someone healthy. For many disabled individuals, stairs are not a way to the next floor but a barrier.

Golf may not make sense to me, but it was my father’s exercise [he carried his own bag and walked], relaxation, and escape. I, obviously, love fantasy and science fiction. F&SF never made sense to him.

And those are some of the reasons why “Does it make sense?” can be incredibly misleading.

One Thousand

For what little it’s worth, I’ve now posted over 1,000 entries just in the “Blog Entry” section, the first one being in March of 2007. That doesn’t count the less frequent entries in the other sections of the website. For the most part, that’s meant writing a post of at least 400 words, and often over 1,000 words, twice a week for over ten years. At a minimum, that’s well over half a million words, or roughly the equivalent of 2.8 “average” Modesitt novels.

I don’t have any intention of stopping soon, since we live in “interesting times,” and that means there is always something to speculate about, whether it’s why such diverse fields as hard science, computer technology, history, and the ranks of Fortune 500 CEOs are far more misogynistic [in general] than other fields, or why we still haven’t found a commercial way to fly a supersonic passenger aircraft, or why so many people pit religion against science, as if they don’t both exist in the same world.

Then there’s the ongoing and fascinating question of why Congress has accomplished less each session, even though the intelligence levels of individual members of Congress are generally much higher than those of their predecessors. I also have the suspicion, but no way to prove it, that more often than not, the less intelligent candidate for President has been the winner. Is that just my perception, happenstance, or does the American electorate have a distrust of “elites,” intellectual and otherwise?

And then there’s technology and all the questions it raises. Just last week, the Atlantic ran an article entitled, “Have Smartphones Destroyed a Generation?” I don’t know about “destroyed,” but I’m not so sure it hasn’t at least impaired part of a generation, particularly their attention span, given what I’ve seen on college campuses and elsewhere. We certainly have a generation, as well as some members of older generations, who can’t walk or drive safely because they’re too enamored of their smartphones, and that doesn’t speak much for either their upbringing or their intelligence – but then, maybe it’s just the latest manifestation of teenagers’ [and those who haven’t ever outgrown being teenagers] unthinking belief in personal invulnerability.

As for books, we’re seeing the greatest change in publishing and reading since the introduction of the mass market paperback in the 1950s, and there’s no telling exactly where it’s going, except that, in fantasy and science fiction, that once-vaunted mass market paperback is taking a far bigger hit than in other genres. Is that because F&SF readers are technological opinion-leaders or just because we’ve all run out of shelf space at a time when the price of housing continues to rise?

For those of you who’ve followed the site for its more than ten years, and for those who joined along the way, even if today’s your first read, thank you all!

Reality…

Reality doesn’t care what you believe. Or as Daniel Patrick Moynihan [and quite a few others] said, “You’re entitled to your own opinion, but not your own facts.”

Put another way, just because you believe in something with all your heart and soul doesn’t mean that it’s so. President Trump’s assertion that his inaugural crowd was the largest ever doesn’t make it so. Nor is climate change a hoax perpetrated by the Chinese. It’s not a matter of opinion that the latest iceberg that broke off the Larsen C ice shelf is roughly the size of Delaware, nor is it a matter of opinion that the Arctic ice cover is diminishing radically.

No matter what conservative politicians claim, lowering taxes won’t create higher-paying jobs for the working and middle classes; lower taxes will benefit primarily the upper middle class and the upper class, particularly the top tenth of one percent, simply because they make more money. For example, the average household in the middle 20 percent of earners [the average American taxpayer] pays slightly more than $8,000 in federal taxes, on income of about $56,000. The average household in the top one percent [the rich taxpayer] pays about $430,000 in federal taxes on an income of $1,500,000. A one percent cut in the tax rate means the average family would get back less than $800, while a one percent cut for the rich taxpayer would give back about $15,000. For an ultra-rich taxpayer, with an income of $100,000,000, a one percent tax cut would give back one million dollars.
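The scaling above is easy to verify. Here’s a minimal sketch, assuming the cut is a flat one percentage point applied to total income – a simplification for illustration, not a real tax computation, and the function name is mine:

```python
# Sketch: dollars kept when the tax rate drops by one percentage
# point, applied to the incomes quoted above. Treating the cut as
# a flat percentage of total income is a simplifying assumption,
# not an actual tax calculation.

def point_cut_savings(income: float, points: float = 1.0) -> float:
    """Dollars returned by cutting the tax rate by `points` percentage points."""
    return income * points / 100.0

print(point_cut_savings(56_000))       # middle-income household: 560.0
print(point_cut_savings(1_500_000))    # top-one-percent household: 15000.0
print(point_cut_savings(100_000_000))  # ultra-rich taxpayer: 1000000.0
```

The point of the sketch is simply that the same one-point cut returns dollars in direct proportion to income, so the benefit concentrates at the top.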

No matter what anyone claims, U.S. manufacturing has not declined. In fact, the U.S. now manufactures twice as much as it did in 1984. The political “problem” is that it does so with five million fewer workers than it did in 2000.

The Holocaust did happen; the Germans killed more than eleven million people, including six million Jews and five million others they deemed “undesirable,” the second largest group of which totaled more than a million gypsies. The Armenian genocide at the hands of the Turks also took place from 1914 through 1918, with the deaths of between 1.5 and 1.9 million Armenians, yet the present Turkish government contends that the massacre was not genocide. Both events have been documented extensively.

Various surveys show that Americans believe that immigrants, defined as people not born in the United States, account for between thirty-two and forty percent of the population; federal statistics place the number at slightly above thirteen percent. People also believe Muslim immigrants are sixteen percent of the U.S. population; the actual number is one percent.

We all have a choice. We can look at the facts and then form or change opinions, or we can form opinions and then invent or search for facts of dubious origin to justify them. Which do you do?

Priorities?

This coming week classes will begin at the local university, and with those classes come expenses, tuition, fees, room and board, and, of course, textbooks. Except, unfortunately, more and more students aren’t buying textbooks.

The dean of the university library cited a study that found as many as half the students in college classes, especially classes that required expensive textbooks, never purchased those textbooks – and unsurprisingly those who failed to purchase textbooks had lower grades and a greater chance of failure. But why don’t students purchase textbooks? The usual reason students give is cost. The cost of textbooks for the “average” student runs from $500 to $800 a year, depending on the college and the subject matter, and in some fields the costs can exceed $1,000.

But are those costs unreasonable historically? I still have a number of my college texts, and some of them actually have the prices printed on them. I ran those numbers through an inflation calculator and discovered that, in terms of current dollars, I paid far more for books in 1963 than students today pay on a book-for-book basis, and back then we were required to read far more books than most college students read today.
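That comparison is just an inflation adjustment. A minimal sketch, assuming a price-level multiplier of roughly eightfold between 1963 and today – both the multiplier and the sample price are illustrative assumptions; the exact factor depends on which CPI series you consult:

```python
# Convert a 1963 textbook price into current dollars.
# The multiplier is an assumed round number, roughly CPI-based;
# consult an actual CPI table for the precise factor.
CPI_MULTIPLIER_1963_TO_NOW = 8.0  # assumption for illustration

def to_current_dollars(price_1963: float) -> float:
    """Scale a 1963 dollar amount to today's dollars by the assumed multiplier."""
    return price_1963 * CPI_MULTIPLIER_1963_TO_NOW

print(to_current_dollars(12.50))  # a hypothetical $12.50 text then: 100.0 now
```

Under that assumption, a 1963 textbook priced in the low teens of dollars lands near the per-book cost students face today.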

Today’s student priorities are clearly different, and for whatever reasons, a great number of them aren’t buying textbooks [cellphones and videogames, fast food, but not books]. For this reason, the local university is promoting “open texts,” i.e., textbooks written by professors or others and placed without cost on the university network for students to use. Not surprisingly, students love the idea. It costs them nothing, and they don’t even have to go to a bookstore.

The idea bothers me, more than a little. And no, I’ve never written a textbook, and despite what people claim, those professors I know who have didn’t write them to make money. They wrote them because what they wanted their students to learn wasn’t in the available existing books. The royalties and/or fees they received usually barely reimbursed them for their time and effort in creating the text. So how did textbooks get so expensive? First, they’re not that expensive, given the time and expertise it takes to create a good text – and all of the diagrams, tables, and the like are expensive to print [even in electronic books they take a lot of time and effort]. Second, because fewer and fewer students are buying the textbooks, the unit costs of producing them go up.

Maybe I’m just skeptical by nature, but it seems that with each year the internet expands, the percentage of accurate information declines. With all these professors producing these “open texts,” where exactly is the quality control? Where is the scrutiny that at least produces some attempt at objectivity? When a textbook is printed, it’s there in black and white. It can’t be altered, and anyone who wants to pay the price can obtain it. Just how available are these so-called open texts to outsiders? Against what standards can they be measured? Is there any true protection against plagiarism?

I have yet to see these questions being addressed. The only issue appears to be that because students think textbooks are too expensive, they aren’t buying them, and those that aren’t buying aren’t learning as well. So, the university answer is to give them something to read that doesn’t cost them anything.

Yet I can’t dismiss the textbook problem. It does exist, and part of the problem is also the typical college bookstore, which is under pressure not to lose money. So what does it do? It orders only the number of books that a course sold the previous year or semester. Even when half the students in a class can’t get books and want to pay for them, too many bookstores can’t be bothered, and students get screwed, especially the poor but diligent ones for whom every dollar counts, and who can’t afford to rush to the bookstore immediately.

On more than one occasion, my wife, the music professor, has had to order opera scores personally [and pay for them] and then sell them to students [since it’s rather hard to learn the music and produce an opera if the singers don’t have the music to learn] so that her performers all had the music. And, of course, doing so is totally against university policy. But then, cancelling a scheduled opera because the music isn’t available isn’t good, either, and copying the scores is not only against copyright law, but also runs up the copying budget.

But this is what happens when the “business model” of the bookstore meets the realities of publishing costs and students who are either unwilling or unable to afford textbooks.

Mass Market Paperbacks – The Death Spiral

The other day I got a striking reminder that the distribution of mass market paperback books, at least in the fantasy and science fiction field, is getting close to a death spiral (perhaps I’m exaggerating, but the situation isn’t good for lovers of the mass market paperback).

I was contacted by an independent bookstore that informed me that one of the mass-market paperbacks in the Imager Portfolio was being listed as indefinitely out of print. When I contacted Tor, I learned that the paperback in question wasn’t selling all that well. That struck me as rather odd, because I was under the impression that the Imager books were all selling nicely. Well… I obviously hadn’t looked closely enough at my royalty statements. The book in question has been selling quite nicely. It sold well in hardcover and e-book, and sold well – initially – in mass market, but in the last two years, it’s tanked in mass market, although e-book sales remain strong.

I wanted to know why paperback sales had dropped. So I asked. The reason given by Tor was that mass market paperbacks still sell well in independent bookstores because those stores more frequently carry them as back stock, while Barnes & Noble, the largest brick-and-mortar outlet for physical books, has been cutting back on carrying back-stock paperbacks that aren’t selling extremely quickly.

Without the demand by B&N, the publishers can’t afford to reprint backlist titles nearly so often, since there are so few independent bookstores that have large stocks of fantasy and science fiction, and the publishers can’t afford to keep large inventories because of the federal tax laws under the Thor Power Tool precedent. As explained here: Thor Decision

But… if the titles aren’t on the shelves, that reduces the demand, which means that fewer backlist mass market paperbacks get reprinted, which in turn reduces demand, and readers either order the e-book or move on to another author or series that is available.

So if you can’t find as many mass market paperbacks by your favorite author, all that just might be why.

Groupthink?

Human beings are social. Most of us form groups. The problem is that while some groups are helpful and socially beneficial, others are socially toxic, and when a socially toxic group becomes powerful enough, the greater society always suffers. Sometimes, this is immediately obvious, as demonstrated by the white supremacist demonstration in Charlottesville. Other times, it’s hushed up, as I discovered, months after the fact, when the president of my college alma mater “disinvited” a conservative speaker. While I scarcely agree with the views of the speaker, I don’t believe in disinviting speakers whose views don’t match those of an institution.

At the same time, I also don’t believe in violent demonstrations. No matter what the aggrieved partisans who feel disenfranchised say, violent demonstrations have no place in a democracy, particularly since they strengthen the opposition and weaken the cause of the demonstrators. Demonstrations, yes. Violence, no.

All of this, however, also obscures an understanding of a critical aspect of the problem, and that’s a failure to distinguish between perceived groups and real groups. Skin color and ethnicity seldom, if ever, correspond to actual groups. Just look at Africa today, or Europe in the 1600s, or England in the Elizabethan era. Muslims in Afghanistan are killing other Muslims of the same ethnicity and skin color.

Groups almost always have an identity based on a belief of some sort, whether it’s a religious faith, a belief that members of the group are oppressed or otherwise disenfranchised, a sense of supremacy, or some mixture of beliefs.

Groups also have two basic goals/drives: first, to reinforce the identity of all group members as part of that group and, second, to become more powerful as a way of strengthening the group and its identity. These drives motivate all groups, from gangs and drug cartels to philanthropic organizations and political parties, even religious groups.

One of the ways groups strengthen group identity is by claiming some sort of superiority — moral, spiritual, physical, intellectual, cultural, or some combination thereof, but in the case of toxic groups that “superiority” is based on stigmatizing and minimizing non-group members. The “better” types of groups trade more on some form of superiority based on service, morals, cultural uplift, or another form of cultural elitism, rather than emphasizing the negatives of non-members.

But all groups trade on their group identity in some fashion, ranging from very slightly to the point that, in some groups, nothing matters to the group but the group.

Toxic groups are the problem, not ethnicity, skin color, wealth, poverty, degree of education, or so many other “indicators” that people so easily cite.

Language and Culture

In an article recently republished on Tor.com, the linguist David J. Peterson took dead aim at the underlying premise of Jack Vance’s The Languages of Pao. Vance postulated that language influences cultural behavior and that changing a culture’s language could change the culture. Peterson’s assessment was blunt: “The premise of this book is pure fantasy and has absolutely no grounding in linguistic science.”

In a less direct manner, he also mentions Suzette Haden Elgin’s Native Tongue, noting that the language creation was “extraordinary,” but reiterates the idea that changing culture solely through changing language is “pure science fantasy.”

Oh… really?

Peterson’s certainly not the only authority on linguistics, and his blanket statement is a bit suspect (as are most vast generalizations). While he has an M.A. in linguistics and has created a number of languages, Suzette Haden Elgin had a Ph.D. in linguistics, was a professor of linguistics at San Diego State University for a number of years, and also created at least one complete artificial language. She didn’t seem to think that the use of language to change culture was infeasible or pure science fantasy. And for years, she taught people how to use language more effectively. Peterson seems either totally unaware of this or chooses to ignore it, neither of which is exactly praiseworthy or honest.

Also, from a logical point of view, one can argue that language has no impact on culture or that it has some impact. I don’t see how any rational individual can claim that language doesn’t have an impact on human behavior, and anything that affects human behavior affects culture. It seems to me that the question of impact is only one of degree.

To be fair, Peterson makes the argument that changing a language alone can’t change culture. But that’s a straw man, an all-or-nothing argument. No single factor will by itself change society. Society is influenced by a myriad of factors, and the use of language is definitely one of them. Witness the use of language by demagogues, notably by Adolf Hitler, but also by Donald Trump in the U.S. Presidential campaign of 2016.

I’d be the first to admit that both Vance and Elgin exaggerated the effect of language in their books, but authors often exaggerate to make a point. I’ve certainly been known to do so. What Peterson doesn’t seem to get is the fact that, while language by itself may not change an entire society in a generation, over time language and its patterns do reshape society, and that individuals in every generation use language to do just that, turning nouns into verbs and vice-versa and inventing new terms and usages, not just in reaction, either – and that’s not “pure science fantasy.”

Plot?

Starting with Aristotle, there’s been a great deal of controversy about what “plot” means. Aristotle called plot “the arrangement of incidents,” incorporating a beginning, middle, and an end. My dictionary defines plot as “the scheme of events or situations in a story.” The novelist E. M. Forster distinguished between story and plot, saying that a story was “a narrative of events in their time sequence. A plot is also a narrative of events, the emphasis falling on causality.” Later critics suggested that the purpose of a plot was to show the interplay of events and character, with those events requiring conflict.

Yet I’m finding that there is a certain small subset of readers who equate “plot” with “action,” especially physical action with lots of violence, threats of death, and a high body count. That is, for these readers, if there is not a cascade of continuing action, the story or novel has no plot or point.

I fully understand that some readers read primarily, if not solely, for excitement and physical action, and there are more than enough books that provide such action. Many of them, I would contend, actually are without any vestige of a plot, in the sense that those books contain minimal character development, and no emotional or intellectual conflict, aspects of a novel that most readers and scholars would consider as necessary elements of plot. I certainly do.

A series of high energy actions isn’t necessarily a plot. The big-bang creation of the universe was violent and high energy, but it has no plot. For that matter, the Biblical take on creation is only a series of events, with neither character nor conflict [after the serpent and Cain and Abel, things change].

This also brings up a subsidiary but vital point. The lack of violent action doesn’t necessarily mean the lack of conflict, or for that matter, the lack of tension. Hitchcock’s acclaimed picture Vertigo contains no actual scenes of violence, only one apparent suicide, and an accidental death, yet the tension builds throughout, and it can hardly be called plotless.

In the end action doesn’t equal plot, and a well-plotted and tense story may contain little physical action or violence.

Writing for Hire?

Over the years, fans and even other writers have suggested ideas that might fit in my series, and I’ve always nodded politely and said kind words, but I’ve never taken up any of those ideas. Nor have I ever been approached for or pursued doing “work for hire,” such as Star Wars novels or the like. Then the other day, a long-time reader emailed me and offered an idea, declaring he wasn’t interested in anything, no royalties, no acknowledgments… nothing, and I had to think about the matter more deeply before I could answer him.

It’s not as simple as rejecting, subconsciously or consciously, other people’s ideas. For years, I was a successful consultant, developing, packaging, and presenting cases for clients or for the government. I had absolutely no problem in taking ideas from anywhere and using them. To this day, when I’m dealing with technical presentations or commentary on the website, I still have no problem in taking or expanding on others’ insights, especially those of my wife.

But with novels… it’s different. But why?

That was when I realized something that I’d known all along, but never really verbalized. I don’t tell stories nearly so well when I don’t come up with the ideas – even in my own “universes.” Part of this is because others simply don’t know my universes/worlds as well as I do. Especially in my fantasy series, but also in a series like the “Ghost” books, so much of the world lies in my mind and not on paper or in outlines that ostensibly workable story ideas often won’t work while remaining true to that world or universe.

The other part lies, I believe, in how much of my creative process is subconscious. With all writers, I believe, a good part of the creation is subconscious, but from what I’ve observed, I tend to rely on the intuitive/subconscious feel of what I’m writing more than many writers do. This is neither good nor bad. Different writers have different ways of creating. Some writers can very successfully cold-plot a novel, write it to that plot, and come up with an interesting and readable work. I can’t. Yes, I know the story arc before I start, and I know the characters, the main points and challenges, and the society and culture. But, for me, not only does the story have to hang together logically, it also has to feel right.

This also might explain why I’ve never been interested in writing something like a Star Wars book or a Dune novel… or anything created by someone else, however much I may have enjoyed those stories or those worlds. I simply can’t get into those worlds as deeply as I can into my own, nor, if I’m going to be honest, do I really wish to.

This also leads to another problem, one that my editors have usually been able to catch before it reaches the reader on the page – that I know something so well and so intuitively about a world that I forget to make it clear to others, because in the end, the story has to feel right to them as well. That’s also why I suspect that what I write tends not to appeal as much to those readers who prize action and technology/magic in the extreme over character.

And all that is also why it would most likely be a very bad idea for me to try to write a novel in someone else’s universe.

Attacking the Symptoms

The recent debacle in Congress over the Affordable Care Act is all too representative of an ever greater societal problem – the fact that far too much legislation and too many governmental programs are aimed at dealing with symptoms and avoid addressing the real underlying problems.

The issue isn’t really about healthcare insurance. It’s about the cost of healthcare itself. In 1970, the average American spent $380 on healthcare. In 1980, it was $1,180, but by 2013 [the latest data available], that cost had risen to $9,810. In essence, the cost of healthcare has increased at four times the rate of inflation, at a time when middle class earnings, adjusted for inflation, have remained roughly the same.
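That “four times the rate of inflation” figure can be sanity-checked with the spending numbers quoted above and an assumed overall price-level increase of about sixfold between 1970 and 2013 (the sixfold figure is a round-number assumption, not a quoted statistic):

```python
# Did per-person health spending outgrow inflation roughly fourfold?
# Spending figures are those quoted in the text; the ~6x price-level
# rise over 1970-2013 is an assumed round number.
spend_1970, spend_2013 = 380.0, 9810.0
price_level_multiplier = 6.0  # assumed overall inflation, 1970 -> 2013

nominal_growth = spend_2013 / spend_1970               # growth in raw dollars
real_growth = nominal_growth / price_level_multiplier  # growth after inflation

print(round(nominal_growth, 1), round(real_growth, 1))  # 25.8 4.3
```

Under that assumption, spending rose about 26-fold in nominal dollars while prices rose about sixfold, leaving roughly a fourfold increase in real terms, which is consistent with the claim in the text.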

Then there’s the opioid crisis, which the media splashes everywhere. And yes, it’s a real and painful problem, but the root of the problem lies in the fact that we still don’t have non-addictive painkillers, especially for nerve-rooted pain. Yet the big push is to restrict the use of opiates, when there’s literally no other type of painkiller available.

What about the high cost of education? Since 1980, the real cost of a college education has increased at four and a half times the rate of inflation, and now the average debt of a recent college graduate exceeds $35,000. Students graduating from elite institutions or pursuing graduate degrees can easily end up owing more than $100,000 in student loans. Yet, as I can attest from both statistics and personal experience, that money isn’t going, for the most part, to college professors, not when the average professor’s salary has increased by less than half a percent a year over the last thirty years and when the majority of new teachers are underpaid adjuncts. The problem lies in the fact that there’s been a huge increase in the number of students attending college and that state legislatures have refused to fund that increase and have passed the costs on to the students and their parents. At the moment, and as I’ve noted earlier, there are now twice as many college graduates each year as there are college-education-required jobs for them. Yet all the solutions proposed seem designed to bail out the states and to produce more college graduates for jobs that don’t exist, while neglecting non-college training for well-paying jobs that do exist and have shortages.

Another problem requiring a solution is the current U.S. air traffic control system. There are more and more passengers, and more and more demands for passenger rights, but, so far, no one seems to be seriously looking at the underlying problem of a technologically outdated air traffic control system.

I’ve just listed four areas, but, if I wanted to do the research, I have no doubt I could find many more examples of policy-makers and well-intended activists vigorously trying to address the symptoms of a problem, rather than the root causes.

Why does this happen? Largely, I suspect, because addressing the root causes upsets all too many apple carts, and is often initially far more expensive, even if cost-effective over time, while addressing the symptoms is far less controversial.