Archive for the ‘General’ Category

Showing… or Telling?

A while back, I got an interesting comment from my editor about a section of a manuscript I’d submitted. Her words were, roughly, “Here, it’s better to tell than show.”

I bring this up because one of the maxims pounded into beginning writers is: “Show; don’t tell.” That means the writer should describe the action as it happens, letting the reader “see” it unfold, so to speak. In general, that’s good advice, but not everything needs to be shown. Not every step of a fifteen-mile march, let alone a hundred-mile march, needs to be described. Nor does every part of most love letters need to be read to the reader.

The pertinent wording of a law lends a certain authority if the speaker is an advocate, attorney, or judge… or a trader trying to pull off a shady deal, but what those words are isn’t necessary for a scrivener engaged in copying book after book – unless they bear on the plot specifically or a sentence is used to show how boring the tome truly is.

On the other hand, some excruciating detail in certain situations may be vital. The detailing of woodworking in The Magic of Recluce or of barrel-making in The Wellspring of Chaos is necessary in defining the characters and development of Lerris and Kharl.

And sometimes, there’s no happy medium, as I discovered when Solar Express was published. As a technology-based near-future SF novel, the detail is vital for some readers and drags the story for others, which is why Solar Express is fast-moving for one category of readers and “slloooww” for others. Without the technical detail the story wouldn’t feel real to the first readers, and for those not into such technical intricacies, the details just got in the way. Some readers have been delighted when I’ve gone into the details of food and food preparation…and complained when I didn’t in a later book.

What book was my editor talking about? And what aren’t you ever going to read? I’m not saying. That’s one of the uses of a good editor – to make the book better than it would have been. And I’m not about to show you that it wasn’t as good as it turned out to be.

Law

What’s the point of law? Or law and order?

I’d say that it’s to provide a common set of rules that everyone in a society can understand and accept, ideally as providing a degree of fairness. Others have or might have another concept – law as a hard and fast rule that defines good and evil in terms similar to their theological beliefs – and still others might feel that law is a tool for the elites of a society to control those beneath them. Some lawyers, I know, believe that the law is a tool they use in attempting to obtain justice, meaning a “fair” outcome for their clients, but, of course, what “fair” is always depends on individual viewpoints. From a technical standpoint, in the United States, a law is essentially a statement enacted by a governmental body that allows or prohibits certain acts, or imposes certain limitations on them.

And I’m certain there are other definitions of law, but why do we need laws? And why, once we have laws, do we seemingly need more and more of them?

Human societies need laws because there are always individuals who refuse to accept limitations on their acts, even when those acts harm others.

The answers to the second question are manifold. Every law lays down absolutes in some areas, and every time an absolute is codified into law, it creates situations where the rigid imposition of that law is unfair and unjust, or perceived as such. Someone often wants to remove that unfairness, which requires another law. In addition, every law excludes as well as includes, and people want to “clarify” the law to ensure that something heretofore excluded gets included. Then add to that the fact that certain groups want certain laws for their own benefit.

When people who share the same culture enact laws, they see those laws similarly among themselves and differently than do people who come from a different culture or economic class. That’s one reason why more egalitarian and homogeneous societies tend to have lower crime rates.

In addition, equal penalties or “requirements” under law have differing impacts on people from differing social and/or economic strata.

The entire issue of so-called “voter fraud prevention” laws being pushed by affluent white Republicans in the U.S. provides a good example of this, because those laws are regarded essentially as voter suppression laws by those of minority and lower income levels.

The difference in viewpoint comes from the difference in situation. For me, a photo ID isn’t a problem. It’s a slight hassle at most, a few hours once every five years, when I renew my driver’s license, and because I occasionally travel internationally, I have a passport as a back-up. Because I live in a moderate-sized town, it’s a ten-minute drive to the post office or the Department of Motor Vehicles, and because I was educated to the need for certain papers, I’ve always kept copies of things like birth certificates.

That’s all very easy and convenient – for me. My offspring, however, all live in large metropolitan areas where obtaining or renewing a driver’s license – or a passport – can be a lengthy affair, requiring travel and time. But they’re well-off enough that they can arrange the time and deal with the costs… and they had parents who prepared and educated them to those needs.

A minority single parent working a minimum-wage job in a state requiring a photo ID has a much tougher time of it. First off, most of the offices that can issue an ID are only open during working hours, and most minimum or low-wage earners don’t have much flexibility in working hours and often have to forgo paying work to get through the process. The fees for getting such an ID also take a greater percentage of their income. Then, even before that, they may have to obtain a certified birth certificate – taking more time and money. And they are likely renting, rather than owning a home, which requires more documents to prove where they live.

And the impact of other laws falls harder on the poor. If you don’t have the money to immediately fix a broken tail-light or a faulty muffler, you risk getting a ticket, and the cost of the ticket just adds to the burden. If you can’t drive the car, you may not be able to work. What is a modest and inconvenient repair cost for a middle-class worker can literally be a disaster for a poor one.

What so many Americans fail to realize is that “equal” laws, even assuming that they’re enforced equally, which study after study shows they’re not, fall more heavily on the poorer members of society.

In reality… the “law” isn’t the same for everyone, nor is it seen as the same by everyone…but we’d like to pretend that it is… or that it’s stacked against us – and which belief you’re likely to hold depends on where you come from…and, often, how well off you are.

Formality in F&SF

All civilizations have at least two sets of rules. The two most basic are laws and custom, and the most obvious subset of custom is manners. With the recent revival/renaissance of Jane Austen and various spin-offs, there are a number of writers who focus on manners and social etiquette, generally in such sub-genres as steampunk or Regency-style fantasies.

But all cultures, in all times and places, have unspoken codes of manners, and they’re not restricted to just attire, although at times, cultures have gone so far as to legally define and restrict what people could wear, based on their wealth and social position, through sumptuary laws, which carried significant penalties.

As one of the older practicing and producing writers, I grew up in a household where manners and custom were drilled into me. Of course, they had to be, because I was, to put it politely, socially oblivious. The majority of human beings have innate social senses. Mine were largely absent. That made little difference to my parents. I was drilled in every possible social grace and situation by my mother, while my father made certain I was more than adequate in sports, particularly those of social value, and both emphasized the importance of achievement in education. For the time, place, and setting in which I grew up, this was the norm.

What tends to get overlooked by a number of younger writers is that such an upbringing is not an aberration in human cultures, and for the majority of human history, those who have ruled and shaped society have had an upbringing that emphasized what was required to succeed. Those who were well-off but not of the elite also did their best to instill such education and manners in hopes that their offspring would have the background and manners to rise economically and socially.

At present, in the United States, the iron requirements of formality that prevailed prior to roughly the 1960s have been relaxed, or battered into scattered remnants of a once-uniform code of elite conduct, just as the former elites have been disparaged and often minimized.

This situation is unusual among cultures. More social rigidity is the norm, just as the studies of Thomas Piketty have shown that, historically, high levels of income inequality have also been the norm. Whether less rigid standards of manners and social behavior are the result of higher technology remains to be seen, but writers should consider [more carefully than many do, and no, I’m not naming names] whether the manners and social conduct of their characters match the actual culture that they’re depicting. The shepherd boy who attains power will never fit [and this almost never happens, except in fiction], except through brute power. His children might, if his wife/consort is from the elite and is in charge of their upbringing.

Also, contrary to what some believe, manners don’t reflect weakness, but are a way of displaying and reinforcing power. The decline of formal manners in the United States reflects the decline of old elite structure, and the often enforced casualness of new up-and-comers is meant as a symbol of a new elite, one problem of which is that an apparent lack of manners too easily suggests a lack of control… and a certain level of chaos and uncertainty.

In any case, any culture will have a form of mannered behavior that reinforces whatever elite governs, something that writers should consider.

Diversity… and Diversity… in Fiction?

At present in fantasy and science fiction, and, it seems to me, especially in fantasy, there’s a great push for cultural and ethnic diversity, especially in the last few years, at least in part as a reaction to the history of the genre, where stories and books largely focused on white male characters. That’s not to say that there haven’t been quite a number of notable exceptions that dealt with non-European ethnicities or with female characters, or even hermaphroditic characters, as in the case of Le Guin’s The Left Hand of Darkness. But the criticism that the field has been too “white male oriented” definitely has validity.

I certainly applaud works that effectively create or celebrate different racial or ethnic backgrounds, and even those that tastefully explore sexual diversity, but I’d like to sound a note of reality and caution for authors in dealing with “diversity.”

Some writers explore “diversity” by creating and exploring a culture very different from those traditionally depicted in fiction, and that can be enlightening and entertaining, but that’s very different from presenting a civilization/society which contains large numbers of people from diverse ethnicities.

First, all low-tech powerful civilizations [the kind often depicted in fantasy] have been dominated by an ethnic elite. These elites haven’t been all white, either. The Nubian culture conquered and ruled Egypt for a time, and that was definitely not a “white” culture. Most people know about the Mongol culture, and the fact that it ruled China for a time [until the Chinese absorbed the Mongols in China, which has happened more than once]. I could give a substantial list of non-Caucasian empires throughout history, but the point is that these cultures weren’t “diverse.”

They were different in ethnicity from other cultures, but there have been very few successful civilizations that embodied a great diversity in cultures. One could make the point that the United States, for all its failings, is the largest multicultural nation that has ever existed. Now, there have been empires that included different cultures, but those cultures, for the most part, were geographically distinct and united into the empire by force. About the only places where you might see diversity in any significant numbers were major port cities and the capital city.

Second, diversity in a society creates internal conflicts, sometimes on a manageable level, but if history is any indication, usually not. Even the “melting pot” United States struggles with internal ethnic diversity, and the rest of those nations with significant ethnic minority populations aren’t, for the most part, doing even as well as we are with diversity issues.

That doesn’t mean that a writer shouldn’t write about different cultures. I’m all for that – if it’s well-thought-out and consistent. In reality, however, such stable cultures will likely have a dominant ethnicity/culture, unless, of course, the author is going to explore internal ethnic conflict or has some truly “magic” potion that can solve the problems of widespread internal cultural diversity, because past experience shows that any internally diverse culture is likely to be highly fractious. And that’s something that both writers… and almost everybody else… tend to ignore.

The Multiplier Tool or… Not So Fast…

Technology by itself, contrary to popular beliefs, is neither good nor evil. It is a tool. More precisely, it is a multiplier tool. Technology multiplies what human beings can do. It multiplies the output from factories and farms. It also multiplies the killing power of the individual soldier or assassin. Fertilizers multiply grain and crop yields. Runoff of excess fertilizers ends up multiplying ocean algae blooms and making areas of oceans inhospitable to most life.

Modern social media makes social contacts and communication more widespread and possible than ever before. Paradoxically, it also multiplies loneliness and isolation. As recent events show, this communication system multiplies the spread of information, and, paradoxically, through belief-generated misinformation and “false news” multiplies the spread of ignorance. Use of fossil fuels has enabled great industrial and technological development, but it’s also created global warming at a rate never experienced before.

Those are general observations, but in individual cases, certain technological applications are clearly one-sided. Vaccines do far more good than harm. The harm is almost statistically undetectable, despite belief-inspired opposition. Use of biotechnology to create bioweapons benefits no one. The use of technology to turn rifles into what are effectively machine guns does far more harm than good.

The other aspect of technology is a failure of most people to understand that, with each new technology, or technological adaptation or advancement, there is both a learning curve and a sorting-out period before that technology is debugged and predictably reliable – and that period is just beginning – not ending – when the technology or application first hits the marketplace.

So… the late adopters of new technology aren’t technophobes… or old-fashioned. They’re just cautious. But one of the problems today is the feeling by too many that it’s vital to be the first to have and use new technology or new apps. Over the years I’ve seen far more problems caused by rushing to new systems and gadgets than by a deliberate reserve in adopting “the new stuff.” In addition, changing systems every time a manufacturer or systems producer upgrades wastes the time of employees and creates anger and frustration that usually outweigh the benefits of being “early adopters.” Adopted too early or unwisely, technology can also multiply frustration and inefficiency.

Add to that the continual upgrades, and it’s very possible that the “drag effect” caused by extra time spent educating employees, installing upgrades, and debugging systems either blunts productivity gains or actually decreases productivity until reliability exceeds the problems caused by the “rush to the new.”

All of which is why I’m tempted to scoff at those individuals who rush to be the first with the newest and brightest gadget. But I don’t. I just wait a while until they’ve stumbled through all the pitfalls and most of the debugging. There’s definitely a place for “early adopters.” It’s just not a place where I need to be.

Truth…

Recently, a reader made an interesting comment to the effect that what I personally believed to be true doesn’t necessarily turn out to be true for others. This is a statement that initially sounds very reasonable, and studies indicate that it’s something that most people believe.

But… it’s also incredibly deceptive and dangerous. Now, I may have been correct, or I may have been incorrect. I may have had my facts wrong, or perhaps they were right. But the idea that correctness, accuracy, or “the truth” of something varies from individual to individual, depending on individual perception, is a very dangerous proposition.

Part of the reason why that proposition is dangerous is the use of the word “truth.” The word “truth” embodies a connotation of moral purity and certainty on the part of the individual defining that truth. On the other hand, facts are. How they’re perceived by individuals obviously varies, and different individuals give different weight to the same set of facts. Different individuals cite different sets of facts in support or opposition to policies, proposals, books, laws, or in other settings. But the bottom line should always be based on whether the facts are indeed accurate, and whether they apply to the situation at hand, not upon my beliefs about them or someone else’s beliefs about them.

It appears to me that today we’ve gotten into a societal mindset that places what we feel about anything far above determining what is accurate, what is actually so, and what is not. Because we are feeling beings, this tendency has always been a great part of being human, but one of the great drivers of the advancement of human civilization has been the effort to determine verifiable facts and workable scientific theories based on replicable experiments and solid facts, as opposed to belief based on what cannot be determined to be accurate.

Yes, scientists and true empiricists have beliefs, but they try [and sometimes fail] to base those beliefs on hard evidence.

I’m not dismissing the importance of belief. Every human being needs things or ideals in which to believe, but the idea that what is “true” for one individual is not for another puts individual perception above accuracy and tends to support the idea that each set of beliefs is as valid as any other. Time and history and science have shown that “truth” resides far more often in what can be accurately determined and verified than in what cannot.

Despite the fact that in the third century BCE the Greek astronomer Aristarchus of Samos had proposed that the Earth revolved around the sun, more than 1500 years later the Christian Church was burning as heretics those who stated that the Earth was not the center of the universe and that it revolved around the sun. The “moral certainty” of faith trumped the facts, at least until science advanced to the point where the proof was irrefutable.

We’ve now reached a point where individuals realize that they must have at least some facts to support the “truth” of their beliefs… and in the welter of “information” that surrounds us, too many individuals pick inaccurate or inapplicable facts in order to support those beliefs.

The idea that the truth of a belief varies from individual to individual is actually an accurate statement of a dangerous proposition – that “individual truth” is superior to verified evidence and facts. In fact, the converse should be what we all strive for: that verified evidence and facts support our beliefs, rather than having our beliefs force us to find facts to support them.

Yet recent study after recent study shows that the majority of people tailor their facts to support their beliefs, rather than using verifiable facts to change their beliefs. Will we revert to faith over facts, as did the Christian Church of the 1500s? Given what I’ve seen over the last few years, it’s anything but an unreasonable question.

When Elites Fail…

Like it or not, every enduring human civilization has had an elite of some sort. By elite, I mean the relatively small group – compared to the size of the society – that directs and controls the use of that society’s resources and sets that society’s goals and the mechanisms for achieving or attempting to achieve those goals.

Historically, and even at present, different countries have different elites, based on military power, economic power, political power, or religious power, or combinations of various kinds of power, and as time passes the composition of those elites tends to change, usually slowly, except in the cases of violent revolution. In general, the larger the country, the smaller the elite in proportion to the total population. In addition, the work of the French economist Thomas Piketty also suggests that economic inequality is the historical norm for most countries most of the time.

Since elites are a small percentage of the population, the members of the elite need a means of control. In the United States that means has largely been economic from the very beginning. Initially, only white males could vote, and effectively, only white males of propertied status could afford to run for office, where they associated with others of similar status. What tends to get overlooked by many about the Civil War is that, for the southern elite, the war was entirely economic. Slaves were a major form of wealth, and without that slave “property” many of the great southern plantations were essentially bankrupt. Thus, the southern elites were fighting for the preservation of their unchallenged status as elites.

The rapid industrialization of the United States resulted in a change in the economic and social structure, with the numbers of small farmers gradually but inexorably reduced and a concomitant growth in factory workers, who initially were in practice little more than wage slaves, especially child and female workers. The growth in concentration of wealth and power in the “robber barons,” such as Astor, Vanderbilt, Carnegie, Gould, Mellon, and others, without a corresponding increase in the worth and income of the workers, was one of the factors behind the candidacy of William Jennings Bryan for the presidency in 1896, as exemplified by his declaration to the Democratic National Convention that “The man who is employed for wages is as much a businessman as his employer…” From there Bryan went on to suggest that the Republican candidate [McKinley] was basically the tool of the monied interests, concluding with the famous line, “You shall not crucify mankind upon a cross of gold.” But Bryan lost the election by 600,000 votes after industrialist Mark Hanna raised huge contributions from industry.

With McKinley’s assassination in 1901, Theodore Roosevelt became president, and over an eight-year period pushed through a host of reform measures that improved public health and working conditions and restricted, and sometimes eliminated, monopoly powers, and his successor, William Howard Taft, continued those efforts. In 1907, when a financial panic threatened to bring down the entire U.S. financial system, Roosevelt and his Treasury Secretary worked with financier J.P. Morgan to stave off the crisis. These efforts, and an improved economy, defused much of the working and lower-middle-class anger.

Roosevelt, however, wasn’t so much a supporter of the working class as what might be called a member of a “responsible elite,” a man who felt that business and power had gone too far.

In contrast is what happened in Russia. People tend to forget that in the early 1900s Russia was the fifth most powerful economy in the world, but unlike Roosevelt and Taft, Czar Nicholas II and the Russian aristocracy continued to bleed the small middle class, the workers, and the serfs, resulting in continued revolts and unrest. Nicholas agreed to the creation of a parliament [the Duma] and then did his best to eliminate or minimize what few powers it had. And, in the end, the old elite lost everything to the new elites, whose power was based on sheer force, rather than a mixture of money and force.

There are more than a few other examples, but what they tend to show is that all societies have elites, and that those elites control society until they become incompetent… and another elite takes power.

From what I’ve observed, it appears that an increasing percentage of the American people is anything but pleased with all too many members of the current American elite, especially with business executives, the media, and politicians, and that most of those visible elites seem almost dismissive of or oblivious to that displeasure… and, more important, unwilling to deal with the root causes of that displeasure, except with words and, so far, empty promises.

Supporting the Short Stories…

Most of my readers, I suspect, associate my name with books that are, shall we say, substantial in length and scope. Some may know that I occasionally have written shorter works, and a few may recall that a long, long time ago, for the first ten years of my writing career, I only wrote short fiction.

At present, I’ve written and had published forty-five short works of fiction, mostly short stories, but including two novellas, and that total doesn’t include the novella I later expanded into a novel. By comparison, I just turned in the manuscript for my seventy-fourth novel [Endgames, the sequel to Assassin’s Price].

Back in 1972, when I’d just sold my very first story to ANALOG, I had no idea of ever writing a novel, and I might never have written one if I hadn’t essentially been forced to by Ben Bova, the then-editor of ANALOG, who rejected another story of mine (one of many that were rejected) with the note that he wouldn’t consider another story of mine until I wrote a novel, because he felt I was primarily a novelist, rather than a short story writer. That was an incredibly perceptive observation because he’d never seen any work of mine in excess of a few thousand words.

I took his advice, and as the cliché goes, the rest was history… and lots of novels. But I never lost the love of short fiction, and occasionally wrote a story here and there, usually, but not always, by request for anthologies. But stories, even brilliant outstanding stories, cannot sustain a writer in this day and age, as they could in the 1920s and even into the 1940s. I did a rough calculation, and all of my earnings from short fiction, and that includes the two book collections, total roughly half of what I now receive for a single fantasy novel.

This is an example of why, so far as I’ve been able to determine, there are essentially no full-time F&SF short-story writers making a living wage. So I was very fortunate to have gotten Ben’s advice and just smart enough to have taken it… and equally fortunate that readers have liked the books I’ve written.

All of which brings me to another point. As I mentioned earlier, I’ve agreed to write a story for a Kickstarter anthology from the small press Zombies Need Brains, entitled The Razor’s Edge. The neat thing about the anthology is that half the stories are written by name authors and the other half are selected from open submissions. I’ve finished the first draft of the story, and that’s good, because it takes me much longer to write short fiction, but it won’t see print unless the Kickstarter is funded, which it isn’t at present. If it isn’t funded, you also won’t see new stories from other favorite authors, and, even more important, new authors won’t get their chance.

Yes, I’ll be paid, but it’s not much, and I wrote the story for the story, not for the very modest sum – and that’s definitely true for pretty much all the name authors. So… if The Razor’s Edge is something you might like, or if you want to give some up-and-coming authors a chance, pledge something at the Kickstarter [ The Razor’s Edge Kickstarter ]. I’ll appreciate your efforts, and so will a few new authors, some of whom might graduate to writing big thick books that you might also like in the future.

Preconceptions

There’s the old saying that goes “it isn’t what you don’t know that gets you in trouble, but what you know that isn’t so.” All too often what we know that isn’t so lies in the preconceptions that we have. Because erroneous preconceptions are usually feelings and/or beliefs that we seldom examine, we run far greater risks with them than with what we know we don’t know.

Of course, one of the greatest erroneous preconceptions is that we know something that we really don’t, as recently demonstrated by Donald Trump’s statements about how easy it would be to fix healthcare and taxes, neither of which is amenable to a simple “fix,” at least not without totally screwing tens of millions of people.

Erroneous preconceptions by U.S. military leaders about how the Vietnamese would react to U.S. forces were one of the major factors in why the U.S. became mired in one of its longest drawn-out conflicts, yet military figures seem to have the same problem in Afghanistan, and it appears that this is also a problem with U.S. views on both China and North Korea, because too many U.S. leaders have the preconception that people from other cultures think of things in the same way – or they look down on others and draw simplistic conclusions based on arrogant assumptions.

On a lighter note and in a slight digression, I’ve gotten several reader comments about Assassin’s Price to the effect that those readers were upset that an imager wasn’t the main character, and several said that they couldn’t get into the book because of that. I can understand a certain disappointment, if you’ve been looking forward to a book about imagers, but… every synopsis about the book mentions Charyn, and Charyn is definitely not an imager in the previous two books, and he’s much older than the age when imagers manifest their talents. In addition, the book is still an adventure, and it still has imagers… if not as the main character. These readers had such preconceptions about the book that they couldn’t really read and enjoy what was written.

The older I get, the more I’ve seen how preconceptions permeate all societies, but it seems to me that in the U.S., erroneous preconceptions are on the increase, most likely because the internet and social media allow rapid and easy confirmation bias. What tends to get overlooked is that human beings are social animals and most people have a strong, and sometimes overpowering, desire to belong. Social media allows people, to a greater extent than ever before, to find others with the same mindset and preconceptions. This allows and often even requires them to reinforce those beliefs, rather than to question them, because in most groups, questioners are marginalized, if not ostracized… and that practice goes much farther back than the time of Socrates.

Trump’s hard-core supporters truly seem to believe that he can bring back manufacturing jobs and that the U.S. would be better off if all eleven million illegal immigrants were gone. Neither belief holds up to the facts. Far-left environmentalists believe that the world can be totally and effectively powered by renewable energy. Not in the foreseeable future if we want to remain at the current levels of technology and prosperity. Pretty much every group holds some erroneous preconceptions, and pretty much every group is good at pointing out every other group’s errors, while refusing to examine their own.

And, at present, we’re all using communications technology to avoid self-examination and to blame someone else, rather than using it to figure out how to bridge the gaps and recognize the real problems, because you can’t fix a problem you refuse to acknowledge, nor can you fix a problem that only exists in your preconceptions. Nor, it appears, at least for some people, can they even get into a book in a series that they like because the main character doesn’t fit their preconceptions.

Research

Over the past several years, I’ve heard a number of variations on the theme that the younger generation doesn’t need to learn facts, that they just need to learn methods. I have to disagree – vehemently!

If anything, the younger generations need to learn MORE facts, and those facts in their proper context, than any previous generation. Those who disagree often ask why this is necessary when computers and cloud databases have far more “storage” than the obviously limited human brain.

In fact, the very size of computer databases is what makes the need for humans to learn facts all the greater. That’s because of a simple point that tends all too often to get overlooked… or disregarded. To ask an intelligent question and to get an answer that is meaningful and useful, you have to know enough facts to frame the question. You also have to have an idea of what terms mean and the conditions under which they’re applicable.

While the computer is a great help for “simple” research, the computerization of research sources has often made finding more detailed information more difficult, particularly since algorithms often prioritize search results by popularity, which can make finding answers to more out-of-the-way queries difficult, if not impossible, if the searcher doesn’t know the precise terms and key words necessary.

Already, there are too many young people who don’t know enough arithmetic to determine whether the numbers generated by a point-of-sale terminal or shown on a computer screen are even in the right ballpark. And from what I’ve seen, grammar checkers are often inaccurate, creating grammatical errors more often than they correct them.

Then there’s also the problem of trying to use computers when they shouldn’t be used. Trying to get directions from Siri while actively driving qualifies as distracted driving. It’s fine if a passenger is arguing with Siri, but anything but that if the driver is.

Then there’s the problem that surfaced in the last election. When people don’t have a long-established in-depth personal store of knowledge and facts, they’re at the mercy of the latest “information” that pops up on the internet and of whatever appeals to their existing prejudices and preconceptions. And that doesn’t serve them — or the rest of us — well at all.

Literary Pitches… and Timing

I’m committed to do a story for The Razor’s Edge, an anthology from the small press Zombies Need Brains. The theme of the anthology is just how little difference there is between the freedom fighter and the insurgent, and the question of when fighting for a cause slips from right to wrong… or whether that’s just a matter of perspective.

As part of the PR for the anthology, the editors asked the contributing “anchor” writers if they’d be willing to write a blog post on one or all of the topics of creating an elevator pitch, a query, or a plot synopsis for one of their projects.

This posed a problem for me. Strange as it may sound in this day and age, I’ve never done any one of those things in order to sell a book or a story. I will admit that I’ve often managed to develop a plot summary or an “elevator pitch” for at least some of my books – after they’ve been bought… and I’ve hated doing either, and still do.

Why? Well… some of you who read my books might have a glimmering of an idea, but my personal problem is that any “short” treatment of a book – whether it’s an elevator pitch, a query, or a plot synopsis – has to focus on a single element. For what I write and how I write it, this is a bit of a problem, because focusing on a single element tends to create massive distortion of what I write.

Sometimes, questions help, or so I’ve been told. And some of those questions might be: What’s the most important facet of the book? What’s the hero’s journey? To what kind of reader does it appeal? The problem, for me, is that such questions make what I write come off as one-dimensional.

One of my most popular books is Imager, the first book in the Imager Portfolio. It features Rhennthyl – or Rhenn – who at the beginning of the book is a journeyman portrait artist in a culture vaguely similar to 1840s France, except with later steam-power. Rhenn is a good artist, good enough to be a master, but it’s likely he never will be, for a number of reasons, especially after the master painter for whom he works (under a guild system) dies in an accident that may have been caused by Rhenn’s latent magical imaging abilities.

Now, the book could be pitched as “young artist develops magical abilities and gets trained by mysterious group to use magical imaging powers.” And if it had been pitched that way, it would likely have flopped as a YA imaging-magic version of Harry Potter, because Rhenn is far more deliberate, not to mention older, than Harry Potter. Also, the Collegium Imago makes Hogwarts look like junior high school.

Imager could also have been pitched as “a magic version of Starship Troopers,” since it does show the growth and education of a young man into a very capable and deadly operative, but Rhennthyl is operating in a far more complex culture and society, and one that’s far more indirect than what Heinlein postulated.

Then too, Imager could be pitched as a bildungsroman of a young man in a world where imaging magic is possible. And that, too, contains a partial truth, but ignores the fact that Rhenn’s basic character is already largely formed and many of his problems arise from that fact. Such a description also ignores the culture.

Because I never could find a short way to describe any book I wrote, not one that wasn’t more deceptive than accurate, I never did pitch anything I wrote that way. I just sent out the entire manuscript to a lot of people, and, of course, it took something like three years before someone finally bought my first book.

And… for some kinds of books, as it was in my case, letting the book sell itself may be better than trying to shoehorn it into a description or pitch that distorts what the book is all about. Now, authors aren’t always the best at describing their own work, but over time, I discovered that even my editors had trouble coming up with short pitches. So… if those who read your work also can’t boil it down into a pitch… then it just might not be a good idea.

Free speech?

The extremes of free speech on both the left and the right, as exemplified by Middlebury and Berkeley and then Charlottesville, bring home a point that no one in the United States seems comfortable discussing.

In a working society there can be NO absolute freedoms. Particularly with regard to “free speech,” this is an issue that has come up time and time again, its lessons forgotten for a generation or two, until some extremist, or extremists, push the limits of “freedom” beyond what a working free society can permit.

Sometimes, society overreacts, as in the Schenck case in 1919, when the Court disallowed the use of the First Amendment as a defense for a socialist peacefully opposing the draft in the First World War, and sometimes, as in 1969, it reacts in a more moderate fashion, when the Supreme Court’s decision in Brandenburg v. Ohio effectively overturned Schenck by holding that inflammatory speech – and even speech advocating violence by members of the Ku Klux Klan – is protected under the First Amendment, unless the speech “is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”

One could certainly argue that the neo-Nazi protesters in Charlottesville, many of whom not only chanted vile and racist slogans but also carried weapons, were using both speech and those weapons to incite lawless action. By the same token, armed protesters opposing the BLM at the Bundy ranch weren’t just relying on words but on weapons. But what about the numerous speakers on college campuses who have been shouted down or who have had their appearances canceled because the protesters didn’t like what they might have said?

The First Amendment states: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.”

It seems to me that the neo-Nazis, the Bundys, and all too many of the campus protesters weren’t exactly in accord with the right “peaceably to assemble.”

Back in 1945, the political philosopher Karl Popper published The Open Society and Its Enemies, in which he laid out what he called “the paradox of tolerance.” Popper argued that unlimited tolerance carries the seeds of its own destruction, and that if a tolerant society isn’t prepared to defend itself against intolerant groups, that society will be destroyed – and tolerance with it.

Extremist groups, by both definition and their very nature, are intolerant. The real question for any society is to what degree their intolerance can be tolerated and at what point it must be limited. The simplest bottom line might well be what the Supreme Court laid down in the Brandenburg decision – that speech directed at inciting lawless or violent action is not permissible, and that includes the violence of protesters that denies those they oppose the right to speak… provided, of course, that the speakers aren’t inciting lawless or violent action.

Do You See What I See?

That phrase comes from a Christmas carol (not Dickens’s A Christmas Carol), but it’s also an appropriate question for both readers and authors.

Over the years I’ve been writing, I’ve been pummeled and praised from all sides about the philosophic underpinnings of what I write, and called, if sometimes indirectly and gently, “every name in the book.” At times, it may have been merited, but most times, it’s because the reader and I don’t see the same thing.

There’s another old saying – where you stand depends on where you sit. And where you sit depends also on where you’ve been, what you’ve done, and what you’ve seen, really seen.

I now live a comfortable life. I admit it, but there were more than a few times when the money ran out before the month did, so to speak, and there were a few times when there was no money and no job, and months of pounding the pavement and sending out resumes and following up leads. I’ve been hired, and I’ve also been fired. For all that, I always had a roof over my head, and one that didn’t leak, or at least not much. I’ve been married, and divorced, a single custodial parent with four small children, again married and divorced, and, thankfully, for the past twenty-five years, very happily married.

From my time in politics and in the business and consulting world, I’ve also been close enough to the gilded world of the very rich and very powerful, briefly passing through it on assignment, as it were, but I’ve also been in mines, factories, and refineries, on worn-down farms deep in Appalachia, and in the near-dust-bowl plains in parts of Colorado and Kansas. I was an anti-protest protester during the Vietnam War, and then I was first an enlisted man and then an officer in the Navy… and a search and rescue pilot. I’ve seen grinding poverty off the beaten track in South America and Southeast Asia, and I’ve seen incredible showplaces of now-vanished British nobility and the Irish ascendancy.

I started at the bottom in grass-roots politics and ended up as a fairly senior political staffer in Washington, D.C. I’ve run my own businesses, not always as successfully as I should have, from the first one doing fairly physically demanding manual labor to white-collar regulatory consulting. Along the way, there were stints as a life-guard, a radio DJ, and several years as a college lecturer.

That’s why what I see may not be what some of my readers see, but all good writers write from what they know and where they’ve been, and if you read closely, you can tell where an author’s been… and often where they haven’t.

The Time-Saving Waste

Recently, a certain university insisted that tenured and tenure-track faculty turn in their annual required faculty activity reports in electronic format in order to save time. This particular university requires extensive documentation as proof of faculty activities and teaching skills, and it set out a supposedly helpful format, theoretically supported by a template, as well as a tutorial on how to comply with the new requirement.

The result was a disaster, at least in the College of Performing and Visual Arts. The template did not work as designed, so faculty couldn’t place the documentation in the proper places. Even the two faculty members with past programming experience couldn’t make the system work properly. The supposed tutorial didn’t match the actual system. In addition, much of the documentation required by the administration existed only in paper format, which required hours of scanning, and to top it off, the links set up by the administration arbitrarily rejected some documentation. None of these problems has yet been resolved, but the time spent by individual faculty members is already more than double that required to submit activity reports in hard copy, and more time will doubtless be required.

Yet, this is considered time-saving. To begin with, the system was poorly designed, most likely because the administration didn’t want to spend the resources to do it properly. Second, to save a few administrators time, a far larger number of faculty members were required to spend extra time on paperwork that has little to do with teaching and more to do with justifying their continuation as faculty members, despite the fact that even tenured faculty are reviewed periodically.

Over the years, I’ve seen this in organization after organization, where the upper levels come up with “time-saving” or “efficiency” requirements that are actually counterproductive, because the few minutes they “save” for executives create hours of extra work for everyone else.

This tendency is reinforced by a growing emphasis on data-analysis, but data analysis doesn’t work without data. This means that administrators create systems to quantify work, even work, such as teaching, that is inherently unquantifiable, especially in the short term. When such data-gathering doesn’t result in meaningful benchmarks, instead of realizing that some work isn’t realistically quantifiable in hard numbers, they press for more and more detailed data, which not only wastes more time, but inevitably rewards those who can best manipulate the meaningless data, rather than those who are doing the best work.

Output data for a factory producing quantifiable products or components is one thing. Output data for services is almost always counterproductive because the best it can do is show how many bodies moved where and how fast, not how well or effectively the services were provided. Quantification works, to a degree, for a fast-food restaurant, but not for education, medicine, law, and a host of other activities. Yet forms and surveys proliferate as the “business model” invades everywhere, with the result of wasted time and meaningless or misleading “data.”

And yet the pressure for analysis and quantification continues to increase yearly, with administrators and executives failing to realize that their search for data to improve productivity is in so many cases actually reducing that very productivity. Why can’t they grasp when enough is enough?

The Decline of the Non-Imperial Empire?

In her book, Notes on a Foreign Country, Suzy Hansen points out that the United States has created an empire that Americans, for the most part, refuse to believe exists. From the beginning, she writes, “Americans were in active denial of their empire even as they laid its foundations.”

An empire? Surely, you jest?

Except… the United States still maintains nearly 800 military bases in more than 70 countries and territories abroad, while Britain, France, and Russia, in comparison, have about 30 foreign bases combined. More than 300,000 U.S. troops are deployed not only in those 70 countries, but in 80 others as well. In effect, the U.S. dollar is the default currency of the world, and English is either the primary language or the back-up language of world commerce.

So just what is the difference between an undeclared and unacknowledged empire and one that declares its imperial status, as did the British Empire or the Roman Empire?

There are doubtless a number of similarities and some differences, but I’d say that the principal difference is that, in denying its status as an empire, the United States is minimizing, if not denying, its responsibilities to its territories and dependencies. Over the last two and possibly three decades, in pursuit of perceived American “interests,” the United States has effectively destroyed country after country, as opposed to the two decades after World War II, when the primary interest was rebuilding nations, if only in order to create an economically and militarily strong coalition against the USSR.

Exactly how has either the United States or the world benefited from the chaos in Iraq, Afghanistan, Syria, Libya, and Somalia, in all of which we’ve had troops fighting and resolving nothing? We intervened… and then decided we couldn’t afford the cost of putting those countries back together again. We didn’t behave responsibly, and we haven’t been exactly all that responsible for the care and needs of the veterans we sent there.

Have these interventions been good for either the U.S. or the world? The list of fragmented countries across the world is growing, not declining, and now the American president seems to be picking fights with neighbors and allies alike.

In the last election, in a sense, we had a choice that I’d caricature as one between “Big Momma” and “Caligula.” The American electorate chose Caligula as the lesser of two evils. Now, before everyone jumps on that, I’d like to point out that when Caligula became the Roman Emperor, everyone was initially pleased. He was a change from the severe, dour, and often cruel Tiberius. He was outspoken and outgoing, but he had no sense of morals, propriety, or responsibility, and he definitely couldn’t manage money, lavishing it on pleasure palace after pleasure palace, some of which would have made Trump’s Mar-a-Lago seem small and even tawdry.

Now, we have a government that’s abandoning its responsibilities to its citizens, not only in terms of health care, but in terms of basic fiscal responsibility, just as the Roman Senate abandoned its responsibilities. After that, the Praetorian Guard assassinated Caligula, and the last vestiges of a government responsible to the people dissipated, and the Empire began its long slow decline, although that wasn’t immediately visible, since the territory conquered continued to expand for a time, just as the number of countries in which our soldiers serve continues to expand.

Just how much of that history might we see repeated… or at least rhyme, as Mark Twain put it?

The Razor’s Edge

As mentioned elsewhere, I’ve agreed to write a story for a military science fiction and fantasy anthology entitled The Razor’s Edge, which is one of three anthologies to be published by the small press Zombies Need Brains and funded by a Kickstarter.

The Razor’s Edge explores the thin line between being a rebel and an insurgent in military SF&F, while Guilds & Glaives features slashing blades and dark magic. The third anthology – Second Round – allows readers to travel through time with Gilgamesh in a time-traveling bar.

If you’d like to help bring these themes to life, you can back the Kickstarter at www.tinyurl.com/insurgenturbar and find out more about the small press at http://www.zombiesneedbrains.com!

Does It Make Sense?

“Does it make sense?” That sounds like a simple enough question that can be applied to a business proposition, an invention, a novel or story, or even a proposed law. Then… why do we see so many impractical business ideas, inventions that never pan out, stories that are ludicrous, and laws that seem to us to make the situation worse?

At the same time, I’ve seen ideas that I’ve thought were preposterous result in millions of dollars in sales of one sort or another. Back when I was a teenager, there was the hula-hoop craze. Why would anyone want to gyrate around so that they could keep a plastic ring some three feet in diameter continuously whirling around their mid-section?

And then there were – and still are – lava lamps, in which a glob of gloop in a sealed and lighted glass container gets heated, expands and rises, then cools and falls. There must have been thousands of different combinations of colored liquid and differently colored gloop, all so people could either sit and watch gloop or not watch gloop but have it for background visuals. Exactly why has never made sense to me.

I even question the popularity of golf. Why would any sane individual really want to whack a round hard ball across 7,000-odd yards of grass, sand, and water… merely to see who wins by whacking it the fewest times into eighteen holes in the ground? Now… being somewhat commercial, I can see why professional golfers do it. There’s a LOT of money there when you’re whacking for money, but three to four hours of solid masochism for pleasure?

I also can’t say I understand the spectator side of NASCAR racing. Sitting in the sun or rain or whatever watching cars go around in a circle for hours on end, while drinking too much beer [but then, maybe that’s part of the “enjoyment”] makes little sense to me.

But that’s not really the question. The better question is not whether something makes sense, but to whom it makes sense, or to whom it appeals.

A law requiring sloped curb cuts makes little sense to a healthy individual, but a four-inch curb is as much of a barrier to someone in a wheelchair as a ten-foot fence is to someone healthy. For many disabled individuals, stairs are not a way to the next floor but a barrier.

Golf may not make sense to me, but it was my father’s exercise [he carried his own bag and walked], relaxation, and escape. I, obviously, love fantasy and science fiction. F&SF never made sense to him.

And those are some of the reasons why “Does it make sense?” can be incredibly misleading.

One Thousand

For what little it’s worth, I’ve now posted over 1,000 entries just in the “Blog Entry” section, the first one being in March of 2007. That doesn’t count the less frequent entries in the other sections of the website. For the most part, that’s meant writing a post of at least 400 words, and often over 1,000 words, twice a week for over ten years. At a minimum, that’s well over half a million words, or roughly the equivalent of 2.8 “average” Modesitt novels.
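
For the curious, here’s a quick back-of-the-envelope sketch of that word-count arithmetic. The 200,000-word “average novel” length is an assumption backed out of the 2.8-novel comparison, not a figure from my records.

```python
# Rough check of the totals above: ~1,000 posts of 400-1,000 words each.
# NOVEL_WORDS is an assumed "average" novel length, not an official figure.

POSTS = 1_000
WORDS_LOW, WORDS_HIGH = 400, 1_000      # stated per-post word range

total_low = POSTS * WORDS_LOW           # 400,000 words
total_high = POSTS * WORDS_HIGH         # 1,000,000 words

ESTIMATE = 560_000                      # "well over half a million" words
NOVEL_WORDS = 200_000                   # assumed average novel length

print(f"Possible range: {total_low:,} to {total_high:,} words")
print(f"At ~{ESTIMATE:,} words, that's about {ESTIMATE / NOVEL_WORDS:.1f} novels")
```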

I don’t have any intention of stopping soon, since we live in “interesting times,” and that means there is always something to speculate about, whether it’s why such diverse fields as hard science, computer technology, history, and the ranks of Fortune 500 CEOs are far more misogynistic [in general] than other fields, or why we still haven’t found a commercial way to fly a supersonic passenger aircraft, or why so many people pit religion against science, as if they don’t both exist in the same world.

Then there’s the ongoing and fascinating question of why Congress has accomplished less each session, even though the intelligence levels of individual members of Congress are largely much higher than were those of their predecessors. I also have the suspicion, but no way to prove it, that more often than not, the less intelligent candidate for President has been the winner. Is that just my perception, happenstance, or does the American electorate have a distrust of “elites,” intellectual and otherwise?

And then there’s technology and all the questions it raises. Just last week, The Atlantic ran an article entitled “Have Smartphones Destroyed a Generation?” I don’t know about “destroyed,” but I’m not so sure they haven’t at least impaired part of a generation, particularly its attention span, given what I’ve seen on college campuses and elsewhere. We certainly have a generation, along with some members of older generations, who can’t walk or drive safely because they’re too enamored of their smartphones, and that doesn’t speak well of either their upbringing or their intelligence – but then, maybe it’s just the latest manifestation of teenagers’ [and those who haven’t ever outgrown being teenagers] unthinking belief in personal invulnerability.

As for books, we’re seeing the greatest change in publishing and reading since the introduction of the mass-market paperback in the 1950s, and there’s no telling exactly where it’s going, except that, in fantasy and science fiction, that once-vaunted mass-market paperback is taking a far bigger hit than it is in other genres. Is that because F&SF readers are technological opinion-leaders, or just because we’ve all run out of shelf space at a time when the price of housing continues to rise?

For those of you who’ve followed the site for its more than ten years, and for those who joined along the way, even if today’s your first read, thank you all!

Reality…

Reality doesn’t care what you believe. Or as Daniel Patrick Moynihan [and quite a few others] said, “You’re entitled to your own opinion, but not your own facts.”

Put another way, just because you believe in something with all your heart and soul doesn’t mean that it’s so. President Trump’s assertion that his inaugural crowd was the largest ever doesn’t make it so. Nor is climate change a hoax perpetrated by the Chinese. It’s not a matter of opinion that the latest iceberg to break off the Larsen C ice shelf is roughly the size of Delaware, nor is it a matter of opinion that the Arctic ice cover is diminishing radically.

No matter what conservative politicians claim, lowering taxes won’t create more higher-paying jobs for the working and middle classes; lower taxes will benefit primarily the upper middle class and the upper class, particularly the top tenth of one percent, simply because they make more money. For example, the average household in the middle 20 percent of earners [the average American taxpayer] pays slightly more than $8,000 in federal taxes on income of about $56,000. The average household in the top one percent [the rich taxpayer] pays about $430,000 in federal taxes on an income of $1,500,000. A one-percentage-point cut in the tax rate means the average family would get back about $560, while the rich taxpayer would get back about $15,000. For an ultra-rich taxpayer with an income of $100,000,000, the same cut would give back one million dollars.
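To make the arithmetic explicit, here’s a minimal sketch in Python, assuming a “one percent cut” means the effective tax rate drops by one percentage point, so the refund is simply one percent of income; the income figures are the illustrative ones from the paragraph above, not official statistics.

```python
# A minimal sketch of the tax-cut arithmetic above. Assumes a "one
# percent cut" lowers the effective rate by one percentage point,
# so the refund is 1% of income. Incomes are the paragraph's
# illustrative figures, not official statistics.

RATE_CUT = 0.01  # one percentage point

incomes = {
    "average household (middle 20 percent)": 56_000,
    "top one percent household": 1_500_000,
    "ultra-rich taxpayer": 100_000_000,
}

for label, income in incomes.items():
    refund = income * RATE_CUT
    print(f"{label}: ${refund:,.0f} back")
```

Run as written, this prints $560, $15,000, and $1,000,000, which is why a flat rate cut is worth vastly more to those at the top.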

No matter what anyone claims, U.S. manufacturing has not declined. In fact, the U.S. now manufactures twice as much as it did in 1984. The political “problem” is that it does so with five million fewer workers than it did in 2000.

The Holocaust did happen; the Germans killed more than eleven million people, including six million Jews and five million others they deemed “undesirable,” the second largest group of which totaled more than a million Gypsies [Roma]. The Armenian genocide at the hands of the Turks also took place, from 1914 through 1918, with the deaths of between 1.5 and 1.9 million Armenians, yet the present Turkish government contends that the massacre was not genocide. Both events have been documented extensively.

Various surveys show that Americans believe that immigrants, defined as people not born in the United States, account for between thirty-two and forty percent of the population; federal statistics place the number at slightly above thirteen percent. People also believe Muslim immigrants make up sixteen percent of the U.S. population; the actual number is about one percent.

We all have a choice. We can look at the facts and then form or change opinions, or we can form opinions and then invent or search for facts of dubious origin to justify them. Which do you do?

Priorities?

This coming week, classes will begin at the local university, and with those classes come expenses: tuition, fees, room and board, and, of course, textbooks. Except, unfortunately, more and more students aren’t buying textbooks.

The dean of the university library cited a study that found that as many as half the students in college classes, especially classes requiring expensive textbooks, never purchased those textbooks – and, unsurprisingly, those who didn’t had lower grades and a greater chance of failing. But why don’t students purchase textbooks? The usual reason they give is cost. Textbooks for the “average” student run from $500 to $800 a year, depending on the college and the subject matter, and in some fields the costs can exceed $1,000.

But are those costs unreasonable historically? I still have a number of my college texts, and some of them actually have the prices printed on them. I ran those numbers through an inflation calculator and discovered that, in terms of current dollars, I paid far more for books in 1963, on a book-for-book basis, than students pay today, and back then we were required to read far more books than most college students read now.
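For anyone who wants to replicate that comparison, here’s a minimal sketch of the inflation adjustment, using the standard CPI-ratio method; the index values are approximate annual averages, and the sample price is hypothetical, not one of my actual 1963 textbook prices.

```python
# A minimal sketch of a CPI-based inflation adjustment. The index
# values are approximate U.S. CPI-U annual averages; the $12.50
# textbook price is hypothetical, purely for illustration.

CPI_1963 = 30.6   # approximate CPI-U annual average, 1963
CPI_2017 = 245.1  # approximate CPI-U annual average, 2017

def in_current_dollars(price_1963: float) -> float:
    """Convert a 1963 price into 2017 dollars via the CPI ratio."""
    return price_1963 * (CPI_2017 / CPI_1963)

print(f"$12.50 in 1963 is roughly ${in_current_dollars(12.50):,.2f} today")
```

At roughly an eightfold increase in the general price level, even a modestly priced 1963 text lands near $100 in today’s dollars, which puts current textbook prices in some perspective.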

Today’s students’ priorities are clearly different, and, for whatever reasons, a great number of students aren’t buying textbooks [cellphones and video games, fast food, but not books]. For this reason, the local university is promoting “open texts,” i.e., textbooks written by professors or others and placed without cost on the university network for students to use. Not surprisingly, students love the idea. It costs them nothing, and they don’t even have to go to a bookstore.

The idea bothers me more than a little. And no, I’ve never written a textbook, but despite what people claim, those professors I know who have didn’t write them to make money. They wrote them because what they wanted their students to learn wasn’t in the existing books. The royalties and/or fees they received usually barely reimbursed them for the time and effort of creating the text. So how did textbooks get so expensive? First, they’re not that expensive, given the time and expertise it takes to create a good text – and all of the diagrams, tables, and the like are expensive to produce [even in electronic books they take a lot of time and effort]. Second, because fewer and fewer students are buying textbooks, the unit costs of producing them go up.

Maybe I’m just skeptical by nature, but it seems to me that with each year the internet expands, the percentage of accurate information on it declines. With all these professors producing “open texts,” where exactly is the quality control? Where is the scrutiny that at least produces some attempt at objectivity? When a textbook is printed, it’s there in black and white; it can’t be altered, and anyone willing to pay the price can obtain it. Just how available are these so-called open texts to outsiders? Against what standards can they be measured? Is there any real protection against plagiarism?

I have yet to see these questions addressed. The only issue appears to be that, because students think textbooks are too expensive, they aren’t buying them, and those who aren’t buying aren’t learning as well. So the university answer is to give them something to read that doesn’t cost them anything.

Yet I can’t dismiss the textbook problem. It does exist, and part of it is the typical college bookstore, which is under pressure not to lose money. So what does it do? It orders only the number of books that a course sold the previous year or semester. Even when half the students in a class can’t get books and are willing to pay for them, too many bookstores can’t be bothered, and students get screwed, especially the poor but diligent ones for whom every dollar counts and who can’t afford to rush to the bookstore immediately.

On more than one occasion, my wife the music professor has had to order opera scores personally [and pay for them] and then sell them to students, so that all her performers had the music [since it’s rather hard to learn the music and produce an opera if the singers don’t have scores to learn]. And, of course, doing so is totally against university policy. But then, cancelling a scheduled opera because the music isn’t available isn’t good either, and copying the scores is not only against copyright law but also runs up the copying budget.

But this is what happens when the “business model” of the bookstore collides with the realities of publishing costs and with students who are either unwilling to pay for textbooks or unable to afford them.