
Now is the Time…

…for my occasional rant about when it would be proper to start celebrations and gift buying for Christmas, that is, AFTER Thanksgiving.

Yet in most of the United States, by November first, right after Halloween, Christmas decorations had appeared and sales had been announced. SiriusXM satellite radio shut down the Billy Joel channel and replaced it with Christmas music, too much of it of the elevator kind. Some Christmas decorations and sale items were appearing in Walmart in October.

While Christmas has historically been the time to celebrate the birth of Christ, only two of the gospels even mention his birth – Luke and Matthew – and neither gives any hint of what time of year it took place. Some early believers thought it was in April, but the Egyptian Christians decided to celebrate it on January sixth, while others favored the winter solstice. By the fourth century, the Christian Church had settled on December 25th.

Even the Christmas tree originally had little to do with Christmas and more to do with non-religious German customs, and it wasn’t common even in England until Prince Albert made it a custom of the British royal family in the middle of the nineteenth century.

I can understand the uncertainty about the time of Christ’s birth. I can see why, with the exact date unknown, it made political sense some seventeen centuries ago to agree on a date that matched other celebrations in the hopes of gaining converts.

What I don’t accept is the idea that a religious holiday and supposed celebration of faith should become – cancer-like – a massive commercial sales shill that threatens to gobble up [pun intended] the comparatively non-commercial celebration of Thanksgiving.

But then, what else should I expect from a religious holiday that was moved to appeal to the pagans, now that it’s been swallowed by worship of another pagan deity – either Plutus [the ancient Greek god of wealth, from which comes the term plutocrat] or Mammon, take your pick.

Role of University President?

Before I married my wife the opera singer and university professor, my primary interests in the arts were literature, especially F&SF and poetry, and painting. My principal musical interests were instrumental classical music and “non-twangy” country music. The country music has faded to the background as I’ve also come to appreciate and enjoy opera and musical theatre.

I’ve also moved from being immersed in national politics to being a close observer of university and faculty politics… and have come to realize that in all too many instances, Henry Kissinger was right – university politics are every bit as bad as national politics, or they were until the last year or so.

One question that keeps coming to mind for me is: what exactly is the role of a university president? Are university presidents primarily glad-handers and fundraisers? Or are they supposed to set the course and policies of the institution? Or, in the case of state institutions, to lobby the legislature for a share of increasingly hard-to-get state funding?

The current president of the local university is a lawyer, a profession that I’m convinced has created a disproportionate share of the mess in which our federal government finds itself. I have nothing against attorneys in their proper place [and I shouldn’t, given the number of them in my family], but I firmly believe that neither accountants nor lawyers should be in charge of anything. Yes, every CEO needs a good attorney for advice, but attorneys as CEOs or university presidents? Not a good idea. I feel the same way about accountants.

I’m certainly not privy to all this president has done, but I have to say that, from what I’ve seen, his priorities are… shall I say, of dubious value.

Every door in the music department had to be replaced not once, but twice, for legal reasons: professors couldn’t otherwise be observed while teaching, not that there had ever been any complaints. On the other hand, there’s still no funding to replace the sixty-year-old, defective, and potentially dangerous lighting system in the recital hall… and although plans have finally been discussed, the replacement has been postponed for three years running. When asked about the possibility of replacing the sixty-year-old and overcrowded music building, he told the faculty they needed to find a wealthy donor… and that they “knew what they were getting into” when they became music professors.

The university president religiously attends every football game and touts the football team, which has won the conference title in two of the last three years. He hasn’t commented on the fact that singers from the music department have made it to the national finals of the National Association of Teachers of Singing competition for each of the past three years. Nor has he noticed the theatre alumni and alumnae who have appeared on Broadway or in national touring productions, let alone the professors who taught and mentored those graduates. But he has forced out the director of the Utah Shakespeare Festival, one of the two men who built it into a Tony-award-winning regional theatre, and eliminated the modest stipend paid to the Festival’s founder. The only music department concert he now attends is the annual choral-rock concert, and he’s inordinately proud that the university was recently cited as the most “outdoors-oriented” university in the country.

For more than fifty years, the sitting university president has served as a board member of the Cedar City Music Arts Association, the oldest all-volunteer arts organization in Utah. Some presidents were more active than others, but all attended at least some board meetings [nine a year]. This president never attended one, and he recently resigned from the board.

So far, after three years, he hasn’t managed to land major financial support, nor has he been able to persuade the legislature to come up with significant additional funding, even though the legislature has insisted that the university accept more students every year – which is why tuition continues to climb. And despite the increased enrollment, very few additional full-time faculty have been hired, while the number of adjunct faculty has burgeoned.

But the football team is better.

Educational Excellence…and Measuring It

One of the problems with excellence is something that I’ve seldom seen acknowledged, especially by those charged with determining and “measuring” it.

Simply said, excellence is individual, limited, and determined by specific accomplishments or “products.” Mozart and Beethoven wrote specific exceptional musical works. That defined their excellence, and that excellence was independent of their personal behavior, which was, to be charitable, far less than excellent. Einstein’s excellence manifested itself primarily in his theories of special and general relativity and his explanation of the photoelectric effect.

These days, colleges and universities have essentially tied the idea of excellence in education to bureaucratic systems and accountability to rigid standards that miss the mark. Buzzwords like “essential learning outcomes” and “experiential learning” and “detailed rubrics” and “enhanced student retention” all abound. Syllabi have become detailed tomes that need to be written with near-legal precision. All of this and more is presented as both a means to excellence in education and as a way of measuring what constitutes such “excellence.”

And… of course, whether or not anyone will admit it, such methods and systems are failing. They’re failing because no one wants to look at what actually measures excellence in education. Excellence is not measured by how many students stay in school and graduate, nor by inflated grade-point averages, or student evaluations, or the immediate post-graduate earnings of those students. Diplomas, in all too many cases, have become almost meaningless paper credentials. First jobs and early accomplishments pale beside what graduates do over a lifetime, and touting the earnings of former students says more about an interest in money than about actual accomplishment.

In the end, what represents a college’s excellence is the accomplishments in life of its students, especially in later years. The problem with this measure of excellence is that it’s long-term, and the educrats need immediate and flashy benchmarks to placate and motivate legislators, donors, alumni, and the parents and students paying ever-higher tuition and fees.

All too many universities recognize and honor primarily alumni or alumnae who have either attained celebrity status or donated substantial sums of money. Universities that recognize concrete and significant accomplishments of alumni with as much ballyhoo as they accord those who donate enormous sums are rare. Student athletes who become successful professional athletes are touted over former students who become successful professionals in other fields.

The same is true of faculty recognition. Solid, career-long accomplishments of faculty are seldom lauded. Popular awards, awards that can be used for PR purposes, or accomplishments that gain press or increase enrollment are all too often the faculty “accomplishments” that universities tout… and seldom do they represent excellence. Nor, apparently, do most people, even university alumni, care.

They’re more interested in whether the football team has a winning record.

The Unrecognized Costs of a College Education?

For the past four decades, if not longer, Americans have been told in more ways than one that a college education is the way for a young person to get ahead, in fact, just about the only way. In 2009, 70% of all high school graduates entered college, an all-time high. Today, the figure is around 66%… but only a little more than half of those who enter college actually graduate.

The cost of higher education may be one factor for the recent decline, given that, over the past forty years, college costs to students have risen at an average rate of seven percent per year, roughly twice the rate of inflation. Part of the reason is that state colleges and universities have passed on more and more of the costs to students and their parents, and often neither can actually afford them.
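
To make the compounding concrete, here’s a minimal sketch in Python. The starting cost and the inflation rate are illustrative assumptions on my part, not figures for any particular institution.

```python
# A minimal sketch of the compounding behind the seven-percent figure.
# The starting cost and the inflation rate are illustrative assumptions.

def compound(initial: float, annual_rate: float, years: int) -> float:
    """Value of `initial` after `years` years of growth at `annual_rate`."""
    return initial * (1 + annual_rate) ** years

YEARS = 40
start_cost = 3_000.0    # hypothetical annual cost forty years ago
college_growth = 0.07   # ~7% per year, per the figure above
inflation = 0.035       # roughly half that rate

at_college_rate = compound(start_cost, college_growth, YEARS)
at_inflation = compound(start_cost, inflation, YEARS)

print(f"Cost growing at 7%/yr:   ${at_college_rate:,.0f}")      # ~$44,900
print(f"Cost at inflation alone: ${at_inflation:,.0f}")         # ~$11,900
print(f"Real increase: {at_college_rate / at_inflation:.1f}x")  # ~3.8x
```

At those rates, a college education roughly quadruples in price even after inflation is stripped out – which is precisely the squeeze on students and parents described above.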

The result? Forty-four million Americans have student loans. Almost 20% of those loans are in default, and the default rate is continuing to rise.

Why? The simple answer is that the former students can’t afford to repay the loans, suggesting that they don’t make enough money to cover both living expenses and loan repayments. That’s one reason why many recent graduates are still living with their parents.

One reason is that, as I’ve mentioned in previous blogs, there aren’t as many jobs out there requiring a college degree – and thus providing the income necessary to pay off substantial loans – as there are graduates seeking those jobs.

Yet education remains “the answer.”

I won’t try to address all the occupations where this is a problem, just one area that I know something about – fiction writing. When and where I graduated from college, there were few degree programs in creative/fiction writing. I could and did take several semesters of creative writing, but had a double major in economics and political science. Today, Master of Fine Arts (M.F.A.) programs in creative writing have proliferated. The first M.F.A. program was established at the University of Iowa in 1936. By 1994, there were 64. By last year, according to the Association of Writers and Writing Programs, there were 381 M.A. or M.F.A. programs in creative writing, and some 3,000-plus students a year graduate with such a degree.

While the Bureau of Labor Statistics (BLS) currently states there are 145,900 “writers and authors” in the U.S., a quarter of them are part-timers, and 56% of them make less than $12,000 annually, which would place them below the federal poverty level for a single person. This isn’t especially surprising, given that Nielsen BookScan reported that of 1.2 million books tracked, only 25,000 – barely more than two percent – sold more than 5,000 copies. At current prices and royalty rates, selling 5,000 copies will generate between $12,500 and $15,000 – spread over two years at a minimum. Also consider that, according to Publishers Weekly, the average book sells fewer than 500 copies.
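
The royalty arithmetic is easy to verify with a back-of-the-envelope sketch; the per-copy figures here are illustrative assumptions – roughly a ten percent royalty on a $25–$30 cover price – not any publisher’s actual contract terms.

```python
# Back-of-the-envelope royalty arithmetic for a book selling 5,000 copies.
# The cover prices and the 10% royalty rate are illustrative assumptions.

copies_sold = 5_000
cover_prices = (25.00, 30.00)  # hypothetical hardcover price range
royalty_rate = 0.10            # assumed author's share of cover price

low = copies_sold * cover_prices[0] * royalty_rate
high = copies_sold * cover_prices[1] * royalty_rate

print(f"5,000 copies earns roughly ${low:,.0f} to ${high:,.0f}")
# -> roughly $12,500 to $15,000, matching the range quoted above,
#    and, as noted, spread over two years at a minimum.
```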

There are roughly 1,900 members of the Science Fiction and Fantasy Writers of America, and less than 10% of them make more than about $30,000 annually, according to a former officer of the association. Noted F&SF editor and author Eric Flint once estimated that only 32 F&SF authors in the U.S. earned a consistent comfortable income, presumably an income above the median family income of $55,000.

Under these circumstances, for how many M.F.A. graduates is the degree really worth the cost? And this isn’t just a problem for would-be authors, but for more than a few other fields, as well. I just happen to know the numbers for writers better.

According to the BLS, there are roughly 800,000 employed lawyers in the U.S. today, and those who are employed make an average of $118,000… BUT the BLS also notes that every year there are more law school graduates than new positions created, leaving more graduates unemployed. This is also true in higher education, where the job market is so tight in most fields that educators with doctorates from good universities can often find only part-time positions as adjuncts.

Such numbers also raise another question. Given the increasing costs of higher education, doesn’t insisting on a college education risk becoming another form of economic segregation, potentially bankrupting those with heavy loans who don’t win the “jobs lottery,” not to mention offering unrealistic hopes to far too many young people?

Incompetence

Earlier this week, I flew back from the World Fantasy Convention in San Antonio, with what I thought would be a comparatively simple itinerary, at least for me, given that getting to and from Cedar City is never a single flight – except to Salt Lake. My first flight was from San Antonio to Salt Lake City on a fairly comfortable aircraft, an Airbus A320.

Boarding was without incident. Then, a few minutes before scheduled pushback from the gate, the pilot announced that there was a fuel discrepancy that needed to be resolved. It took more than an hour and ten minutes to resolve the “discrepancy” and handle the paperwork.

The pilot had announced that the aircraft had the proper amount of fuel, but that the discrepancy still had to be addressed. So, believing that we had been sitting around for more than an hour just to unravel a bureaucratic paperwork snafu, I inquired into the nature of the discrepancy. One crew member finally told me that the problem wasn’t the amount of fuel, but where it was located. Apparently, there was a 20,000-pound imbalance of some sort. How this occurred, or whether the crew member had it precisely right, I don’t know, but I do know that an Airbus A320 has two tanks in each wing and a center fuselage tank. To me, a 20,000-pound fuel imbalance sounds serious [especially given that the maximum fuel load is roughly 42,000 pounds], and FAA regulations prohibit aircraft from taking off with significant fuel imbalances – not that I knew that at the time.
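
For the curious, here’s a minimal sketch of the kind of lateral-balance check at issue. The tank layout follows the A320 description above, but the dispatch limit and all the fuel quantities are illustrative assumptions on my part, not Airbus or FAA figures.

```python
# A toy lateral-balance check. Tank layout per the A320 description above;
# the imbalance limit and all quantities are illustrative assumptions only.

from typing import NamedTuple

class FuelState(NamedTuple):
    left_inner: float    # all quantities in pounds
    left_outer: float
    right_inner: float
    right_outer: float
    center: float        # center-tank fuel doesn't affect lateral balance

MAX_LATERAL_IMBALANCE_LB = 1_000.0  # hypothetical dispatch limit

def lateral_imbalance(fuel: FuelState) -> float:
    """Absolute difference between total left-wing and right-wing fuel."""
    left = fuel.left_inner + fuel.left_outer
    right = fuel.right_inner + fuel.right_outer
    return abs(left - right)

def ok_to_dispatch(fuel: FuelState) -> bool:
    return lateral_imbalance(fuel) <= MAX_LATERAL_IMBALANCE_LB

# The situation described above: the *total* fuel load is correct,
# but its distribution between the wings is badly off.
misfueled = FuelState(left_inner=19_000, left_outer=2_000,
                      right_inner=0, right_outer=1_000, center=8_000)

print(f"Imbalance: {lateral_imbalance(misfueled):,.0f} lb")  # 20,000 lb
print(f"OK to dispatch? {ok_to_dispatch(misfueled)}")        # False
```

Whatever the real limits are, the point stands: the total can be right while the distribution is dangerously wrong, which is apparently what the ground crew got wrong.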

As a result, once we arrived in Salt Lake, despite my sprinting between gates, I missed my connecting flight to St. George by ten minutes… as did at least three others, who were either smart enough or pessimistic enough not to run. That meant a five-hour wait for our connection. Several others couldn’t leave Salt Lake City until the next day, while a few “fortunate” souls managed to sprint and make their connections. I finally got home close to one in the morning.

While I’m very thankful that the pilot caught the error/problem, the incompetence of the refueling crew cost everyone time and money, and had the problem not been spotted, it’s possible that matters could have been far worse.

I may not like weather delays for aircraft, or Air Traffic Control delays, or even some maintenance and repair delays, but delays created by incompetence are another thing entirely. Now, it could be that I’m getting more curmudgeonly as I get older [although some of my offspring might claim I’ve always been that way], but it appears to me that I’m seeing a great deal more of this kind of sloppiness. My wife sees it in students; an Army lieutenant colonel who’s a battalion commander tells me that new soldiers need much more training and “reminders” about the importance of details, and has the statistics to back up his statement; and our son, who runs a very high-end retail outlet, has had to fire more people in the last two years than in the previous decade for exactly the same reasons.

Yet I see statistics insisting that the young people of today are more intelligent than ever. In my view, intelligent people don’t misfuel aircraft or require continual occupational reminders and babysitting.

And then I got a survey from Delta asking how they could have better handled the situation. My answer won’t be considered, I’m sure. I suggested that passengers who are delayed and inconvenienced by incompetence should be financially compensated, and that such compensation should be funded by deductions from the paychecks of senior airline executives.

The Opioid Mess

The number of deaths from opioid overdoses and misuse continues to climb. All sorts of legislative and regulatory proposals have been floated, almost entirely, from what I can tell, dealing with controlling or restricting the prescription and distribution of opioids. Most recently, the President has signed an executive order purportedly addressing the opioid crisis.

Almost none of these measures will work, just as the measures proposed to deal with illegal drugs have failed miserably. And these “new” approaches will fail for a very similar reason: They don’t address the real problems leading to opioid deaths.

According to the National Institutes of Health, over 100 million Americans suffer from chronic pain, and drug-overdose fatalities – the majority now involving opioids – have doubled over the past ten years to more than 60,000 last year. While the NIH has recognized that pain and overdose deaths are related, and that medical pain treatment methods need to be improved, the underlying problem is incredibly simple… and presently not solvable for the majority of those suffering long-term severe pain.

Opioids are the only legal way to relieve pain for most of those individuals suffering long-term severe pain. Continuous use of opioids requires higher and higher dosages to be effective and also makes users increasingly sensitive to pain. In addition, chronic intense pain makes sleep difficult, and sleep-deprived individuals have even more difficulty handling pain. The medical profession has also been successful in “saving” people, at the associated cost of leaving them with painful, chronic medical conditions.

While researchers are seeking non-addictive pain remedies, so far as I’ve been able to determine, no non-opioid medication useful on a daily, long-term basis for a range of pain conditions has reached the stage of human clinical trials, and until something meeting those criteria is developed, we’ll continue to face an “opioid crisis.” Restricting prescription painkillers will only drive people in pain to illegal drugs in greater numbers than at present, and that’s frightening, because overdose deaths from illegal synthetic painkillers such as fentanyl are now approaching 20,000 per year, an almost six-fold increase since 2002.

The problem isn’t opioids; the problem is pain. And very little of the rhetoric even acknowledges that.

Who’s Going to Pay?

Early this week, the Department of the Interior announced plans to increase the entrance fees at some seventeen of the nation’s largest national parks in 2018, more than doubling the previous fees during the most crowded times. Among those parks are several here in Utah, including Zion, Bryce, Canyonlands, and Arches.

The local reaction was fierce and immediate, not to mention negative, all along the lines that families can’t afford to pay $70 per car [up from $30] or $30 per individual [up from $15] just to get into a national park. And if families can’t or won’t do that, Utah tourism will take a significant hit.

I understand the reaction, even if the proposed fee is far less than a day at Disneyland or Disney World. But I also understand the problems facing the National Park Service, which needs desperately to repair decades-old and damaged infrastructure, an infrastructure that gets damaged more each year by the increasing number of visitors. Currently, the Park system’s maintenance/repair backlog exceeds eleven billion dollars.

What also struck me was that this is the same reaction to all too many government programs, whether it’s SNAP/food stamps, health insurance, Medicaid, Medicare, disaster relief, interstate roads and bridges, tuition and fees at state universities… the list is seemingly endless. The least affluent members of society are hit the hardest by either increasing costs or decreasing services, and because politicians don’t ever want to raise taxes on anyone, either things don’t get fixed, or a few things get some help, and federal spending is financed more and more by increasing deficits.

It’s a national epidemic of “We need this, but we don’t want to pay for it.”

And yet, despite ballooning deficits, the Republican-led Congress and the President are pushing for massive tax cuts, claiming that such tax cuts will fuel growth that will wipe out the deficits. This is political bullshit and voodoo economics erroneously based on the experience of the tax cuts proposed by President Kennedy and signed into law by President Johnson as the Revenue Act of 1964, which reduced the top individual rate from 91% to 70% and the corporate rate from 52% to 48%. In fact, moderate but significant growth was attributed to those cuts.

Today, tax rates are much different, and much lower. The top individual rate is 39.6% [on taxable incomes above $418,000 a year], and the corporate rate is 35%, although the average rate corporations actually pay is closer to 20% [and some large corporations pay no tax at all]. In addition, statistics show that there’s plenty of unused capital that’s not being invested in new businesses or jobs because the demand isn’t there. Since most of the tax cuts will go to the well-off, they won’t increase spending by the bulk of the population, which is what would be required to stimulate demand significantly.
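
A little marginal arithmetic shows why the 1964 experience doesn’t transfer. In this sketch, the cut of today’s top rate from 39.6% to 35% is a hypothetical chosen purely for illustration, not a reference to any specific proposal.

```python
# Compare the after-tax gain on the top marginal dollar from the 1964 cut
# with a hypothetical cut of today's top rate from 39.6% to 35%.

def after_tax_gain_pct(old_rate: float, new_rate: float) -> float:
    """Percentage increase in take-home on the top marginal dollar."""
    before = 1.0 - old_rate
    after = 1.0 - new_rate
    return (after - before) / before * 100

print(f"1964 (91% -> 70%):    {after_tax_gain_pct(0.91, 0.70):.0f}% more take-home")
# -> 233%: each top marginal dollar went from 9 cents kept to 30 cents kept.

print(f"Today (39.6% -> 35%): {after_tax_gain_pct(0.396, 0.35):.0f}% more take-home")
# -> about 8%: from 60.4 cents kept to 65 cents kept.
```

The 1964 cut more than tripled what a top earner kept of a marginal dollar; a cut from today’s much lower rates changes incentives only marginally, which is part of why the growth claims don’t hold up.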

And that means that the problem of “needs” being greater than the funds to pay for them is only going to get worse. While many decry the growth of Social Security and Medicare, exactly how else, at present, are we as a nation going to provide for people too old and too infirm to work? Then, too, regardless of political philosophy, some of those needs – our aging infrastructure, an overcommitted military, disaster relief and rebuilding, and, yes, the national parks and the environment – are vital to the future of the country.

But no one wants to pay enough for them… or to agree on what spending can be cut.

Slow Writing?

I can’t say, with a few notable exceptions, that I’ve found many books to be slow reading. I’ve found books that I thought were less than well-written, books whose action sequences, upon reflection, seemed to have little point, books where I didn’t care about the main character, and books where there was less action, but I didn’t think of them as “slow.” I can only claim to have found one set of books truly slow, the Gormenghast Trilogy, but I know that there are a few readers who don’t find it slow.

One thing I have noticed, though, is that more and more readers are complaining that books are slow. I was astounded to find a huge listing of “slow” fantasy books on Goodreads. Those listed as slow included Brandon Sanderson’s Mistborn series, George Orwell’s 1984, the Harry Potter books, Neil Gaiman’s American Gods, Andy Weir’s The Martian, and even George R.R. Martin’s A Game of Thrones. The list of “slow popular books” ran to over 900 titles.

As I’ve mentioned before, I try to read a number of new writers every year, and it does seem to me that there are fewer and fewer books with slower pacing every year… and yet the number of readers complaining about slow books seems to be growing.

So… is it about the books? Or is it that more and more readers are used to fast-paced [and often shallow] video-based entertainment and expect books to be “faster” in the same way? Or could it be that more and more Americans have less ability to concentrate and possess reading skills inferior to those of readers of previous generations? Given the huge expansion of graphic novels and manga, there certainly seems to be a segment of the “reading” public that prefers fewer words and more pictures. Is this because of declining reading skills, or because the expansion of “visual”/video culture has stunted the ability of some portion of the reading public to create mental images of what they read? Perhaps both?

Certainly, the scores of teachers I know and have asked about this all believe that, in general, students in high school or college have more difficulty focusing, tend to avoid reading whenever possible, and complain that reading assignments that would have been considered light or easy a generation ago are too long and too hard – and that includes even students who score high on the SAT or ACT, suggesting that they’re not lacking raw brainpower.

Slow books? Maybe. But I’m inclined to believe that it’s as much poor and slow readers as slow books.

Manners, Value… and the Appeal of Trump

For the most part, the manners of the first half of the twentieth century have been “modernized,” ignored, trashed, or updated. Which word describes one’s assessment depends on the individual and that individual’s background, and there may well be other terms better suited in the minds of others.

The social upheaval that began in the mid-1960s attacked manners as hypocritical and dishonest, among other things, and while that doubtless wasn’t the only factor in their decline, it was likely the most significant. What the downgrading or even disposal of manners and social custom ignored was the role manners played in affirming individual self-worth.

Hypocritical as manners may be and often are, they require effort on the part of individuals. When people say “please” and “thank you,” when they address or refer to people as Mr., Ms., Mrs., or Miss and the appropriate last name, when they don’t crash or crowd lines, when they open doors for people who need help, when they address letters and emails with names and titles, rather than “Em” or “Bob,” it tends to send a message that others have worth. And when millionaires and billionaires dismiss the concerns of the poor, the working class, and minorities or when political figures call the supporters of an opponent “deplorables,” it’s neither accurate nor useful. More important, such behaviors send messages of devaluation.

How does this tie into Donald Trump and the polarization of the United States?

Both “sides” feel that they’re being devalued by the other side, especially by the leaders of the “other side.” There’s no sense of polite disagreement. The other side is “the dark side,” to be attacked and trashed for their “values” or lack of values. The large majority of Trump supporters, in particular, feel that they’ve been devalued and disregarded and that no one was speaking up for them. Ironically, many of them are willing to overlook Trump’s total lack of manners because they didn’t see anyone with manners able to articulate their views and feelings strongly enough.

Hillary Clinton was more mannered, but far less passionate, and it showed. As a result, too many Democrats drew the conclusion that she needed to be more of a gutter-fighter. Add to that the fact that many people seem to equate crudity with honesty, and to see manners as a trait of a self-serving elite, and she came across to too many of the undecideds as mannered but dishonest. Trump has proved, rather conclusively, that crudity doesn’t mean honesty. Politeness doesn’t, in itself, mean honesty either, but politeness has a far better record of allowing people to talk over controversial issues.

The more someone feels devalued, the less they’re going to listen to the other side, and the only way to even begin to bridge that gap is for the name-calling and vulgar and incendiary epithets to stop, and for people to address the issues politely. Being polite and mannered doesn’t mean giving up passion. Whether one liked Martin Luther King or not, he was passionate, yet he fought for his goals in a mannered fashion. The same can be said of our greatest Presidents.

Like most social conventions, manners are a tool, one devalued in false service of “honesty” and one whose employment would be most useful today.

Showing… or Telling?

A while back, I got an interesting comment from my editor about a section of a manuscript I’d submitted. Her words were, roughly, “Here, it’s better to tell than show.”

I bring this up because one of the maxims pounded into beginning writers is: “Show; don’t tell.” That means the writer should describe the action as it happens, letting the reader “see” it unfold, so to speak. In general, that’s good advice, but not everything needs to be shown. Not every step of a fifteen-mile march, let alone a hundred-mile march, needs to be described. Nor does every part of most love letters need to be read to the reader.

The pertinent wording of a law lends a certain authority if the speaker is an advocate, attorney, or judge… or a trader trying to pull off a shady deal, but what those words are isn’t necessary for a scrivener engaged in copying book after book – unless they bear on the plot specifically or a sentence is used to show how boring the tome truly is.

On the other hand, excruciating detail in certain situations may be vital. The detailing of woodworking in The Magic of Recluce or of barrel-making in The Wellspring of Chaos is necessary in terms of defining the character and development of Lerris and Kharl.

And sometimes, there’s no happy medium, as I discovered when Solar Express was published. As a technology-based near-future SF novel, the detail is vital for some readers and drags the story for others, which is why Solar Express is fast-moving for one category of readers and “slloooww” for others. Without the technical detail the story wouldn’t feel real to the first readers, and for those not into such technical intricacies, the details just got in the way. Some readers have been delighted when I’ve gone into the details of food and food preparation…and complained when I didn’t in a later book.

What book was my editor talking about? And what aren’t you ever going to read? I’m not saying. That’s one of the uses of a good editor – to make the book better than it would have been. And I’m not about to show you that it wasn’t as good as it turned out to be.

Law

What’s the point of law? Or law and order?

I’d say that it’s to provide a common set of rules that everyone in a society can understand and accept, ideally to accept as providing a degree of fairness. Others have or might have another concept – law as a hard and fast rule that defines good and evil in terms similar to their theological beliefs – and still others might feel that law is a tool for the elites of a society to control those beneath them. Some lawyers, I know, believe that the law is a tool they use in attempting to obtain justice, meaning a “fair” outcome for their clients, but, of course, what counts as “fair” always depends on individual viewpoints. From a technical point of view, in the United States, a law is essentially a statement enacted by a governmental body that allows or prohibits certain acts, or imposes certain limitations on them.

And I’m certain there are other definitions of law, but why do we need laws? And why, once we have laws, do we seemingly need more and more of them?

Human societies need laws because there are always individuals who refuse to accept limitations on their acts, even when those acts harm others.

The answers to the second question are manifold. Every law has areas where it lays down absolutes, and every time an absolute is codified into law, it creates situations where the absolute imposition of that law is unfair and unjust, or perceived as such. Someone then wants to remove that unfairness, which requires another law. In addition, every law excludes as well as includes, and people want to “clarify” the law to assure that something heretofore excluded gets included. Then add the fact that certain groups want certain laws for their own benefit.

When people who share the same culture enact laws, they see those laws similarly among themselves, and differently from people who come from a different culture or economic class. That’s one reason why more egalitarian and homogeneous societies tend to have lower crime rates.

In addition, equal penalties or “requirements” under law have differing impacts on people from differing social and/or economic strata.

The entire issue of so-called “voter fraud prevention” laws being pushed by affluent white Republicans in the U.S. provides a good example of this, because those laws are regarded as, essentially, voter suppression laws by minorities and lower-income voters.

The difference in viewpoint comes from the difference in situation. For me, a photo ID isn’t a problem. It’s a slight hassle at most, a few hours once every five years, when I renew my driver’s license, and because I travel occasionally internationally, I have a passport as a back-up. Because I live in a moderate sized town, it’s a ten minute drive to the post office or the Department of Motor Vehicles, and because I was educated to the need for certain papers, I’ve always kept copies of things like birth certificates.

That’s all very easy and convenient – for me. My offspring, however, all live in large metropolitan areas where obtaining or renewing a driver’s license – or a passport — can be a lengthy affair, requiring travel and time. But they’re well-off enough that they can arrange the time and deal with the costs… and they had parents who prepared and educated them to those needs.

A minority single parent working a minimum wage job who lives in a state requiring a photo I.D. has a much tougher time of it. First off, most of the offices that can issue an I.D. are only open during working hours, and most minimum or low-wage earners don’t have much flexibility in working hours and often have to forgo paying work to get through the process. Also, the fees for getting such an I.D. take a greater percentage of their income. Then, even before that, they may have to obtain a certified birth certificate – taking more time and money. They are likely renting, rather than owning a home, and that requires more documents to prove where they live.

And the impact of other laws falls harder on the poor. If you don’t have the money to immediately fix a broken tail-light or a faulty muffler, you risk getting a ticket, and the cost of the ticket just adds to the burden. If you can’t drive the car, you may not be able to work. What is a modest and inconvenient repair cost for a middle-class worker can literally be a disaster for a poor one.

What so many Americans fail to realize is that “equal” laws, even assuming that they’re enforced equally, which study after study shows they’re not, fall more heavily on the poorer members of society.

In reality… the “law” isn’t the same for everyone, nor is it seen as the same by everyone…but we’d like to pretend that it is… or that it’s stacked against us – and which belief you’re likely to hold depends on where you come from…and, often, how well off you are.

Formality in F&SF

All civilizations have at least two sets of rules. The two most basic are laws and custom, and the most obvious subset of custom is manners. With the recent revival/renaissance of Jane Austen and various spin-offs, there are a number of writers who focus on manners and social etiquette, generally in such sub-genres as steampunk or Regency-style fantasies.

But all cultures, in all times and places, have unspoken codes of manners, and they’re not restricted to just attire, although at times, cultures have gone so far as to legally define and restrict what people could wear, based on their wealth and social position, through sumptuary laws, which carried significant penalties.

As one of the older practicing and producing writers, I grew up in a household where manners and custom were drilled into me. Of course, they had to be, because I was, to put it politely, socially oblivious. The majority of human beings have innate social senses. Mine were largely absent. That made little difference to my parents. I was drilled in every possible social grace and situation by my mother, while my father made certain I was more than adequate in sports, particularly those of social value, and both emphasized the importance of achievement in education. For the time, place, and setting in which I grew up, this was the norm.

What tends to get overlooked by a number of younger writers is that such an upbringing is not an aberration in human cultures, and for the majority of human history, those who have ruled and shaped society have had an upbringing that emphasized what was required to succeed. Those who were well-off but not of the elite also did their best to instill such education and manners in hopes that their offspring would have the background and manners to rise economically and socially.

At present, in the United States, the iron requirements of formality that prevailed prior to roughly the 1960s have been relaxed, or battered into scattered remnants of a once-uniform code of elite conduct, just as the former elites have been disparaged and often marginalized.

This situation is unusual among cultures. More social rigidity is the norm, just as the studies of Thomas Piketty have shown that, historically, high levels of income inequality have been the norm. Whether less rigid standards of manners and social behavior are the result of higher technology remains to be seen, but writers should consider [more carefully than many do, and no, I’m not naming names] whether the manners and social conduct of their characters match the culture they’re depicting. The shepherd boy who attains power [and this almost never happens, except in fiction] will never fit in, except through brute power. His children might, if his wife or consort is from the elite and is in charge of their upbringing.

Also, contrary to what some believe, manners don’t reflect weakness, but are a way of displaying and reinforcing power. The decline of formal manners in the United States reflects the decline of old elite structure, and the often enforced casualness of new up-and-comers is meant as a symbol of a new elite, one problem of which is that an apparent lack of manners too easily suggests a lack of control… and a certain level of chaos and uncertainty.

In any case, any culture will have a form of mannered behavior that reinforces whatever elite governs, something that writers should consider.

Diversity… and Diversity… in Fiction?

At present in fantasy and science fiction, and, it seems to me, especially in fantasy, there has been a great push in the last few years for cultural and ethnic diversity, at least in part as a reaction to the genre’s history, in which stories and books largely focused on white male characters. That’s not to say that there haven’t been quite a number of notable exceptions that dealt with non-European ethnicities or with female characters, or even hermaphroditic characters, as in the case of Le Guin’s The Left Hand of Darkness. But the criticism that the field has been too “white-male-oriented” definitely has validity.

I certainly applaud works that effectively create or celebrate different racial or ethnic backgrounds, and even those that tastefully explore sexual diversity, but I’d like to sound a note of reality and caution for authors in dealing with “diversity.”

Some writers explore “diversity” by creating and exploring a culture very different from those traditionally depicted in fiction, and that can be enlightening and entertaining, but that’s very different from presenting a civilization/society which contains large numbers of people from diverse ethnicities.

First, all low-tech powerful civilizations [the kind often depicted in fantasy] have been dominated by an ethnic elite. These elites haven’t been all white, either. The Nubian culture conquered and ruled Egypt for a time, and that was definitely not a “white” culture. Most people know about the Mongol culture, and the fact that it ruled China for a time [until the Chinese absorbed the Mongols in China, which has happened more than once]. I could give a substantial list of non-Caucasian empires throughout history, but the point is that these cultures weren’t “diverse.”

They were different in ethnicity from other cultures, but there have been very few successful civilizations that embodied a great diversity in cultures. One could make the point that the United States, for all its failings, is the largest multicultural nation that has ever existed. Now, there have been empires that included different cultures, but those cultures, for the most part, were geographically distinct and united into the empire by force. About the only places where you might see diversity in any significant numbers were major port cities and the capital city.

Second, diversity in a society creates internal conflicts, sometimes on a manageable level, but if history is any indication, usually not. Even the “melting pot” United States struggles with internal ethnic diversity, and the rest of those nations with significant ethnic minority populations aren’t, for the most part, doing even as well as we are with diversity issues.

That doesn’t mean that a writer shouldn’t write about different cultures. I’m all for that – if it’s well-thought-out and consistent. In reality, however, such stable cultures will likely have a dominant ethnicity/culture, unless the author intends to explore internal ethnic conflict, or unless the author has some truly “magic” potion that can solve the problems of widespread internal cultural diversity, because past experience shows that any internally diverse culture is likely to be highly fractious. And that’s something that both writers… and almost everybody else… tend to ignore.

The Multiplier Tool or… Not So Fast…

Technology by itself, contrary to popular beliefs, is neither good nor evil. It is a tool. More precisely, it is a multiplier tool. Technology multiplies what human beings can do. It multiplies the output from factories and farms. It also multiplies the killing power of the individual soldier or assassin. Fertilizers multiply grain and crop yields. Runoff of excess fertilizers ends up multiplying ocean algae blooms and making areas of oceans inhospitable to most life.

Modern social media makes social contacts and communication more widespread and possible than ever before. Paradoxically, it also multiplies loneliness and isolation. As recent events show, this communication system multiplies the spread of information, and, paradoxically, through belief-generated misinformation and “false news” multiplies the spread of ignorance. Use of fossil fuels has enabled great industrial and technological development, but it’s also created global warming at a rate never experienced before.

Those are general observations, but in individual cases, certain technological applications are clearly one-sided. Vaccines do far more good than harm. The harm is almost statistically undetectable, despite belief-inspired opposition. Use of biotechnology to create bioweapons benefits no one. The use of technology to turn rifles into what are effectively machine guns does far more harm than good.

The other aspect of technology is a failure of most people to understand that, with each new technology, or technological adaptation or advancement, there is both a learning curve and a sorting-out period before that technology is debugged and predictably reliable – and that period is just beginning – not ending – when the technology or application first hits the marketplace.

So… the late-adopters of new technology aren’t technophobes… or old-fashioned. They’re just cautious. But one of the problems today is the feeling by too many that it’s vital to be the first to have and use new technology or new apps. Over the years I’ve seen far more problems caused by rushing to new systems and gadgets than by a deliberate reserve in adopting “the new stuff.” In addition, changing systems every time a manufacturer or systems producer upgrades wastes the time of employees and creates anger and frustration that usually outweigh the benefits of being “early adopters.” Adopted too early or unwisely, technology can also multiply frustration and inefficiency.

Add to that the continual upgrades, and it’s very possible that the “drag effect” caused by extra time spent educating employees, installing upgrades, and debugging systems either reduces productivity gains or actually decreases productivity until reliability improves enough to outweigh the problems caused by the “rush to the new.”

All of which is why I’m tempted to scoff at those individuals who rush to be the first with the newest and brightest gadget. But I don’t. I just wait a while until they’ve stumbled through all the pitfalls and most of the debugging. There’s definitely a place for “early adopters.” It’s just not a place where I need to be.

Truth…

Recently, a reader made an interesting comment to the effect that what I personally believed to be true doesn’t necessarily turn out to be true for others. This is a statement that initially sounds very reasonable, and studies indicate that it’s something that most people believe.

But… it’s also incredibly deceptive and dangerous. Now, I may have been correct, or I may have been incorrect. I may have had my facts wrong, or perhaps they were right. But the idea that correctness, accuracy, or “the truth” of something varies from individual to individual, depending on individual perception, is a very dangerous proposition.

Part of the reason why that proposition is dangerous is the use of the word “truth.” The word “truth” embodies a connotation of moral purity and certainty on the part of the individual defining that truth. On the other hand, facts are. How they’re perceived by individuals obviously varies, and different individuals give different weight to the same set of facts. Different individuals cite different sets of facts in support or opposition to policies, proposals, books, laws, or in other settings. But the bottom line should always be based on whether the facts are indeed accurate, and whether they apply to the situation at hand, not upon my beliefs about them or someone else’s beliefs about them.

It appears to me that today we’ve gotten into a societal mindset that places what we feel about anything far above determining what is accurate, what is actually so, and what is not. As feeling beings, this tendency has always been a great part of being human, but one of the great drivers of the advancement of human civilization has been the effort to determine verifiable facts, workable scientific theories based on replicable experiments and solid facts, as opposed to belief based on what cannot be determined to be accurate.

Yes, scientists and true empiricists have beliefs, but they try [and sometimes fail] to base those beliefs on hard evidence.

I’m not dismissing the importance of belief. Every human being needs things or ideals in which to believe, but the idea that what is “true” for one individual is not for another puts individual perception above accuracy and tends to support the idea that each set of beliefs is as valid as any other set of beliefs, when time and history and science have shown that “truth” resides far more often in what can be accurately determined and verified than in what cannot.

Despite the fact that in the third century BCE the Greek astronomer Aristarchus of Samos had proposed that the Earth revolved around the sun, more than eighteen hundred years later the Christian Church was burning as heretics those who stated that the Earth was not the center of the universe and that it revolved around the sun. The “moral certainty” of faith trumped the facts, at least until science advanced to the point where the proof was irrefutable.

We’ve now reached a point where individuals realize that they must have at least some facts to support the “truth” of their beliefs… and in the welter of “information” that surrounds us, too many individuals pick inaccurate or inapplicable facts in order to support those beliefs.

The idea that the truth of a belief varies from individual to individual is actually an accurate statement of a dangerous proposition – that “individual truth” is superior to verified evidence and facts. The converse should be what we all strive for: that verified evidence and facts support our beliefs, rather than our beliefs forcing us to find facts to support them.

Yet recent study after study shows that the majority of people tailor their facts to support their beliefs, rather than using verifiable facts to change their beliefs. Will we revert to faith over facts, as did the Christian Church centuries ago? Given what I’ve seen over the last few years, it’s anything but an unreasonable question.

When Elites Fail…

Like it or not, every enduring human civilization has had an elite of some sort. By elite, I mean the relatively small group – compared to the size of the society – that directs and controls the use of that society’s resources and sets that society’s goals and the mechanisms for achieving or attempting to achieve those goals.

Historically, and even at present, different countries have different elites, based on military power, economic power, political power, or religious power, or combinations of various kinds of power, and as time passes the composition of those elites tends to change, usually slowly, except in the cases of violent revolution. In general, the larger the country, the smaller the elite in proportion to the total population. In addition, the work of the French economist Thomas Piketty also suggests that economic inequality is the historical norm for most countries most of the time.

Since elites are a small percentage of the population, the members of the elite need a means of control. In the United States, that means has been largely economic from the very beginning. Initially, only white males could vote, and effectively, only white males of propertied status could afford to run for office, where they associated with others of the same status. What tends to get overlooked by many about the Civil War is that, for the southern elite, the war was entirely economic. Slaves were a major form of wealth, and without that slave “property” many of the great southern plantations were essentially bankrupt. Thus, the southern elites were fighting for preservation of their unchallenged status as elites.

The rapid industrialization of the United States resulted in a change in the economic and social structure, with the numbers of small farmers gradually but inexorably reduced and a concomitant growth in factory workers, who initially were, in practice, little more than wage slaves, especially child and female workers. The growing concentration of wealth and power in the “robber barons,” such as Astor, Vanderbilt, Carnegie, Gould, Mellon, and others, without a corresponding increase in the worth and income of workers, was one of the factors behind William Jennings Bryan’s candidacy for the presidency in 1896, as exemplified by his declaration to the Democratic National Convention that “The man who is employed for wages is as much a businessman as his employer…” From there Bryan went on to suggest that the Republican candidate [McKinley] was basically the tool of the monied interests, concluding with the famous line, “You shall not crucify mankind upon a cross of gold.” But Bryan lost the election by 600,000 votes after industrialist Mark Hanna raised huge contributions from industry for McKinley.

With McKinley’s assassination in 1901, Theodore Roosevelt became president, and over an eight-year period he pushed through a host of reform measures that improved public health and working conditions and restricted, and sometimes eliminated, monopoly powers; his successor, William Howard Taft, continued those efforts. In 1907, when a financial panic threatened to bring down the entire U.S. financial system, Roosevelt and his Treasury Secretary worked with financier J.P. Morgan to stave off the crisis. These efforts, and an improved economy, defused much of the working- and lower-middle-class anger.

Roosevelt, however, wasn’t so much a supporter of the working class as what might be called a member of a “responsible elite” – a man who felt that business and power had gone too far.

In contrast is what happened in Russia. People tend to forget that in the early 1900s Russia was the fifth most powerful economy in the world, but unlike Roosevelt and Taft, Czar Nicholas II and the Russian aristocracy continued to bleed the small middle class, the workers, and the serfs, with the result being continued revolts and unrest. Nicholas agreed to the creation of a parliament [the Duma] and then did his best to eliminate or minimize what few powers it had. And, in the end, the old elite lost everything to the new elite, whose power was based on sheer force rather than a mixture of money and force.

There are more than a few other examples, but what they tend to show is that all societies have elites, and that those elites control society until they become incompetent… and another elite takes power.

From what I’ve observed, it appears that an increasing percentage of the American people is anything but pleased with all too many members of the current American elite, especially with business executives, the media, and politicians, and that most of those visible elites seem almost dismissive of or oblivious to that displeasure… and, more important, unwilling to deal with the root causes of that displeasure, except with words and, so far, empty promises.

Supporting the Short Stories…

Most of my readers, I suspect, associate my name with books that are, shall we say, substantial in length and scope. Some may know that I occasionally have written shorter works, and a few may recall that a long, long time ago, for the first ten years of my writing career, I only wrote short fiction.

At present, I’ve written and had published forty-five short works of fiction, mostly short stories, but including two novellas, and that total doesn’t include the novella I later expanded into a novel. By comparison I just turned in the manuscript for my seventy-fourth novel [Endgames, the sequel to Assassin’s Price].

Back in 1972, when I’d just sold my very first story to ANALOG, I had no idea of ever writing a novel, and I might never have written one if I hadn’t essentially been forced to by Ben Bova, the then-editor of ANALOG, who rejected another story of mine (one of many that were rejected) with the note that he wouldn’t consider another story from me until I wrote a novel, because he felt I was primarily a novelist rather than a short story writer. That was an incredibly perceptive observation, given that he’d never seen any work of mine in excess of a few thousand words.

I took his advice, and as the cliché goes, the rest was history… and lots of novels. But I never lost the love of short fiction, and occasionally wrote a story here and there, usually, but not always, by request for anthologies. But stories, even brilliant outstanding stories, cannot sustain a writer in this day and age, as they could in the 1920s and even into the 1940s. I did a rough calculation, and all of my earnings from short fiction, and that includes the two book collections, total roughly half of what I now receive for a single fantasy novel.

This is an example of why, so far as I’ve been able to determine, there are essentially no full-time F&SF short-story writers making a living wage. So I was very fortunate to have gotten Ben’s advice and just smart enough to have taken it… and equally fortunate that readers have liked the books I’ve written.

All of which brings me to another point. As I mentioned earlier, I’ve agreed to write a story for a Kickstarter anthology from the small press Zombies Need Brains, entitled The Razor’s Edge. The neat thing about the anthology is that half the stories are written by name authors and the other half are selected from open submissions. I’ve finished the first draft of the story, and that’s good, because it takes me much longer to write short fiction, but it won’t see print unless the Kickstarter is funded, which it isn’t at present. If it isn’t funded, you also won’t see new stories from other favorite authors, and, even more important, new authors won’t be given a chance.

Yes, I’ll be paid, but it’s not much, and I wrote the story for the story, not for the very modest sum – and that’s definitely true for pretty much all the name authors. So… if The Razor’s Edge is something you might like, or if you want to give some up-and-coming authors a chance, pledge something at the Kickstarter [ The Razor’s Edge Kickstarter ]. I’ll appreciate your efforts, and so will a few new authors, some of whom might graduate to writing big thick books that you might also like in the future.

Preconceptions

There’s an old saying that “it isn’t what you don’t know that gets you in trouble, but what you know that isn’t so.” All too often, what we know that isn’t so lies in the preconceptions we hold. Because erroneous preconceptions are usually feelings and/or beliefs that we seldom examine, we run far greater risks with them than with what we know we don’t know.

Of course, one of the greatest erroneous preconceptions is that we know something that we really don’t, as recently demonstrated by Donald Trump’s statements about how easy it would be to fix healthcare and taxes, neither of which is amenable to a simple “fix,” at least not without totally screwing tens of millions of people.

Erroneous preconceptions by U.S. military leaders about how the Vietnamese would react to U.S. forces were one of the major reasons the U.S. became mired in one of its most drawn-out conflicts, yet military figures seem to have had the same problem in Afghanistan. The same difficulty appears in U.S. views of both China and North Korea, because too many U.S. leaders have the preconception that people from other cultures think of things in the same way – or they look down on others and draw simplistic conclusions based on arrogant assumptions.

On a lighter note and in a slight digression, I’ve gotten several reader comments about Assassin’s Price to the effect that those readers were upset that an imager wasn’t the main character, and several said that they couldn’t get into the book because of that. I can understand a certain disappointment if you’ve been looking forward to a book about imagers, but… every synopsis of the book mentions Charyn, and Charyn is definitely not an imager in the previous two books; he’s also much older than the age at which imagers manifest their talents. In addition, the book is still an adventure, and it still has imagers… if not as the main character. These readers had such preconceptions about the book that they couldn’t really read and enjoy what was written.

The older I get, the more I’ve seen how preconceptions permeate all societies, but it seems to me that in the U.S., erroneous preconceptions are on the increase, most likely because the internet and social media allow rapid and easy confirmation bias. What tends to get overlooked is that human beings are social animals, and most people have a strong, and sometimes overpowering, desire to belong. Social media allows people, to a greater extent than ever before, to find others with the same mindset and preconceptions. This allows, and often even requires, them to reinforce those beliefs rather than to question them, because in most groups, questioners are marginalized, if not ostracized… and that practice goes back much further than the time of Socrates.

Trump’s hard-core supporters truly seem to believe that he can bring back manufacturing jobs and that the U.S. would be better off if all eleven million illegal immigrants were gone. Neither belief holds up to the facts. Far-left environmentalists believe that the world can be totally and effectively powered by renewable energy. Not in the foreseeable future if we want to remain at the current levels of technology and prosperity. Pretty much every group holds some erroneous preconceptions, and pretty much every group is good at pointing out every other group’s errors, while refusing to examine their own.

And, at present, we’re all using communications technology to avoid self-examination and to blame someone else, rather than using it to figure out how to bridge the gaps and recognize the real problems, because you can’t fix a problem you refuse to acknowledge, nor can you fix a problem that only exists in your preconceptions. Nor, it appears, at least for some people, can they even get into a book in a series that they like because the main character doesn’t fit their preconceptions.

Research

Over the past several years, I’ve heard a number of variations on the theme that the younger generation doesn’t need to learn facts, that they just need to learn methods. I have to disagree – vehemently!

The younger generations need to learn, if anything, MORE facts than any previous generation, and they need those facts in their proper context. Those who disagree often ask why this is necessary when computers and cloud databases have far more “storage” than the obviously limited human brain.

In fact, the very size of computer databases is what makes the need for humans to learn facts all the greater. That’s because of a simple point that all too often gets overlooked… or disregarded. To ask an intelligent question and to get an answer that is meaningful and useful, you have to know enough facts to frame the question. You also have to have an idea of what the terms mean and the conditions under which they’re applicable.

While the computer is a great help for “simple” research, the computerization of research sources has often made finding more detailed information more difficult, particularly since search algorithms often rank results by popularity. That can make out-of-the-way information difficult, if not impossible, to find when the searcher doesn’t already know the precise terms and keywords.

Already, there are too many young people who don’t know enough arithmetic to determine whether the numbers generated by a point-of-sale terminal or shown on a computer screen are even in the right ballpark. And from what I’ve seen, grammar checkers are inaccurate enough that they create grammatical errors more often than they correct them.

There’s also the problem of trying to use computers when they shouldn’t be used. Trying to get directions from Siri while actively driving qualifies as distracted driving. It’s fine if a passenger is arguing with Siri, but it’s anything but fine if the driver is.

Then there’s the problem that surfaced in the last election. When people don’t have a long-established in-depth personal store of knowledge and facts, they’re at the mercy of the latest “information” that pops up on the internet and of whatever appeals to their existing prejudices and preconceptions. And that doesn’t serve them — or the rest of us — well at all.

Literary Pitches… and Timing

I’m committed to doing a story for The Razor’s Edge, an anthology from the small press Zombies Need Brains. The theme of the anthology is just how little difference there is between the freedom fighter and the insurgent, and the question of when fighting for a cause slips from right to wrong… or whether that’s just a matter of perspective.

As part of the PR for the anthology, the editors asked the contributing “anchor” writers if they’d be willing to write a blog post on one or all of the topics of creating an elevator pitch, a query, or a plot synopsis for one of their projects.

This posed a problem for me. Strange as it may sound in this day and age, I’ve never done any one of those things in order to sell a book or a story. I will admit that I’ve often managed to develop a plot summary or an “elevator pitch” for at least some of my books – after they’ve been bought… and I’ve hated doing either, and still do.

Why? Well… some of you who read my books might have a glimmering of an idea, but my personal problem is that any “short” treatment of a book – whether it’s an elevator pitch, a query, or a plot synopsis – has to focus on a single element. For what I write and how I write it, that’s a real difficulty, because focusing on a single element massively distorts the whole.

Sometimes, questions help, or so I’ve been told. And some of those questions might be: What’s the most important facet of the book? What’s the hero’s journey? To what kind of reader does it appeal? The problem, for me, is that such questions make what I write come off as one-dimensional.

One of my most popular books is Imager, the first book in the Imager Portfolio. It features Rhennthyl – or Rhenn – who at the beginning of the book is a journeyman portrait artist in a culture vaguely similar to that of 1840s France, except with more advanced steam power. Rhenn is a good artist, good enough to be a master, but for a number of reasons it’s likely he never will be, especially after the master painter for whom he works (under a guild system) dies in an accident that may have been caused by Rhenn’s latent magical imaging abilities.

Now, the book could be pitched as “young artist develops magical abilities and gets trained by a mysterious group to use magical imaging powers.” And if it had been pitched that way, it would likely have flopped as a YA imaging-magic version of Harry Potter, because Rhenn is far more deliberate, not to mention older, than Harry Potter. Also, the Collegium Imago makes Hogwarts look like junior high school.

Imager could also have been pitched as “a magic version of Starship Troopers,” since it does show the growth and education of a young man into a very capable and deadly operative, but Rhennthyl operates in a far more complex culture and society, one that’s far more indirect than what Heinlein postulated.

Then, too, Imager could be pitched as a bildungsroman of a young man in a world where imaging magic is possible. That, too, contains a partial truth, but it ignores the fact that Rhenn’s basic character is already largely formed and that many of his problems arise from exactly that. Such a description also ignores the culture.

Because I never could find a short way to describe any book I wrote, not one that wasn’t more deceptive than accurate, I never did pitch anything I wrote that way. I just sent out the entire manuscript to a lot of people, and, of course, it took something like three years before someone finally bought my first book.

And… for some kinds of books, as in my case, letting the book sell itself may be better than trying to shoehorn it into a description or pitch that distorts what the book is all about. Now, authors aren’t always the best at describing their own work, but over time I discovered that even my editors had trouble coming up with short pitches. So… if those who read your work also can’t boil it down into a pitch… then pitching it that way just might not be a good idea.