Archive for the ‘General’ Category

Book/Author Buzz

Over just the past few years, I’ve gotten the general impression that the reading public’s attention span for new books has shortened. That impression came mainly from observing other authors’ book sales from the outside, and in far more detail from everything surrounding my own books: the sales patterns, the reviews, the blogs, internet commentary, letters, and so on. What I observed in my own case was that, besides changes in overall sales figures, the initial sales “bump” and associated “buzz” have been compressed into shorter and shorter periods after, first, the initial hardcover release, and then again at the time of the mass market paperback sale and the lower ebook price. Because we all tend to generalize from our own experience, my first thought was that, because I’m an older author, this just might be particular to me… and possibly to other authors with a similar career profile.

So I started looking for other figures that might confirm or refute my impression. A quick look at The New York Times listing of books remaining on the fiction bestseller list for extended periods showed that the number of books with long runs on the bestseller list had a pattern – of sorts. That is, there were about the same number of books with a long time on the list in the 1950s, the 1960s, and the 1990s and 2000s, but not in the 1970s or 1980s… or since 2010. Looking deeper into the lists, I discovered something else. While there still remain a few “mega-seller” books every year that stay on the lists for months, sometimes years, as in the case of George R.R. Martin’s Game of Thrones, the percentage of fiction bestsellers with moderate runs, say more than five weeks on the list, shifted fairly significantly in the late 1990s and early 2000s, so that a far larger percentage of best-sellers are “one and dones,” if you will, or perhaps hold on for two weeks. Speaking frankly, that’s certainly been the case for my books. My only multiple-week best-seller was in the 1990s, although, in hardcover sales numbers, my biggest bestsellers have come after that shift, several even after 2010.

At least in a general sense, those figures tend to confirm that “peak sales” are being squeezed into a shorter and shorter period for all but literally a handful of mega-authors. While a short sales period has been the norm for first-published authors as long as I’ve been writing, the shortened prime sales period for all but the mega-authors appears to be a comparatively recent development. Whether that causes a shorter period of heightened “buzz” about an author, or whether those who create the buzz are only interested when a new book comes out… or a combination of both, I couldn’t determine in any factual sense, not unless I wanted to invest months in market research and analysis… and I left that career behind a long time ago.

At the same time, it’s a mixed blessing to learn that my perceptions based on my own situation weren’t totally off base, because it shows on the one hand that I wasn’t too self-deluded, but on the other it does indicate a much shorter attention span on the part of readers in dealing with new books and authors, and I can’t help but think that hurts not only up-and-coming writers whose work is good and solid, even outstanding, but not flashy, but also mid-list authors who don’t get the media and internet buzz for as long a period as before… and this has to impact their sales to some degree.

 

P.S.  Interestingly enough, just after I posted this, I read an entry on Tor.com [ Under the Radar: Mid-Series from the Mid List ] that has a related theme.

Civilization

Why do civilizations or nations fall?  There are many specific reasons, but, in the end, they all boil down to one point – the inability or unwillingness of the majority of inhabitants to pay the price necessary to sustain that civilization or nation.

Sometimes, that inability is largely the result of factors beyond the control of the inhabitants. One can certainly argue that the fall of Angkor Wat, the Pre-Columbian Mayan civilizations, Mohenjo-daro, and the Anasazi cultures of the American Southwest were all due to massive changes in rainfall patterns that rendered physically impossible the continuation of those civilizations as they were then constituted.  That does not mean it became impossible to live in those areas, but that those “civilizations” could not survive under those conditions.  A good example is the collapse of the Norse settlements in Greenland: at the same time the Norse were dying off or leaving, the Inuit were successfully moving in and thriving.

Classical Greek culture never again reached the heights, at least comparatively, of the Periclean era, for one simple reason.  The Greeks would never pay the price of political unity or accept the need for a core of shared values.  This is borne out by the fact that until the modern era, and perhaps even now, Greece has never been united except by an outside conqueror – Philip and Alexander, the Romans, the Ottoman Empire. It wasn’t that the Greeks lacked talent or ability; they simply couldn’t harness either in a shared fashion.  Ironically, a huge factor in the success of the Roman Empire was its use of those brilliant Greeks.

 So why aren’t people willing to support their nation and civilization?  Because no one really values civilization per se.  We all want what we want, and so long as our nation/civilization gives us what satisfies our needs, we’ll support it.  When it doesn’t satisfy the needs of a significant majority, however… that’s where the trouble begins.  I’m seeing signs of this all across the United States, and the “range war” problem in Utah and Nevada is symptomatic of what is occurring. 

The ranchers of the southwest want to hold on to their way of life, a way of life that extends back some four generations. But that way of life was developed in the 1880s, at a time when there were fewer demands on the land and greater levels of rainfall. As average rainfall has decreased, overgrazing has become more common.  The federal government attempted to manage the land by reducing grazing allotments, and, frankly, little else.  The ranchers haven’t seen much active management by the BLM, but they have seen the government reduce their grazing rights in favor of protecting wild horses and the desert tortoise.  On the other hand, as I noted earlier, federal grazing fees are far lower than the fees on privately owned land.  The BLM has not lived up to its own plans in managing wild horses, and that upsets the ranchers because they believe it threatens their way of life.  The environmentalists believe that excessive grazing is destroying the environment, which will eventually harm everyone.

In Iron County, the BLM currently permits grazing rights for 21,000 cattle animal unit months [AUMs], where one AUM covers a cow and a calf.  But there are currently slightly more than 2,000 feral/wild horses on federal and private lands in Iron County, 1,700 more than allowed under the wild horse protection laws, and the range won’t support both.  Nor does the $1.35 per AUM the BLM charges the ranchers cover the management costs for just the wild horses.  But the political power of the ranch and farm lobby keeps the fees low, and from what I’ve seen, if the fees were higher, I suspect the revenues would be diverted to more “urgent” federal programs.  So we have a threatened way of life, a threatened environment… and a majority of Americans who clearly don’t want to pay more for either… and particularly not to support ranchers who are in fact being subsidized by the government.  The ranchers don’t see it that way.  Believe me, they don’t!
That’s why almost a thousand of them turned up in Nevada last weekend, many of them armed, and why the BLM backed down and released the cattle they’d impounded because Cliven Bundy hadn’t paid over a million dollars in grazing fees.

If this were an isolated case, it wouldn’t be so bad, but various types of protests are occurring across the United States because people aren’t getting what they want.  Many women are angry because Congress won’t support legislation that would make it easier for them to get equal pay.  Same-sex couples want the same “marriage” rights as heterosexual couples; religious groups see their “rights” and way of life being threatened and want those rights put into law, even if that disadvantages others who don’t share their beliefs.  Businesses want the right to maximize profits, even when that profit maximization denies workers affordable health care.  Workers want affordable health care even if that means their employer is less profitable… and may even close.

And the thing is that even though we have a far higher standard of living than two centuries ago, our wants have increased far faster than that standard of living.  Yet when enough of those wants aren’t satisfied, we’ll end up losing our civilization because, collectively, we won’t pay enough to maintain it.

Crisis/Short-Term Funding

We’ve all seen it, over and over.  A bridge collapses, usually over a river.  Or here in Iron County, a landslide closes a state highway, and it takes eight months moving a huge chunk of  mountainside to repair the damage and re-open the road. All across the country, we have infrastructure teetering on the edge of collapse, with the potential to kill people and cause millions, if not hundreds of millions, of dollars in damage in each case. But nothing gets done until there’s a crisis, and then what’s done is often only the cheapest acceptable fix.

In the case of the landslide here, the eight-month closure was the third to close the highway in the twenty years we’ve lived here.  Independent engineers who’ve studied the road suggest that it should have been built on the other side of the canyon, where the rock and ground are more stable.  They even suggested it after the last eight-month closure and repair, which added a detour of more than 100 miles to the commutes, delivery routes, and local cargo hauls of residents, businesses, and tourists.  The state highway department turned that proposal down, claiming it was too expensive.  Yet, if the highway had been built where the independent engineers suggested, that section wouldn’t have to be rebuilt every five to ten years, and the overall cost to taxpayers would be less, not to mention the possibility of reducing fatalities.

Yet pretty much everywhere in the United States, and likely elsewhere in the world, since I doubt human nature changes that much once one crosses borders – with a few possible exceptions – the same sort of deferred maintenance or “do it cheap now” attitude prevails with regard to the basic structures of society, despite the fact that spending a few more dollars now would save more dollars and lives later.

Why?  Because there seems to be an attitude that keeping taxes as low as possible is prudent.  It’s not.  Keeping taxes as low as possible when calculating costs and expenditures over a twenty or thirty year period is prudent, but keeping them as low as possible every year and deferring every possible maintenance or construction project until something has to be done only results in higher taxes… and higher costs on the community.

There’s an old saying that expresses the point succinctly – “penny wise and pound foolish” – but sayings like that are somehow out of date, which is ironic, since it’s usually the Republicans who are looking to cut government spending, even while they keep saying they support traditional values.

 

The BLM Grazing Mess

This past week, what amounted to a small scale range war erupted just across the Utah border in Nevada, and in a spirit of misguided “idealism,” compounded by greed and ignorance, Iron County [Utah] officials weighed in on the side of a long-time law-breaking rancher.  Now… that’s not the way the local media put it, but so far as I can determine, here are the facts.

A Nevada rancher named Cliven Bundy has been grazing over 500 head of cattle on BLM land for over 25 years without paying federal grazing fees, ever since the BLM decreased the number of cattle permitted on the federal lands that the Bundy family had used for roughly a century.  Despite previously paying grazing fees, Bundy has claimed that he and his family own the lands through their “beneficial use.”  After twenty years of contention and two recent court decisions denying Bundy’s claims, the BLM began removing Bundy’s cows this past week.  Fearing violence, the BLM issued an order banning the Bundy family from the federal lands where the cows were being removed and deployed heavily armed agents to protect the federally hired wranglers who were removing the livestock.  The Bundy family ignored the order and attempted to videotape the removal.  When they were ordered to leave, one family member refused and offered physical resistance.  He was arrested, and the round-up proceeded.

To make matters worse, the Iron County sheriff and the Iron County Commissioners, citing the Bundy case as another example of the BLM trampling over local ranchers in favor of wildlife, offered an ultimatum to the BLM, claiming that its failure to round up excessive numbers of wild horses on BLM lands was jeopardizing the health of the land and thus penalizing local ranchers, whose access to and leasing of federal lands was limited because of the poor state of those lands caused by too many feral horses.  The Iron County officials threatened to round up the horses themselves if the BLM failed to come up with an immediate plan.  Late last week, the BLM agreed to develop such a plan, after previously saying that it didn’t have the funds necessary for such a round-up.  Yesterday, both Iron County and Beaver County began moving feral horses off private property, claiming that, if the horses weren’t removed immediately, they would “destroy” the range in the next three weeks.

This is not a case of right versus wrong; it’s a case where everyone is wrong, and it’s a mess.

Under federal law, the Bureau of Land Management [BLM] can lease grazing rights to BLM lands in eleven western states, under a formula set by Congress, and last revised in law in 1978.

This past February, BLM set the rate at $1.35 per animal unit per month [AUM], the same rate that has been the case for the past eight years.  An animal unit means either one cow and her calf, one horse, or five goats or sheep.

This rate isn’t a bargain; it’s an absolute steal.  To begin with, according to the federal Government Accountability Office [GAO], grazing fees amount to less than one-sixth of what the federal government spends maintaining those lands.  In addition, a 2005 GAO study found private, state, and federal grazing fees running anywhere between $20 and $150 per AUM, with rancher-friendly Texas charging the top figure for some of its state lands.  According to a Congressional Research Service report, “The average monthly lease rate for grazing on private lands in 11 western states in 2011 was $16.80 per head.”

In the Bundy case, the family hasn’t paid any fees in over twenty years – and feels that they’re the victims.  What’s more disturbing is that the Bundys report they’ve received letters of support from thousands of people across the west.

The BLM is far from blameless. According to BLM figures, the BLM lands in Iron County should carry only 300 wild horses, but the BLM estimates there are over 1,200, and it has cut grazing allotments to ranchers, claiming that the funding Congress has allowed for dealing with wild horses is inadequate – and it likely is, given that $70 million was appropriated last year for wild horse management, compared to the nearly $1 billion spent on livestock management on federal lands. How could the BLM let these situations go on for over twenty years?  Why has it set the grazing fee so low at a time when the government is running massive deficits? And how could it let the wild horse situation get so bad that there are hundreds of horses on the verge of starvation and that certain lands have been so badly overgrazed they are becoming true deserts?  As for the local ranchers, I find it hard to believe, first, that the range could be totally destroyed in three weeks; but if it could be, exactly how long will it take their cows to do the same thing, and why didn’t they make such a claim before things got so bad?

And finally, why are all those ranchers so indignant about federal overspending and big government – when they’re right up there with all the welfare cheats, getting huge subsidies at the expense of everyone else?

 

The Illusion of Choice

The other day, a reader commented that I’d chosen to live in the semi-sovereign theocracy of Deseret, otherwise known as Utah. In the abstract, and in the fact that we did move from New Hampshire to Utah, that’s true.  In the real world, it was far from that simple… and that’s true of many major choices most people make in life.

In our case, the facts were that my wife was teaching at Plymouth State University, in a full-time but non-tenure-track job, when the New England economic downturn of the early 1990s hit the state university system and her job was eliminated on very short notice. She was offered an adjunct position at less than half pay and without benefits.  I had become a full-time self-employed writer just two years earlier, and while we were making ends meet, it would have been rather difficult to keep doing so if she had to accept half pay.  In her field, very few jobs are open or offered in any one year – anywhere in the United States – and especially for women, because singing professorships remain one of the few areas where gender discrimination is still permitted.  All a music department has to do is specify that it is looking for a bass, baritone, or tenor.

So… when she was offered the position of head of the voice and opera program at Southern Utah University, because writers are portable, the choice was between a long and likely worsening financial struggle in New Hampshire or moving to Utah, and it didn’t take much time to decide to move… a move we certainly haven’t regretted, despite certain cultural aspects we knew in advance would be difficult… not to mention a long and costly struggle to sell the New Hampshire house, which we could only sell at a 40% [yes, that’s correct] loss.  However… was it really a choice?  Technically, you can say it was a choice, and we made it, but most people, I suspect, when faced with those sorts of choices, decide as we did, accepting the option that makes the most sense occupationally and financially.

While we came through this difficult time eventually better off, there are others faced with so-called choices who aren’t so fortunate.  Poor full-time working single parents with children often are faced with the “choice” of making slightly more money – and losing Medicaid health care for their children, which means that more income results in a lower standard of living.  Is deciding against working more really a choice?  Or the illusion of one?

In cases similar to ours, but unlike us, what if one spouse has a solid job in the local area, and the other spouse can’t find a new job at anywhere near the same skill and pay level in that same area, while the still-employed spouse can’t find one in the new area – and moving would result in a lower total income?  Either choice is bad… and this is happening to more and more two-paycheck families.  Yet those who come up with the statement, “But you chose,” don’t see that such a “choice” isn’t really a choice for anyone who weighs the options carefully.

What’s also overlooked is that earlier choices in life restrict later choices.  Having children early in life restricts what a couple can do for the immediate years to come, but having them late in life may mean that you won’t be retiring any time soon.  Borrowing vast sums of money to pursue a medical career likely means long years of private practice and likely a specialty field, because those are usually the only parts of the field where the income can pay off massive student loans.  I’ve known lawyers who have turned down judicial appointments for similar reasons.

This “illusion of choice” permeates everywhere.  It’s one thing when executive decisions are patently illegal, but does a junior executive or a field engineer with a family and large student debts loudly and persistently question executive or corporate decisions that are merely questionable?  How often?  How loudly?

What’s so often overlooked or quietly ignored is that so many of the so-called choices in life are anything but the result of choosing between “equal” or close-to-equal possibilities.  Sometimes I suspect the only “real” choices one has are in the supermarket, where you have at least several varieties of every product, all close to the same price, not to mention the generics.

In short, in real life, all alternatives of choices have downsides, and most “choices” aren’t between equal alternatives, and, yes, people do make bad choices… all the time.  But from what I’ve seen and experienced in life, at all too many times, no choice is optimal, and suggesting that someone who selects the least damaging choice is at fault for the downsides is disingenuous at best, if not arrogantly dismissive.  It also perpetuates the “illusion of choice.”

 

Reminders

As a writer, for most of every day I work alone, at least in terms of human companionship, although what I do is observed by our three dogs and two cats.  Thankfully, in regard to my professional activities, their communication skills do not extend to writing, proofing, or commenting upon what I produce.  This means that I’m relatively isolated from much of the activity in the area of the United States where we live, Utah, otherwise known as the semi-sovereign theocracy of Deseret.  Periodically, though, events of so pointed a nature surface that even I cannot escape reminders of the omnipresent theocracy.

Two months ago, it was the pronouncement by the theocrats – or one of the theocracy’s General Authorities, as I recall – that the LDS Church firmly opposed any change in the state’s highly restrictive and often ludicrous liquor laws.  Needless to say, although a number of legislators had suggested bringing the laws into the twentieth century, thus only being a century or so behind the rest of the nation, the legislature immediately decided against considering any changes.

The second reminder arrived with the Monday morning paper, the Salt Lake Tribune.  As part of the paper, there was a “32-Page Special Section,” under the Tribune letterhead, and with no indication that it was an advertising section, entitled “Commencing With a Mission,” featuring the color photograph of a good-looking male high school student who will be skipping his high school graduation ceremony in order to begin his two year LDS church mission.  Only inside the edition next to the page numbers is any indication of the purpose of the section, and there the words “LDS Conference” appear.  In short, the “Special Edition” consists of 32 pages specific to the Mormon Church and its biannual faith-wide General Conference.  The section has ads, just like the rest of the paper, and while some clearly have an LDS slant [more about those later], every story is LDS-themed.  It’s clearly not an advertising insert.

I certainly would have expected such an insert if I subscribed to the Deseret News, which is owned and operated, if through subsidiaries, by the LDS Church – but I’m anything but of the LDS faith, which is why I subscribe to the Tribune.

As for the ads, some were the “normal” types – for tankless water heaters, vacation destinations, eyeglass providers, furniture stores, etc. – while others were clearly aimed at missionaries, advertising the best “missionary suits” and garb or offering “missionary discounts” on luggage, as well as a few others tailored toward church-related goods.  The ad I found screamingly objectionable was the full-page spread by Utah State University, perhaps because the lead “point” in the listing of USU’s features was that it boasts the “World’s Largest LDS Institute Program.”   I have trouble when a state-supported institution receiving federal funds advertises its religious capabilities in a religious supplement… and those capabilities are limited to a single faith.

Somehow, I can’t imagine the Los Angeles Times printing, or getting away with printing, a special supplement devoted to covering Scientology, especially in such a flattering fashion, or the Boston Globe printing a similar section on Christian Science… or state universities in either California or Massachusetts advertising their support of a specific faith.

But then, California and Massachusetts aren’t semi-sovereign theocracies.

 

Anything Will Work at Harvard – Or Similar Institutions

Decades ago, I read a study that compared the abilities and success of teachers in public secondary schools to those of teachers in elite private schools.  The conclusion back then was that the overall teaching capabilities of teachers in public schools were actually better than those of teachers in private schools, but that the students in private schools, on average, learned more.

From what I’ve seen over the years, as a student, as a parent, and as a university lecturer, I suspect that conclusion remains largely accurate.  That’s why I’m extraordinarily suspicious of any “new” idea or concept in education that comes from institutions such as Harvard or Yale, Williams, Amherst, Swarthmore, Wellesley, Smith, Ripon, and any number of other “elite” colleges or universities.  Why?  Because when you have both top-level students and professors, almost anything will work in educating those students… and that’s something that all too many educators, particularly the administrations at state colleges and universities, don’t seem to grasp.

Now… if that “new” idea works at an inner-city charter school, or a public school where most of the students are well below grade level, then… then I get interested.  Or if it works at a mid-level state university or college. But after teaching at a state university and being married to a professor who’s taught at that level for over thirty years, I’ve seen more “new” ideas floated and fail than I can even begin to count, and almost none of them worked as well as the old-fashioned methods of maintaining standards and accountability and simply requiring students to learn the material.

The same principle also applies to “inspirational” teachers. Certain individuals have the capability to inspire.  From what I’ve observed, that ability can be refined and better directed, but not all teachers have it, nor do all CEOs, nor all professionals in any given field.  Yet I don’t see anyone claiming that the only good CEOs, lawyers, doctors, or dentists are the inspirational ones.  But in education, I’ve seen all too many books and articles, based on what inspirational teachers do, claiming that inspiration is the only way, and I’ve seen those methods succeed in making students feel better but fail in improving their learning.

The techniques used by successful professionals [the ones who have been successful for decades, not those who are “flavour du jour”] in any field vary considerably, but the one thing those successful professionals all have in common is subject matter mastery, combined with self-discipline… and both of those are exactly what it takes for students to be successful.  And, just as in the rest of life, not all students have the ability, for whatever reason, to master certain subjects, and of those who do, not all have the self-discipline to keep at it steadily enough to attain that mastery.

What’s overlooked all too often in all the educational fads is that the desired end result is a student with a mastery of the subject, with the ability to think and apply that knowledge with skill, and the self-discipline to do so.  Fads come and go;   those basic requirements don’t.

Profit?

Over the past several centuries, all manner of ethical and practical questions have been raised about economic profit: its role in a society, and even whether it is necessary at all. Most truly thinking individuals [yes, a value judgment on my part] believe that, because there has never been a long-standing civilization that did not incorporate a market-based economic system in some form, some form of profit is also necessary.  Beyond that, I have some doubts that any majority consensus exists.

From my own experience and research, however, I will make two observations:  (1) Absolute maximization of profits results in a minimization of freedom.  (2) Absolute minimization of profits does the same.

The second point is more obvious to most people because a market-based economy, for all practical purposes, ceases to exist if there is no profit at all, and even the egalitarian Scandinavian countries had to pull back from taxation levels so high that they effectively destroyed profits.

The first point is continually ignored or disputed by extreme free-market types, despite the plethora of evidence to the contrary. What isn’t obvious to most people, particularly politicians, regulators, and ultra-price-conscious consumers, is that maximizing both revenue and profits requires keeping wages and costs low, keeping inventory to those items in the highest demand, and eliminating competition.  Politicians and regulators, at least at present, look only at low prices.  The Justice Department’s e-book suit against Apple and the Big Five publishers was a perfect example.  The Department of Justice effectively stated that it didn’t care if Amazon’s practices gave it a ninety percent market share.  All DOJ cared about was that short-term prices were lower.  Well… now that Amazon has won, just what happened to all those low prices?  I certainly don’t see much difference to the consumer.  Another example is the cable/satellite television market.  Now that the major communications content providers have largely consolidated and are maximizing their profits, the diversity of content has dropped drastically… and prices have increased. Walmart is yet another example.

Or put in another context, freedom in any area isn’t free.  Just as there’s a cost to political freedom, there’s also a cost to economic freedom of choice, and when low prices completely trump freedom of choice, not only does quality suffer for the goods most people can afford, but only the ultra-rich can afford truly high quality goods and services… and some goods and services aren’t available at any price… and in the long run, prices aren’t even lower.

 

Heroism?

In the early to mid-1970s, a Jesuit priest began a quiet, indeed almost unknown [at the time], effort to help political refugees, liberals, and reformers escape incarceration or liquidation by military-dominated South American governments.  During this period, the priest never spoke out against these governments, although one Catholic bishop – Jeronimo Podesta – did so and was promptly suspended, never to return to the clergy.  At the same time, this Jesuit priest continued to help many of those persecuted by the government, and various reports put the number of those he helped or saved in excess of a hundred, possibly more, at a time when literally thousands of people vanished, never to be seen again. Much later, in 2000, he was the first prominent Catholic to declare that the Argentine Catholic Church needed to put on “garments of public penance” for its failures during the “Dirty War” of the 1970s. That Jesuit priest, of course, was Jorge Mario Bergoglio, now Pope Francis.

Some of his detractors fault him for not speaking out and taking a public position during the Dirty War.  Others, particularly those he did save, praise him for what he did, noting that he even saved people and made efforts for those with whose political beliefs he did not agree.

Was he a hero, as some say, or did he betray his duty as a priest by not taking a public stance? While likely everyone will have an answer, I’d like to pose the question differently – when is taking a public position useful, even heroic, and when is it a futile and meaningless gesture, one that essentially keeps the individual from doing any good at all?

In a way, to my thinking, what happened in Nazi Germany before WWII offers a useful analogy.  In 1930, speaking out against Hitler and the Nazis might have done some good, especially if more powerful and noteworthy individuals had done so.  By 1940, doing so was suicide, and a suicide that accomplished absolutely nothing.

Father Bergoglio had been ordained as a priest in December 1969 and did not take his final vows as a Jesuit until 1973. He was not well-known, if known at all outside a limited circle, and, as a man from a humble background who had worked as a janitor and a low-level laboratory technician, he certainly didn’t have powerful friends or influential contacts at that point in his life. When those few in the church who had position and power, such as Podesta, did speak out, they were silenced, in one way or another. It would appear that Father Bergoglio did what he could and did his best to save those he could.

Was it heroism?  Probably not grand and glorious heroism, but it took great courage and strength of will, because, if he had been discovered, he would likely have vanished as so many others did during that time. And is it heroism, truly, to speak out when it is vain, when by being less obvious, one could save more?  Yet…how does one tell?  And without being in Father Bergoglio’s cassock at the time, how can those who would judge him tell, either?

NOTE:  As one whose beliefs approximate Anglican/Episcopalian agnosticism, I’m trying to offer an open question.

 

Moby Dick Is Missing

Moby Dick is indeed missing, but it’s the asteroid, not the Herman Melville book, which, I have to confess, I could never get around to finishing, one of the handful of novels I chose not to struggle through… and considering how many bad novels I’ve had to read over my career, I think that says something.  The asteroid Moby Dick [asteroid 2000 EM26], a chunk of rock some 270 meters long, was supposed to show up sometime last month roughly 2 million miles from Earth… and didn’t.

The fact that telescopes couldn’t find it doesn’t mean that aliens exploded it or that it disintegrated, but rather that either astronomers didn’t calculate its orbit correctly when the asteroid was discovered in 2000 or that various gravitational forces nudged it into a different orbit. What’s troubling is that this “failure to appear” is indicative of our vulnerability to large objects colliding with Earth. A piece of rock roughly the size of a WWII cruiser falling to Earth doesn’t sound that catastrophic to most people, but most people don’t understand the results produced when even comparatively small chunks of rock slam into the planet.

The Chelyabinsk meteor that exploded over Russia a little over a year ago with a force of 500 kilotons [some 30 times more powerful than the atomic bomb at Hiroshima] was much smaller than Moby Dick, only some 20 meters across, but it injured some 1,500 people seriously enough to require medical treatment and damaged over 7,000 buildings – all from the effects of a shock wave that began high in the atmosphere, more than 20 kilometers away. That’s what a comparatively small chunk of rock did after hitting the Earth’s atmosphere at more than 40,000 miles per hour.
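The scale of these comparisons is, at root, simple kinetic-energy arithmetic. Here’s a minimal back-of-the-envelope sketch, assuming a spherical stony body with a guessed density of 3,000 kg/m³ and an entry speed of about 19 km/s [roughly the Chelyabinsk figure]; the function name and numbers are illustrative, not measured values:

```python
import math

def impact_energy_kt(diameter_m, velocity_ms, density_kgm3=3000.0):
    """Rough kinetic energy of a stony impactor, in kilotons of TNT.

    Assumes a spherical body; density and velocity are
    order-of-magnitude guesses, not measured values.
    """
    radius = diameter_m / 2.0
    mass = density_kgm3 * (4.0 / 3.0) * math.pi * radius ** 3  # kg
    energy_joules = 0.5 * mass * velocity_ms ** 2              # KE = 1/2 m v^2
    return energy_joules / 4.184e12                            # 1 kt TNT = 4.184e12 J

# Chelyabinsk-sized body: ~20 m across at ~19 km/s
chelyabinsk = impact_energy_kt(20, 19_000)

# "Moby Dick"-sized body: ~270 m across at the same speed
moby_dick = impact_energy_kt(270, 19_000)

print(f"~{chelyabinsk:,.0f} kt vs ~{moby_dick:,.0f} kt "
      f"({moby_dick / chelyabinsk:,.0f}x larger)")
```

Because energy scales with the cube of the diameter, a 270-meter rock carries on the order of two and a half thousand times the energy of a 20-meter one – which is why a “moderate” asteroid is not a moderate problem.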

In 1908, another object, either a comet or a small asteroid, exploded above the Tunguska River in Siberia, flattening some 80 million trees over an area of 800 square miles, and, of course, there is also the Chicxulub Crater at the edge of the Yucatan Peninsula – a crater 110 miles in diameter formed 65 million years ago when a bolide six miles in diameter struck the earth at 40,000 miles per hour. Scientists have calculated that the impact would have released two million times more energy than the most powerful nuclear bomb ever detonated, broiling the earth’s surface, igniting wildfires worldwide, and plunging Earth into darkness as debris-filled clouds blanketed the atmosphere. Some suggest this was the event that led to the end of the dinosaurs. Whether it did or not, such an impact today would effectively destroy pretty much all human societies and their infrastructure.

But for all the suggestions and warnings, what have we done? Not nearly enough. Nowhere near all the near-Earth asteroids and other objects capable of impacting Earth have been discovered and tracked, and we certainly haven’t developed the capability to deflect even moderate-sized rocks. In the meantime, the financial industry spends tens of millions of dollars to execute securities trades in nanoseconds in order to make billions through sheer speculation.

 

Religion and Civilization

From reading some of my posts, readers might get the impression that I’m not extremely fond of religion.  In some ways, I’m not.  I’m especially skeptical of organized religions that, in their attempts to grow and perpetuate their doctrines and “way of life,” succeed in creating a mental state where those who practice the faith become essentially blind to the shortcomings and huge inconsistencies inherent in that faith… and often reject literal physical realities because they conflict with their beliefs.

On the other hand, given human nature, I’m not so sure that human societies without any religion at all, at least today, might not be far crueler, less ordered, and less desirable places in which to live. Then again, ultra-theocratic societies tend to be religiously ordered to the point of denying human freedoms, as well as being crueler and less desirable places to live, especially for women.

As I’ve noted before, the only codes of behavior the majority of human beings have accepted, at least for most of human history, have been those with strong roots in religion.  I suspect that’s because most of us really don’t think another human being has the “right” to declare what rules our conduct should follow, but that “God” does.  Yet, paradoxically, “God” doesn’t tell us that.  Other human beings tell us what God told them is correct behavior, and for most people throughout history, such theologically derived codes of law and behavior have been accepted. I suspect part of the reason for this is not necessarily great unanimity, but a combination of religious belief and simple pragmatism, and it may be that the key to a “good” society is indeed the combination of a theological concern and a secular pragmatism.  Certainly, those few societies without a significant religious “tie,” such as Nazism and Communism, have been anything but “good” places to live, yet the same is true for ultra-religious societies.  Oh… the “true believers” in those societies did well, but not many others.

History does show that societies dominated by religion tend to be short on human freedoms, creativity, and progress.  Societies where religion plays no role in setting cultural values also tend to be short on human freedom and restrict creativity, but often achieve progress for a time by stealing from others in various ways.

So, as much as I may complain or point out the notable shortcomings of religion, and organized religion in particular, it appears that healthy societies require some theological basis, at least at the current level of human ethical development.  The question then becomes to what degree religion should influence government, law, and behavior. Personally, I think the Founding Fathers got it right, but I mean it in the way they wrote the Constitution, and not in the activist way in which too many true believers seem to think that freedom of religion means the freedom to compel others to behave according to their religious beliefs or the freedom to enact laws that in some fashion or another effectively institutionalize those beliefs.

 

Standards and Freedom

Last Friday, my wife and I went to a modern dance concert, and then, on an airplane en route to Denver, I read through a poetry magazine that I had received as a thank-you for a speaking engagement.

The modern dance concert was actually a fiftieth-year retrospective by an established and respected mountain states company that presented a cross-section of dances previously offered over the years and concluded with a new piece, a “prospective” dance just choreographed by the company’s new director/choreographer. After the concert, we compared notes, and we both agreed – the older the work, in general, the better we liked it, and since my wife, the opera singer and director, has worked with music and dancers for more than forty years, she does have some expertise. The newest “piece,” while theoretically presenting windows into life, seemed almost aimless, themeless, and without much truly musical accompaniment, not to mention the fact that much of the “dancing” seemed to occur with the entire body either on the floor or extremely close to it.

The poetry magazine, from my professional viewpoint, was even worse, although it was a slick, well-designed, and well-laid-out effort, bound like a small trade paperback. The magazine has been published semi-annually for over five years. The issue I read included 82 poems by 49 authors. The first thing that struck me was that there didn’t seem to be any poetry in the “poems.” Further scrutiny supported that impression, as I could discover no end-rhymes, internal rhyme, alliteration, or discernible meter, merely an attempt at innovative typography… and that was true for every single “verse.” After reading the short biographies of the contributors, I was even more astounded. Most had published widely, and several had won prizes for their work.

Now… I will admit to having been skeptical of most “modern” verse for years, and I have wondered, not infrequently, whether the verse [I hesitate to call it poetry] I have read in the pages of publications such as The Atlantic Monthly and The New Yorker is really representative of the state of American poetry today. It appears so, unfortunately. Well over a half-century ago, and perhaps longer, one of the last great American poets – Robert Frost – made a statement to the effect that writing free verse was akin to playing tennis with the net down. From what I’ve seen, all too many would-be poets not only don’t know the rules of the game, they don’t even know that there are rules as to what poetry is, or at least what the rules historically have been. And, well, if the lack of rules constitutes the “new rules,” then the result ought to be called prosedy or something similar.

In both the instances I’ve mentioned here, the “creators” don’t seem to have the faintest idea that greatness or excellence doesn’t come from ignoring the rules, but from knowing them, using them, and transcending them [which occasionally means breaking them, but doing so effectively requires knowing what the rules are, how they should be broken and why… and what exact effect one is attempting to achieve by doing so]. The point underlying both these examples is that excellence in anything requires structure, not just scaffolding. Yet this loss of structure seems to be occurring everywhere in the U.S., from the decline in courtesy to our crumbling infrastructure, and everywhere rules are being broken, “just because.”

There’s a line from an old Janis Joplin song: “Freedom’s just another word for nothing left to lose.” And when structure goes, you indeed don’t have much left.

 

Ideals and Reality

One of the great advantages of writing science fiction is that I can create a society from relatively whole cloth and try to make it real to my readers, but there are certainly dangers in making that effort. If you don’t understand some of the basics of societies – such as economics, trade, politics, the role of beliefs, etc. – you may still have a wonderful story, but one that many readers will not finish, or if they do, they’ll be saying that the society or culture really wouldn’t work. Most professional writers understand that, but a number of them fail to ask another question: How did the society/culture get that way? This was a point brought up by another writer at a recent conference I attended [LTUE], who commented about a well-known best-selling book, “The society would work, once it reached this point, but I can’t see how it ever got there, given human nature.”

The reason I bring this up is twofold. The first is to point out a few things to aspiring writers: (1) gross errors in world-building can hurt, and (2) given the example above, so long as the society seems to work, even if there’s no way to have gotten there, it probably won’t hurt your sales. The second is to suggest that, even in our world, political ideologues don’t seem to understand that, no matter how good an idea or principle is, you have to have a way, technically and politically, to get there.

I often get comments on various blogs suggesting idealistic solutions to various problems or difficulties we face today. Many of these comments suggest “whole-cloth” solutions, whether a total free-market system or the replacement of the entire income tax system with a value-added tax, or… There have been a substantial number of these idealistic solutions over the years, but the difficulty all of them share is… there’s no practical way to get there from where we are now, except via catastrophe. History suggests, rather strongly, that civilizations either make gradual changes or ossify and collapse… or sometimes just implode into revolution or chaos.

What that means is, for example, that short of a civil war, a takeover by a dictator, or the complete and total meltdown of the banking and economic system, we are not going to see the total abolition of the welfare system as now practiced in the United States and its replacement by a totally new system.  Why not?  Because there’s no way to get there from where we are now, because too many people will oppose such radical change – unless our system collapses totally.  Even the threat of total collapse won’t do it.

The same thing appears to be true of dealing with global warming. Until a few island chains cease to exist, until Miami and New Orleans are drowned, until New York City suffers such a storm surge from a hurricane or Nor’easter that all the subways are flooded and inoperative and the east coast is blacked out for weeks, there won’t be the economic or political support for meaningful measures… and by the time there is, the problem will be so big that no amount of resources will be able to save the large sections of the planet where literally hundreds of millions of people live… and given who lives where, it appears likely that a great number of those who oppose gradual but meaningful change are going to be hit the hardest – along with a lot of those who would like change but don’t have the power to effect it.

In the end, while ideals can prevail, they have to change the underlying political or social conditions first, and when ideals conflict with physical reality, reality wins.

 

The Cost of Principles – To Others

At the moment, there are a number of court cases dealing with the conflict between “religious freedom” and statutory law. The core issue in many of them is whether various corporations or organizations should be required under law to provide medical services, primarily those involving contraception and abortion, to employees when those services are against the deeply held beliefs of the corporate/organization owners.

As I see it, there are three fundamental problems with the assertion that withholding such services from health care plans is an exercise of religious freedom, and that compelling the provision of those services is a violation of that freedom.  The first problem is the definition of “freedom of religion.”  The provision of coverage to pay for such services neither obligates the provider to endorse that service nor to require anyone to use it.  Employees are free to exercise their “religious” rights either to use or not use those services.  On the other hand, failure to provide such services requires employees who wish or need those services to pay for them or do without.  Therefore, allowing an exemption to such employers is effectively allowing the employing organization to impose its beliefs on all employees… and imposes an additional burden on the employees if they wish not to follow those beliefs.  This part of the issue has been raised and will doubtless be decided by the courts in some fashion or another, sooner or later.

The second aspect of the problem, however, doesn’t seem to have received much attention, and that’s the full scope of the economic discrimination the exercise of such “religious freedom” can have.  If Corporation A does not provide certain medical services, for whatever reason, the likelihood is that its healthcare costs will be lower than those of Corporation B, which does. In addition, the costs of those services, when used, must be absorbed by the employees of Corporation A.  Thus, Corporation A gains a competitive advantage while its employees are at a disadvantage. Given the fact that jobs remain hard to get, it’s also unlikely that many, if any, of the employees from Corporation A will depart over the additional costs they will incur.  Thus, the exercise of “religious freedom” also results in corporate economic gain while reducing the available income to employees who need the uncovered medical services.

The third aspect of the problem is that, at least in the United States, we don’t allow religious laws or practices to supersede basic laws.  You can’t break speed limits under the cloak of “religious freedom.”  Nor can you pay employees below the minimum wage on the basis of their religion or the lack of it.  You cannot base differentials in pay on religious practices or preferences – and yet, in effect, that is what an exemption from health coverage requirements would allow.

My bottom line is simple.  You have the right to your expression of your religious beliefs, but only so long as what you practice doesn’t harm others or pick their pockets, especially under the guise of religious freedom.  Whether what the courts will decide, and when, comes close to this position is still an open question.

Writing Collaborations

The other day I received an email from a reader who expressed dissatisfaction with the collaborative efforts of several well-known writers and who wanted to know how I had resisted the trend of established writers entering into collaborations that produce weak or less satisfying works. While it’s an interesting inquiry, upon reflection it bears, I feel, a resemblance to a question along the lines of “How did you possibly escape beating your dog, when all the other writers do once they get established?”

That’s not to say that collaborative efforts are always weaker or that they should be avoided. I’ve said on more than one occasion that collaboration ideally should only be attempted when the work is something that neither author could produce alone.  And sometimes, frankly, the collaboration is far better than either could accomplish alone, as in the case of the musical works of Gilbert and Sullivan.  [I’m not about to offer a public comparison in F&SF].

I’ve only done one collaboration, the ill-fated if well-reviewed Green Progression, with Bruce Scott Levinson, and that was a book which would have been difficult for me to do without his expertise in various areas, and it was a relatively easy collaboration because we were also working at the same Washington, D.C., consulting firm at the time. The book is far, far better than its dismal sales would indicate, but it’s also an indication that, even if one of the authors is moderately well-known, the name recognition of an author doesn’t necessarily carry over to a collaboration in terms of sales.

Some “collaborations” also result from necessity. The final books of The Wheel of Time necessitated what was essentially a collaboration between Robert Jordan, posthumously, and Brandon Sanderson. Although Sanderson technically wrote more than 90% [if the numbers I’ve heard are correct] of the last three books, the groundwork had been laid by Jordan, and there was an outline, as well as some 40,000 words or more of Jordan’s prose, for Sanderson to work with, which, in my mind at least, makes it a collaboration rather than a ghost-written conclusion. Years ago, Piers Anthony did something similar with a book entitled Through the Ice, completing a book largely finished by a young author named Robert Kornwise, who suffered an untimely and early death.

In thinking about collaborations I’ve read and the books that I’ve kept, I surveyed my shelves and the volumes on my e-reader and realized that I’ve kept only one collaboration besides my own – at least among those I know to be collaborations, since a number of authors write collaborations under a single pen name, and there may well be others of which I’m unaware. That can hardly be mere chance, and it suggests that, for me, collaborations don’t have the feel or flavor of a single-author book.

In my own instance, part of the answer to why I don’t do collaborations any more is simple.  I don’t feel either the desire or need to, and I really enjoy working on my own ideas at my own pace, which might well be just because I’m a type A control freak so far as my writing is concerned.

Slavery

Perhaps because of all the publicity over Twelve Years a Slave, or because it’s Black History Month, I’ve been thinking about slavery and a number of points that I seldom, if ever, see raised… and probably, by the time I’ve mentioned them, no one will be pleased. But since no one else seems to be pointing them out, most likely because each one will offend someone deeply, someone really ought to… and I appear to be the only one foolish enough to do so.

The first point is that virtually every black person enslaved in Africa was originally captured and sold into slavery by other blacks… and that virtually every slave purchased or kept in slavery in the United States was purchased or owned by a white person, usually a white male. The institution of slavery would not have been possible without both groups. I’m not excusing anyone, just noting a fact that seems to be overlooked.

The second point is that slavery existed in what we today would call a “free market”; that is, there were originally no restrictions on the sale and purchase of slaves [not until the early nineteenth century did Great Britain abolish the slave trade, in 1807, and then slavery itself, in 1833]. Slaves had no rights and no legal protections. Sellers and buyers negotiated with complete freedom from outside interference. In that sense, slavery was the logical extension of totally free markets, where even human beings could be bought and sold, and even killed, for whatever the market would bear. So, all you free-market types, think about that when you preach about the need for “free” markets.

Third, given the diversity of the original slaves, who came from many different groups and tribes, those American blacks descended from slaves do not have a single “history/culture” predating the institution of slavery in the United States, except perhaps the shared misfortune of losing out in local African warfare, which resulted in their being enslaved in the first place. Their shared “history” is that of slavery, which is a failed and despicable institution. For this reason, I have to admit I frankly don’t understand the emphasis I see among many blacks from this background on finding their “culture,” because there isn’t a single one that all have in common prior to their ancestors landing in North America in a state of enslavement. Add to that the fact that most of the truly great African cultures had collapsed well before the beginning of the American slave trade, and a search for “history” and culture is more like poor whites seeking a history in Greek mythology than a particularly fruitful or worthwhile effort.

Fourth, over the past centuries and even into the present, many of those who opposed rights for blacks, almost entirely those of Caucasian backgrounds, cited the need for racial purity or opposition to “mixed races.” Come again? DNA studies show that every racial group besides “pure” African blacks [and some recent DNA testing even raises questions there about interbreeding with yet another, as yet unidentified, human species] has DNA confirming that their ancestors interbred with Neanderthals and Denisovans, both of whom failed to survive. That’s not exactly a hallmark of “purity”… or even good judgment on the part of one’s distant ancestors. Caucasians and Asians already had a mixed-blood background, even while some whites trumpeted their untainted blood. So let go of the damned racial purity argument. All of us are mongrels in some way or another now.

Fifth, in the end, at some point, we have to acknowledge what was, ALL of what was… and then get on with improving the future, no matter how one group denies what was and another dwells on it excessively, because we can’t change what was, only what will be.

A New High?

According to The Economist, the United States has the highest rate of credit card fraud of any developed nation, a rate far, far higher than in European Union nations, as well as far higher monetary losses. This isn’t necessarily just because we have more credit card thieves, which we apparently do, but also because the United States has far more credit cards and, equally important, has lagged behind the E.U. in adopting so-called “chip and PIN” credit cards, which contain a microchip with security features. The chip-and-PIN system effectively means that it is far more difficult to use a stolen card or card number.

American business has lagged in employing this system, although Target, the latest and largest victim of hacking and the theft of tens of millions of credit card numbers and user names, is now looking into developing and issuing credit cards with greater security features. The reason for the delay? The new systems will cost more to install and implement, because new card readers will be required.

Or, in other words, until the losses to business make it clear that it’s “cost-effective” for them, regardless of the costs and hassles to consumers, they really don’t want to adopt a new and more secure system. These are also the men and women who, not unanimously, but overwhelmingly, try every method they can to reduce their costs. They beg their consumers to “go paperless,” claiming that doing so will benefit consumers while their real reason is to reduce their own paperwork burden. They’re the same retail executives who employ part-timers so that they won’t have to pay health benefits, who cut middle-management and overwork the survivors, and who outsource overseas anything they can to reduce costs, disregarding what it does to both their employees and the economy as a whole.

Yet when it comes to reducing the burden of fraud on their consumers, most are notably silent, or even oppose any improvement because it will increase their short-term costs. Just as cleaner environmental production and distribution systems might do… or health insurance or living wages. Fancy that.

New Ideas?

The other day I was reading reader reviews of Rex Regis, a habit that my wife disparages, and there is, I must admit, a certain validity to that disparagement, but I occasionally find useful comments and, every so often, those which are thought-provoking.

The comment that I found thought-provoking was one reader’s comment that because a lead-lined room for limiting the power of imagers figured in the book, I had to be running out of ideas. To me that comment revealed a certain unrealistic short-sightedness. In the world of Terahnar, the use of lead-lined rooms for imagers dates back well before the beginning of Scholar, and there’s no secret to the usefulness of lead in this regard. Those who used the lead room did so exactly because that usefulness was well known, although they did employ another device that had just been developed, a fact seemingly overlooked by the reader.

Now and again, I’ve noted similar comments about other authors’ works as well, so my observations don’t represent something limited just to my work. As I have stated more than once, human beings will employ what is useful, and they will continue to use whatever they find useful until it is no longer useful or until they find a better way or tool. If more than one person uses a gun or laser or whatever, that doesn’t mean an author is out of ideas; it means he or she understands people and tools.

Somewhere, among a certain group of fantasy readers, there seems to be a belief that each and every problem must be resolved in a new and unique way, as if the only measure of an author’s creativity were a new and different solution to each problem, even if some of those problems are of the same nature as preceding difficulties. It’s one thing to use a new technique or technology if it fits into the story, can be supported by the magic/technology in use or is a logical outgrowth of that magic or technology, and doesn’t require resources beyond the ability of the individual or culture, but throwing in “new” gimmicks merely to keep readers interested, or for the sake of novelty, usually ends up undermining the credibility of the story… and the author.

Yet, at the same time, I do understand the desire on the part of readers for something “new,” for something to inspire that sense of wonder. The problem is that “new” things don’t happen that often in any civilization and need to be introduced sparingly… or the author ends up producing a magic funhouse (in fantasy) or technoporn (in SF), neither of which is something to which I aspire. So it’s likely you’ll only see “new” techniques and/or gadgets in my work when they fit the societies I’m describing… as I’ve tried to do all along.

Never Too Late?

There’s a phrase in American English: “It’s never too late to [fill in some goal or action].” I don’t know if similar phraseology exists in other languages, but its prevalence in today’s society is indicative of a mindset that Americans can do anything, even if it seems too late. After all, late as it was, didn’t the U.S. enter World War II and turn the tide, so to speak? Didn’t we come late to rocketry, but become the first to put astronauts on the moon? And I have to admit that there are a number of other examples.

BUT… all too many people fail to realize that there are just as many examples of “too little, too late,” and I have the feeling that, as a result of a society that has become ever more one of instant gratification, just-in-time supply deliveries, and up-and-coming 3D print-it-yourself technology, we’re losing sight of too many areas where it may well become “too late.”

In some areas, we accept, if grudgingly, that “too late” exists. If a child doesn’t learn a second language young, that child will never learn to speak the language without an accent. If you don’t master the violin by age 13-14, you’ll never be a concertmaster/mistress. At some point, it is too late even for a professional athlete to continue performing at a high level. In other areas, we don’t seem to get it at all. Remedial writing instruction for college students [except for foreign students who write well in their own language] is largely useless. Those skills have to be learned close to the time of puberty, yet universities pour billions of dollars into such courses.

The idea of it never being too late is even more pronounced in socio-political issues. Gun violence at American schools, now at even grade schools and middle schools, not just colleges and high schools, continues to increase to the point that I have to question whether it is already too late to ever reduce the carnage. With close to 400 million firearms in circulation, with popular opposition to any control whatsoever over who can use weapons and under what conditions, and with a growing lack of self-discipline among a growing percentage of the American public, is there really any way the violence and deaths can be reduced?

Global warming is continuing, despite recent evidence of a slight decrease in solar radiation received by the earth as a whole. Billions of years ago, Venus and earth were more similar, until runaway global warming turned Venus into a hothouse hot enough in places to melt lead on its surface. Another study just revealed that the firn ice of Greenland has reached, or is close to reaching, its capacity for absorbing meltwater, and further increases in meltwater could lead to the melting of Greenland’s entire icecap. How long before it’s too late to save much of Florida and the U.S. coastal cities?

In the end, there is a difference between situations where “it’s never too late,” situations where it soon will be too late, and situations where it’s already too late. But because of the human tendencies to procrastinate and to demand a single simple answer, maybe, just maybe, in cases where there’s any doubt at all, we ought to take the default position that, if we don’t do something now, all too soon it will be too late.

Management?

Recently, in response to one of my blogs about the excessive salaries paid to university administrators here in the U.S., and particularly in Utah, one of my readers sent me a link to a UK site with a news story about how university administrators there had received a five-figure pay increase while faculty got, as I recall, less than a one percent increase, after almost a decade of essentially no pay increases. Unfortunately, this trend isn’t confined to universities; it’s also a trend in business, and that trend is to stiff the people actually doing the work and reward the executives and managers for keeping costs down and profits up – effectively by keeping pay as low as possible and piling more and more work on those under them.

Paul Krugman, the Nobel Prize-winning economist, observed just several weeks ago that framing income inequality as the 99% versus the one percent is highly inaccurate and deceptive, in that nine-tenths of the uppermost one percent have seen only modest real gains in income, perhaps less than ten percent after inflation, accompanied by as much as a twenty-percent increase in hours worked… and that doesn’t take into account that this group already worked long hours. In short, the bottom 99% of working people in the United States have seen real wages and salaries decline, the next nine-tenths of a percent have managed modest gains in income by working much longer hours, while the income of the top tenth of one percent has skyrocketed.

To make matters worse, at least here in the United States, wealthy business types, such as the DeVos and Koch families, have embarked on what amounts to a political crusade to reduce the bargaining power of “labor” at all levels – except very upper management – while also pressuring politicians at all levels to keep taxes low, to reduce support services to the poor and working poor, as well as the unemployed… and, of course, to provide a wider range of “business” tax subsidies. The result is, of course, more of what we’re already experiencing – increasing income inequality as the real income of the top tenth of one percent goes up and the real [after taxes and inflation] income of everyone else goes down, or perhaps holds steady or rises slightly for a “fortunate” few; slow, almost non-existent job growth; and an anemic economy at best.

I’m as incensed as anyone about the comparative handful of the “professional poor,” and about the grifters, especially huge agribusiness combines and special-interest loophole users, who employ every angle to con money from the federal government, and I’m not exactly pleased by couples who have far more offspring than they can support and who, knowing that, continue to have more children. BUT… statistics show that the majority of those receiving unemployment assistance are still looking for work… and the vast majority of working Americans are getting screwed.

Meanwhile, the top tenth of one percent, such as hedge fund managers and the DeVos and Koch families, are doing just fine, and all too many of them are trying to find ways to lower their taxes and keep down labor costs, regardless of the devastating impact on most working Americans.