Archive for the ‘General’ Category

Social Contract

Way back in the middle of the seventeen hundreds, the French philosopher Jean-Jacques Rousseau came up with the idea of “the social contract.” The simplified concept is that government represents a tacit contract among the people: effectively, they agree to be governed and to behave in order to escape the “natural state” [which Rousseau tacitly admits never existed, but that’s another story].

One of the interesting facets of his concept was that the social contract tended to break down if the income inequality between the rich and the poor became too great, which in fact is exactly what happened with the French Revolution several decades later in the last years of the century, and after Rousseau’s death. In point of fact, it’s rather interesting to note that in the more than 250 years since Rousseau made that observation, there have been more than a few revolutions and a great deal of social unrest in times of great income inequality.

At present, the United States is in one of those periods. According to Rousseau, the popularity of either Bernie Sanders or Donald Trump should scarcely be surprising, given that income inequality in the United States has now surpassed that of the previous period of greatest inequality, the 1890s, which was, of course, the time during which William Jennings Bryan ran for President as a fundamentalist/pro-silver/anti-big-money Democrat. After three years of economic depression, in 1896, Bryan carried 22 of the 45 states then in the Union and took almost 47% of the vote, despite being outspent five to one by McKinley.

Today, what is interesting is that the “establishments” of both major U.S. political parties are being challenged by those within the respective parties who feel economically and/or politically threatened or disenfranchised by the current political system. And, no matter how the next election turns out, the problem of income inequality isn’t about to disappear.

The greatest danger, in fact, is that people will think that the election resolves the problem, because then nothing effective will be done, and most likely both income inequality and economic conditions will worsen.

Why?

It’s fairly simple. Right now, corporations and other institutions are sitting on close to a trillion dollars in uninvested capital. Those dollars are not invested because those holding them do not see a market for the goods or services that could be produced with them. The reason for this is that too many Americans are too poor to purchase anything but the basics. The idea that low taxes on high earners make more money available to create jobs is, like a lot of simplistic “solutions,” half-right. It does make money available, but no one is going to use that “excess capital” to create that many new jobs in the U.S. if a huge percentage of the population can’t afford the goods and services created by those jobs.

This is where a massive government-funded infrastructure development program would help, provided it’s designed right, and not merely a subsidy to the construction and technology industries. We have hundreds of thousands of unsafe and/or rapidly decaying bridges and tunnels, unsafe municipal drinking water systems, an air-traffic control system that verges on the obsolete and inefficient, a power grid that could be destroyed by a solar flare, thousands of miles of highways that are crumbling, national parks that have billions of dollars of delayed and deferred maintenance… and we do nothing about any of this, while nearly a trillion dollars of uninvested capital sit largely idle because unemployed or underemployed workers haven’t the money to buy anything except the extreme basics – if that.

Yet, as I’ve argued, and as have others, in the end, those ultra-high earners, the one tenth of one percent, could likely make even more money if they were taxed a bit more and those taxes put to work in improving infrastructure.

Will it happen?

I have my doubts. I suspect it won’t until the social unrest and economic stagnation become even worse, which they will, unless matters change in the mindset of the American political system. The problem is that, if matters get too much worse, we may face a revolution, rather than an evolution. I could be wrong, I’ll be the first to admit, but right now, the odds are in favor of pessimism.

Pain

The past six months have been among the worst for us that I can recall in years in terms of the number of friends who have suffered, some of whom have died. All this suffering that I’ve witnessed has brought home to me a tremendous shortcoming in our modern medical system. It’s simple enough. In prolonging life, especially in treating some forms of cancer, and in saving wounded soldiers and accident victims who surely would have died in earlier times – in fact, as recently as a decade or two ago – we have created a massive problem and source of suffering: a huge increase in people enduring agonizing pain.

So many forms of medical treatment can now keep people alive, but at the cost of continual pain. According to the National Institutes of Health, 17% of all Americans suffer severe pain intermittently, and 65% of those – 11% of the U.S. population – suffer daily, chronic, and severe pain. Yet while we have relatively effective forms of pain control for mild pain, the only substances currently effective against severe pain are opioid-based, and the problem with these is that with continued use they become both addictive and less and less effective. So those in pain take more and more, and often mix those painkillers with alcohol just so they can dull the pain and sleep – one reason there are so many deaths resulting from the combination of alcohol and painkillers.
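
Those two NIH percentages are internally consistent: 65% of the 17% who suffer severe intermittent pain works out to roughly 11% of the population. As a quick back-of-the-envelope check – where the figure of roughly 320 million Americans is my own assumption for illustration, not part of the NIH numbers cited above:

```python
# Back-of-the-envelope check of the pain-prevalence figures cited above.
# Assumption: U.S. population of roughly 320 million (circa 2016).
us_population = 320_000_000

severe_intermittent = 0.17     # 17% suffer severe pain intermittently
daily_chronic_of_those = 0.65  # 65% of that group suffer daily, chronic, severe pain

daily_chronic_share = severe_intermittent * daily_chronic_of_those
daily_chronic_count = us_population * daily_chronic_share

print(f"Share of population: {daily_chronic_share * 100:.1f}%")
print(f"People affected: about {daily_chronic_count / 1_000_000:.0f} million")
```

The product comes to just over 11 percent, which matches the figure in the text, and on the assumed population base that is comfortably more than 25 million people.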

Yet this problem is scarcely recognized by most people. Nor is there any real recognition of why this pain problem has occurred. I certainly didn’t grasp its magnitude until recently, when more and more people I know ended up in excruciating pain. Instead, there’s an incredible push to stop the “overuse” and “abuse” of opioid painkillers. In my home state of Utah, the LDS Church effectively blocked even a modest piece of legislation that would have allowed the medical use of cannabis products and extracts [all specified as non-hallucinogenic]. The upshot of all these efforts appears to be that even terminally ill people are often denied painkillers adequate to mitigate their suffering – and if they’re terminally ill, why worry about whether they’ll become addicted?

I’ve seen reports on promising new developments in non-addictive and non-hallucinogenic painkillers, but it will be years before any of them are widely available. In the meantime, what are we going to do for the more than 25 million Americans dealing with severe pain on an ongoing basis? [And, no, I’m thankfully not one of them.]

Just tell them to hold on because we don’t want them addicted to opioids or marijuana?

Art…and art

Just as people have different tastes in food, they have different tastes in the “art” they enjoy and appreciate, and, for the most part, people tend to rate more highly the art and food that they enjoy. I will submit that, while people should be allowed to enjoy what they enjoy in food and art, there are dishes that are markedly superior to what most people would claim is “the best,” and there are books, paintings, performances, and musical compositions that are superior to what is popular.

This past weekend I saw two performances of the opera Little Women [and, yes, there is such an opera] as performed by the local university’s opera theatre – which, in the interests of full disclosure, I must admit was produced and directed by my wife the professor. The opera was commissioned roughly twenty years ago by the Houston Opera; when performed there in 2000, it was recognized as a masterpiece by The New York Times and other critics, and the company then mounted a television production for the 2001 PBS Great Performances series. A few hundred people saw the university production, and the audience was very receptive and enthusiastic. The professional musicians who saw it rated it highly.

Three weeks before, some of the same singers participated in a choral extravaganza in the same theatre – and the music was all 1970s rock and roll. More than 1,000 people filled the theatre, and the audience went wild. The professional musicians thought it was “fun,” but quite a number questioned why a university’s classical music program was putting on a rock and roll concert. The chorus director replied that it was to build support for the music program and to increase attendance – despite the fact that the music program is designed for two groups of students: those who will teach the basics of music in secondary school and those who will play or sing professionally, either classically or semi-classically.

The vast majority of the people who attended the rock and roll concert did not attend the opera. I have no problems with that. Nor do I have problems with rock and roll concerts.

What I have a problem with is the Music Department’s tacit admission, in putting on both concerts, that rock and roll is on the same level of expertise and excellence as opera, symphony, oratorio, art song, or chamber music. While I will admit that there are a handful of popular rock and roll and country music performers with excellent classical training, the vast majority couldn’t do vocally or instrumentally what most graduating seniors in the good music programs across the country do on a daily basis.

Liking what you like is fine, but popularity is not excellence, and that’s something that is getting lost more and more in a culture that rewards the bottom line far more than excellence.

Too Angry to Think?

As some of my readers might recall, way back last August I made the observation that both Bernie Sanders and Donald Trump would do far better than people realized, although at that time I did express doubt that Trump would be able to capture the Republican nomination. While I had a better feel than most for the depth of anger, what I didn’t realize was how many Republicans would become too angry to think and how much they wanted to lash out at all politicians, regardless of what it might do to the country. The attitude of these voters is that they don’t care, that the country and the rich have screwed them, and that they’ll be damned if they’re going to vote for any “professional” politician.

I spent close to twenty years in politics, largely in Washington, D.C., and I loathe the “Mr. Smith Goes to Washington” myth – the idea that all politicians are either up to something illegal or simply incompetent. In fact, most politicians are very good at voting what their constituents want. What almost no one wants to think about is that such lock-step voting is exactly what’s caused the current gridlock in Washington. Politicians who want to keep their jobs are well aware that voting against their constituency is likely to cost them those jobs. So they don’t. And no one can afford to compromise. And political views are polarized strongly enough on either side that not much can get done without compromise. The less that gets done, the angrier people become and the more likely they are to punish politicians who show the slightest hint of moderation. Yet with all that anger, a huge number of people react by becoming more extreme, totally failing to recognize that they – not the politicians – are the cause of the problem. In general, the politicians fall into one of two categories: those who are ambitious and unscrupulous in exploiting that extremism, and those who are true-believing extremists who glory in it.

Along with this failure of recognition, too many of these angry voters also fail to realize that Donald Trump is in fact a consummate politician, one who has read the public mood far better than any of the “professional” Republican politicians and who is exploiting the widespread anger by appealing blatantly to those angry people and promising to do things that are either physically or financially impossible, unconstitutional or illegal, or just plain stupid. And the anger of those supporting Trump is so strong that his supporters either just don’t care, are truly ignorant of the impossibility of any of Trump’s promises being enacted, or believe that what Trump says is merely rhetoric to get him elected.

But then, being angry and venting makes so many of us feel better. The difficulty is that all that venting doesn’t solve the underlying problems, and, in this case, will only make them worse.

Legislating Absolutism?

Absolutist moral pronouncements may be good guidelines for personal behavior, but they make very bad laws. And there’s a very good reason why this is so: absolute moral pronouncements are simple, and neither the universe nor society is. Equally important is the fact that human society has never been able to agree on exactly which moral codes should be enforced and how. Secular laws are by necessity a compromise between conflicting moral codes and beliefs based upon points of agreement [at least, ideally].

One of the basic principles behind the structure set up by the Founding Fathers of the United States was the idea that laws were to be legislated by a civil body, not by religious authority, and that those laws should recognize fundamental civil rights as superior to religious doctrine and rights. The reason for this was basic. They had seen and often lived through a time when people were tortured, killed, or otherwise persecuted for what they believed, and those acts were enabled by laws enacted pursuant to religious authority.

Yet today, we have tens of millions of Americans who are demanding laws to enforce their religious beliefs on others under the guise of religious freedom, effectively repudiating the idea of civil rights through their determination to enact and enforce laws restricting the rights of others based on religious beliefs.

One of the great ironies I see in today’s political debates and often hate-filled rhetoric is that the same people who are so deathly afraid that President Obama or others might impose Islamic Sharia law upon the U.S. are all too often the same ultra-conservative evangelicals who want to impose a “Christian” equivalent of Sharia on the United States – effectively limiting rights for women, granting legal recognition to religious practices, and elevating religious beliefs over civil rights. Yet those individuals cannot see that what they want is essentially the same structure as that of ultra-traditional Islamists – a structure supporting their own ultra-conservative beliefs and enforcing those beliefs on those who do not share them.

I’m very much in favor of civil rights, the right of an individual to do what he or she pleases so long as those acts do not physically harm others – and that includes forms of indirect harm. You should not have the freedom to pollute the air others breathe or the waters others must drink. I have great problems with laws that effectively marginalize others under the guise of religious freedom. I also have problems with those who wish to eliminate laws that protect health and the environment on the grounds that such laws restrict the freedom to do business. Obviously, no law can be perfect, but legislating absolutes is the route to tyranny, not morality.

Charitable?

Almost invariably, the majority of the mail we receive is from charitable organizations, the preponderance of it from so-called charities to which we do not contribute and most likely never will. There are some, I admit, that once received a contribution in a moment of weakness on our part, but never will again. The unanswered telephone solicitations are even more disturbing, because we have never contributed to anyone or anything based on a telephone solicitation.

It appears, in fact, that everyone and everything has its own charitable organization. While this perception is erroneous, it still feels that way to me, perhaps because there are over a million and a half charitable organizations in the United States alone. It doesn’t help that the scattered revelations of the past few years have shown both how well some of those running charitable organizations are compensated and how, in all too many cases, far too much of the money donated to legally permitted 501(c)(3) organizations goes to anything but the purposes for which they were ostensibly founded. From what I can tell, there are foundations for almost every form of ill-treated or endangered species – particularly mammals, land-based and aquatic, and large avians – not to mention scores of foundations dealing with social ills, justice, discrimination, civil rights, conservation, and environmental improvement. The list is truly endless.

The problem, of course, is that a great many of them address very real problems. Far fewer do so efficiently and cost-effectively. Some address problems that don’t seem to be problems to me, such as the “need” to “return” federal lands to either the people or the states [not that either ever held those lands], and some spotlight problems that cannot be solved by greater application of resources.

If people wish to give money to these causes, so be it, but should all these donations be considered tax-deductible? For that matter, should donations to religious organizations be tax-deductible? Such deductions add to the federal deficit and, in essence, require higher taxes on everyone.

Now, I know that many conservatives feel that government attempts to do too much in social programs and believe that private charity is better suited to dealing with many of these problems, but isn’t providing tax deductions for charitable and religious organizations effectively the same as a government subsidy? And all too often, the cost of subsidies is far greater than anyone knows because it’s essentially hidden. The more than one and a half million U.S. charities spend 1.5 trillion dollars every year, an amount equivalent to 40% of the federal budget. Given the billions poured into charities, if charity were that effective, shouldn’t we be seeing better results?

The Magic World of the Everyday

Arthur C. Clarke once observed that “Any sufficiently advanced technology is indistinguishable from magic.” While many – and I’m one of them – would generally agree with his words, I’d take it a step further. We live in a magic world, indeed, a magic universe. Einstein theorized that this was true in his equation E = mc², which essentially equated matter and energy. In practical terms, the development of technology since then has proved that the only difference between “matter” and energy is form, in that all matter is composed of structured energy.

What we perceive and experience as matter is not “solid” in the sense we emotionally believe and physically experience. On the sub-atomic level, matter is largely empty space – what could be called, in an over-simplistic sense, the interplay of energy fields. Part of the effect of those fields is to create what we experience as matter: essentially barriers, or at least limitations, to the inter-penetration of other “matter-energy fields” – a universe, if you will, of energy whose flows and fields we interpret variously as energy and/or matter within a space-time framework.

Theoretically, anything can be transformed into anything else, given enough energy and a sufficiently advanced technology to restructure the energy flows and structure. Whether we as a species will ever master such restructuring doesn’t take away from the fact that it is at least theoretically possible.

That understanding of the universe colors my view of “virtual reality,” because the “virtual” – or, more accurately, the “cyber-enhanced partial representation of physically modulated perception” – seems to me to be a denial of the very wonder of the physical universe, or at least a wish-fulfillment escape from it. Now, I’ve seen enough of death, misery, and oppression to understand all too well why many are embracing virtual reality. It’s a very real attraction, one whose dangers and pitfalls James Gunn outlined more than half a century ago in The Joy Makers [and even earlier in “The Hedonist”], but, as Gunn pointed out, it’s so much easier to escape into one’s personal virtual reality than to remake a failing and imperfect society.

And that would be a tragedy in a universe already so magical… but the choice is ours.

The Fallacy of Corporate Leadership

The other day I had a discussion with a friend about what I perceive as the excessive level of pay and bonuses received by the CEOs of large corporations and financial institutions. I even mentioned the study that showed no relationship whatsoever between the profitability and success of the corporation and the salary levels of the CEO, as well as one of its conclusions that comparatively lower-paid CEOs often headed up better-performing businesses.

He was anything but convinced. His argument was essentially that, if companies were willing to pay that much, the executives were worth that much, and that there was nothing wrong with the fact that, in some companies, the CEO makes thousands of times the salary of the average employee. I’m not against CEOs getting much higher pay than everyone else, but it seems to me that what’s overlooked is that large businesses aren’t successful just because of one executive at the top. The fact that they survive and even prosper while turning over CEOs on average every five years suggests to me that CEOs are high-level interchangeable parts, and that means that they’re merely more highly skilled workers, meriting great multiples of the average worker’s compensation, but not thousands of times the average salary – and in a few cases up to ten thousand times the salary of the average employee. In fact, until about thirty years ago, CEOs were paid only a hundred times or so the salary of the average employee.

Seemingly lost in the current self-reinforcing beliefs of higher executives are some of the old and truthful adages, such as a chain being only as strong as its weakest link, or the fact that no person is indispensable. Instead, the grandees of industry pose and parade, taking full and often sole credit for the work of thousands of individuals, while giving little but lip-service to those below them and while extolling the benefit of cost-cutting and cost-effectiveness. Yet isn’t it strange how there’s no measurable cost-cutting and cost-effectiveness in the executive suites and boardrooms?

Part of that is because, in a multi-billion dollar corporation, wasteful overpayment to a single individual, i.e., a CEO, is almost noise and doesn’t directly affect the bottom line that much. Indirectly, I’d submit, the effect is much greater, particularly among middle and upper management, because those grandiose salaries inspire brutal and often highly unethical internal power struggles in pursuit of what is often literally a golden fleece. Both the power struggles and the outsized upper level compensation also tend to demoralize lower management and create higher stress levels there. Study after study has shown that stress levels actually are lower in upper management and higher in those who work for them and that the highest stress levels are created at lower levels of management when the expectations of upper management conflict with the lack of adequate resources for achieving those expectations and when the compensation differential between those tasked with a job and those supervising them is highest.

These findings also tend to get buried and never find their way to the executive boardrooms, most likely because corporate emperors are as averse as other emperors to learning that their imperial garments are illusory and their beliefs self-serving. At least as I’ve observed, the great majority of CEOs are in fact at best marginally more talented than their subordinates, yet the great fallacy of corporate leadership remains – the CEO-perpetuated idea that extra-special individuals preside as CEOs, when in fact a great body of evidence, both statistical and anecdotal, strongly suggests otherwise.

Critics

The other day I was reading a book review of A.O. Scott’s Better Living Through Criticism, which was interesting in itself, since a magazine book critic was critiquing a movie critic’s book, when I realized something basic about almost all reviews, whether by professionals, semi-pros, or even readers. Such reviews report what the general story line is and what the obvious strengths and weaknesses of a book are, at least from the reviewer’s perspective, and while that is valuable to many readers, that is as far as most reviews go.

What most reviews don’t mention is what is not obvious in a book or movie. And from what I can tell from the review of Better Living Through Criticism, and from what little I’ve quickly read of the book itself, Scott apparently thinks that critics should go beyond the obvious. I honestly don’t know if he does in his own reviews, because I seldom read movie reviews, but whether he does or not, it’s a valid point, and one I’d recommend to all reviewers.

I’ve had lots of books reviewed over the years by professionals or semi-pros, and most of those reviews, at least the ones I’ve read, fall in the category of finding what I write either acceptable or moderately good, which is certainly better than many possible alternatives. A few reviews have castigated a given book, and a few more have offered fulsome praise. Several times, I’ve had both a castigating review and one of high praise about the same book.

Seldom, however, do reviewers actually mention what is truly different, or even unique, about a book. Now, from an author’s point of view, uniqueness is very much a double-edged blade, as the reviews (and comparative sales) of my more unique books – such as Archform:Beauty, Empress of Eternity, Haze, The One-Eyed Man, Solar Express, or the “Ghost” trilogy – seem to indicate. Yet the majority of reviews of those books never mention the unique aspects of the books, let alone note why they’re different. Instead, most concentrate on the strengths and especially the perceived weaknesses of the conventional aspects of the books. That’s understandable, and in itself, should be expected, but the failure to dwell upon what else lies within the pages and the story shortchanges the reader of the review.

Now… having said that, I may be far better off when reviewers don’t mention the different or unique aspects of any of those books, because a great many readers are looking for comfortable escapism and predictable entertainment.

Perhaps, just perhaps, I’m better off that readers don’t know in advance, because some readers will find that difference entrancing (as various emails and letters have told me) when they might not have even picked the book up had they read a review that highlighted the differences.

Which reinforces the thought that more insightful reviews are indeed a double-edged blade.

The Greatest Good

All too many years ago, in my very first day in my very first college political science class, I got into trouble with the professor, after he had stated that the goal of political science was to determine policies which identified “the greatest good for the greatest number.” I objected to his stating that as the goal of all political scientists, claiming in return that it was the goal of liberal political scientists, not all political scientists. Needless to say, I got off to a rocky start, and my standing with that professor never recovered.

While that episode remains relatively fresh in my memory, in time I realized that while I was right to question, I hadn’t picked the right basis for my objection. The principal problem with his assertion was even simpler. What is “good” in the political universe, and how do we determine it?

Another consideration is how does one choose among the competing “goods” and prioritize what comes first? A third problem is that of perspective – good for whom?

These are far from esoteric or ivory-tower questions. They get to the basis of the polarization and conflict within our political system and to our continuing problems in foreign policy. And that’s even before one gets to the question of how one might implement such “goods.”

One person might suggest that the greatest good is a healthy and well-educated population, all of whom, with the exception of law-breakers, have the rights outlined in the Constitution. Someone else might suggest that the greatest good is a society where hard work, intelligence, and perseverance are rewarded, rather than in having a society where those who are unable or unwilling to work are still guaranteed health care and material sustenance. A third person might declare that a “society under God” is the greatest good, which contains the assumption of belief in and adherence to the strictures of a particular deity. Someone else might find the greatest good to lie in the least government possible, or no government at all.

All of which requires that someone choose exactly which vision of the greatest good is pursued. In deciding “the greatest good” in the U.S. political system, the simplistic answer is that those who vote determine that. Except they don’t. They vote for officials who will make those determinations, either through executive or legislative actions.

And we now have a political system where the majority of elected officials slavishly pursue the extremes of the “greatest good” advocated by the majority of their constituents, regardless of the language crafted by the Founding Fathers and regardless of the infeasibility of forcing those extremes on those who do not share those beliefs. That is exactly why the Founders made political change so difficult: believing that reasonable individuals would work out compromises, they designed a system in which the two most likely outcomes would be either compromise or gridlock.

Unhappily, fewer and fewer Americans appear to meet the Founders’ definition of reasonable, and they punish politicians who attempt to work out compromises, which results in fewer and fewer politicians being reasonable, in turn making political gridlock on contentious points all but inevitable. The result is that already unreasonable individuals become more so, blaming the problems entirely on those who do not share their views.

One Person’s Waste [Part II]

There’s always been a hue and cry from regulated industries that the regulations under which they “labor” are burdensome and “wasteful,” but often that “waste” exists only from the point of view of the industry involved. While an electric utility may claim that a regulation restricting its emissions is expensive and “wasteful,” that regulation is designed to improve health and the environment, which reduces healthcare and environmental remediation costs for large numbers of people – far more people than the utility’s employees and shareholders.

Unhappily, however, there are also regulations that create costs and burdens without commensurate societal benefits, such as the 2015 Department of Energy regulation mandating that dishwashers use no more than 3.1 gallons of water per load. The problem? So far, manufacturers can’t figure out how to get the dishes clean with so little water, but they still have to produce machines that use no more than the sacred 3.1 gallons.

Then there’s another kind of waste – the government rules or regulations that proclaim benefits, but effectively add problems or costs for consumers and/or small businesses, while benefiting only a comparative handful of companies or individuals.

One of the most expensive, and one with the most widespread effects, is the current regulatory regime at the FDA, which allows pharmaceutical firms to make minor changes to drugs whose patent protection is expiring and thus gain more years in which to gouge the public, and which also imposes great barriers on companies that could produce and sell generic drugs less expensively. For example, when chlorofluorocarbons were required to be removed as propellants for asthma inhalers and a new propellant was added, the manufacturers gained more years of selling inhalers at higher prices, to the point that a rescue inhaler – one that can literally save an asthma sufferer from dying – doubled in price. Then, when that protection lapsed, the price of the most widely used asthma medication, albuterol sulfate, jumped just this year from $11 to $434 per inhaler, a roughly 4,000 percent increase, because getting FDA approval, even for a generic, is so burdensome that most firms won’t try, at least in the U.S., which has no maximum price regulations. That very same inhaler, as well as a range of generics, costs less than $30 in Great Britain or Canada.
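
The arithmetic behind that figure is straightforward: going from $11 to $434 is nearly a forty-fold jump, or an increase of a bit under 4,000 percent. A minimal check:

```python
# Percent-increase check for the albuterol inhaler price jump cited above.
old_price = 11.0   # dollars per inhaler before the jump
new_price = 434.0  # dollars per inhaler after the jump

multiple = new_price / old_price                           # ~39.5x the old price
pct_increase = (new_price - old_price) / old_price * 100   # ~3,845 percent

print(f"New price is {multiple:.1f} times the old price")
print(f"Percent increase: {pct_increase:.0f}%")
```

The exact figure is about 3,845 percent, which the text reasonably rounds to “4,000 percent.”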

Buried in the 3,000-plus pages of the Affordable Care Act is a provision requiring, as of December 2016, that all brewers include a detailed calorie count for every type of beer they produce. Brewers that fail to comply will not be able to sell their beer in any restaurant chain with more than 20 locations – a major market for beer, which means smaller craft brewers are hamstrung if they do not comply. The Cato Institute estimates the calorie labeling requirements will cost a business as much as $77,000 to implement. For larger beer companies, this is a drop in the bucket, but for small, local craft brewers it represents a substantial cost – and a significant disadvantage compared to larger companies that can better absorb the regulatory burden… as if serious beer connoisseurs don’t know the specifications anyway.

Among the regulations that cost U.S. consumers a great deal for the benefit of a few are those dealing with sugar, whose U.S. price is effectively set by a combination of federal requirements that limit domestic production of cane and beet sugar, restrict foreign imports, place a floor under growers’ prices, and require the government to buy crop surpluses.

Sugar beet and sugarcane farms account for about one-fifth of one percent of U.S. farms. Out of 2.2 million farms in the United States, there are only 3,913 sugar beet farms and 666 sugarcane farms, but these growers account for 33 percent of crop industries’ total campaign donations and 40 percent of crop industries’ total lobbying expenditures. Since 2000, Americans have paid an average of 79 percent more for raw sugar and 87 percent more for refined sugar compared to the average world price – a total of more than a billion extra dollars annually in order to subsidize fewer than five thousand U.S. sugar growers.

Then there are the successful waste reduction programs that Congress junks, such as the Recovery Audit Contractor program, first implemented in 2005, which recovered more than $3.5 billion in 2013 alone by identifying overpayments to healthcare providers. Interestingly enough, the program was suspended by Congress in 2013 at the insistence of hospitals, with the result that, since October 2013, about $1 billion per quarter in erroneous overpayments has gone unrecovered.

And while no one likes the IRS, Congress has made it nearly impossible for the agency to collect unpaid taxes and to audit high-income individuals by continually cutting the agency’s budget. As a result, audits are at an all-time low and tax avoidance continues to rise, while at the same time politicians attack the IRS for shortcomings such as inaccurate death records that show millions of Americans as dead who are still alive, and 6.5 million people listed as age 112 and older.

So, when people talk about waste, it’s certainly not all about federal government excesses. Sometimes it is, but many times it’s something else.

One Person’s Waste [Part I]

During my years in government, then as a consultant dealing with government regulations and environmental and energy issues, and even afterward, I’ve heard thousands of people say that we could just solve the budget problem by getting rid of the “waste” in government.

And when I hear that tired old phrase, I want to strangle whoever has last uttered it, because “waste” – at least in the way it’s commonly used – is a tiny fraction of federal or state spending. Now… before you start screaming, let me at least try to explain.

First, I’m defining waste as spending that serves no purpose and accomplishes nothing. Second, I do believe that government spends a great deal of money on programs and projects that have little to do with the basic objectives of government as outlined by the Founding Fathers… and I suspect most intelligent individuals believe something along the same lines.

The problem is that one person’s waste is all too often another person’s gain or livelihood. For example:

The Georgia Christmas Tree Association got $50,000 from the Department of Agriculture for ads designed to spur the buying of natural Christmas trees. To the Christmas tree growers of Georgia, this was not waste, but advertising designed to help them sell trees and make money.

The Department of Agriculture spent $93,000 to “test the French fry potential of certain potatoes.” Do you think the potato growers objected to this?

$15,000 from the Environmental Protection Agency to create a device that monitors how long hotel guests spend in the shower. Is this so wasteful, given the water crises in the west and southwest?

And then there’s Donald Trump’s use of a $40 million tax credit to convert the Old Post Office in Washington, D.C., into a luxury hotel. I’m certain that the city would support another tax-paying, revenue-generating hotel.

The Department of Agriculture’s Market Access Program provided $400,000 to the liquor lobby, which used part of those funds to transport foreign journalists to different breweries and distilleries in the southeastern United States. The liquor industry doubtless feels that this will boost liquor exports.

At the same time, there is definite out-and-out waste. According to the Government Accountability Office, in 2014 the federal government made $125 billion in duplicative and improper payments. GAO made 440 recommendations to Congress for fixing these problems. To date, it appears that Congress has addressed none of them.

One waste-watching outfit came up with $30 billion in supposedly wasteful projects for FY 2013, including studies of the threatened gnatcatcher bird species. The only problem with the gnatcatcher “waste” was that such a study is mandated by federal law when an endangered or threatened species may be adversely affected by building or expanding a federal facility.

More to the point, however, these self-proclaimed waste-finders only came up with $30 billion worth of waste out of federal outlays totaling $3.5 trillion – less than one percent of federal spending. Even if Congress addressed the GAO’s much more sweeping findings, doing so would reduce federal outlays by less than four percent.

Now… I’m not condoning waste in any amount, but when the federal deficit has been ranging from $440 billion to $670 billion in recent years, it doesn’t take much brain power to figure out that getting rid of even all the obvious waste isn’t going to do much to constrain federal spending – assuming Congress would agree, which, as an institution, it doesn’t, despite the scores of politicians who claim they’re against waste.

And all those who support a strong national defense should be appalled at some aspects of defense spending. Right now, DOD has stated that as many as 20% of the 523 U.S. military installations are unneeded. This doesn’t even count the more than 700 U.S. bases and facilities outside the United States, yet the present Congress has enacted specific language in the appropriations bill for the current fiscal year that absolutely forbids base closures.

What about my “favorite” airplane, the oh-so-lovely-and-over-budget F-35? A recent report cited DOD officials stating that “essentially every aircraft bought to date requires modifications prior to use in combat.” A plane that isn’t yet ready for combat for which the government has already committed $400 billion? An aircraft that was outmaneuvered by a much older F-16?

DOD also wants to build a new long-range strike bomber with full stealth capabilities, 100 of them at a projected cost of $565 million each.

As a former Navy pilot, I don’t object to better planes; I do have problems with very expensive aircraft that don’t seem to be better than their predecessors, and especially with attack aircraft that can’t defend themselves. I also have problems with politicians who decry waste but won’t allow DOD to reduce it because that “waste” is in their districts. Those are far more expensive examples of waste than $50,000 studies on laughter or Christmas tree promotions. It reminds me of shell-game misdirection – look at these ridiculous examples of waste, and, for heaven’s sake, don’t look at that man over there behind the curtain… or at the pork in my district. And yet politicians, especially Republican representatives and senators, continue to attack “waste” while doing absolutely nothing meaningful about it… and they get re-elected.

The Religious Selfie

One of the basic underpinnings of religion, almost any religion, is the worship of something or some deity bigger than oneself, and the overt acknowledgment that the individual worshipper is less than the deity worshipped. Some religions even incorporate that acknowledgment as part of liturgy and/or ritual. Such acknowledgments can also be part of “secular religions,” such as Nazism, Fascism, and Communism.

Today, however, there’s a totally different secular religion on the rise, with many of the old trappings in a new form, which might be called the “New Narcissism,” the elevation and exaltation of the individual, or the “self,” to the point where all other beliefs and deities are secondary.

Exaggeration? Not necessarily. What one believes in is reflected in the altars before which one prostrates oneself. Throughout history, the altars of the faithful have either held images of a deity, perhaps accompanied by those of lesser deities, or no images whatsoever. While images of private individuals have also existed throughout history, those images or sculptures were created for posterity, or for the afterlife, so that others would have something to remember them by… or to allow them to remember themselves as they were. At one point in time, only the wealthy or the powerful could afford such images. Even until very recently, obtaining an image of one’s self required either the cooperation of others or special tools not particularly convenient to use. This tended to restrict the proliferation of self-images.

The combination of the personal communicator/camera/computer and the internet has changed all that. Using Facebook, Instagram, Twitter, and the internet, each individual now has the ability to create themselves as a virtual deity – and tens, if not hundreds, of millions of people are doing just that, with post after post, selfie after selfie, proclaiming their presence, image, and power to the universe [with all three possibly altered for best effect].

It’s the triumph of “pure” self. One no longer has to accomplish something for this presence and recognition. One can just proclaim it, just the way the prophets of the past proclaimed their deity. And given what positions and in how many ways people have prostrated themselves before their portable communications devices in order to obtain yet another selfie, another image of self, it does seem to resemble old-fashioned religious prostration.

Of course, one major problem with a culture obsessed with self and selfies is that such narcissism effectively means self is bigger than anything, including a deity or a country, and I have to wonder if and when organized religions will see this threat to their deity and belief system.

Another problem is that selfies have to be current, so everyone involved in the selfie culture is continually updating and taking more, almost as if yesterday’s selfie has vanished [which it likely has] and mere memory of the past and past actions means nothing. All that counts is the latest moment and the latest selfie. That, in turn, can easily foster an attitude of impermanence, and that attitude makes it hard for a society to build for the future when so many people’s attention is focused on the present, with little understanding of the past and less interest in building the future… and more in scrambling for the next selfie.

All hail Narcissus, near-forgotten prophet of our multi-mirrored, selfie-dominated present.

Cultural Appropriation

Over the past several years, there’s been a great deal of talk about the need for “diversity.” So far as I can tell, this means stories set in cultures other than those of white, Western-European males and told by protagonists other than white males. I certainly have no problem with this.

I do, however, have some misgivings about the idea that such stories must always be written by authors from those cultures, and about the equally disturbing idea that when someone who is not a member or a descendant of those cultures writes about them, even projected into the future or into a fantasy setting, that is “cultural appropriation,” and a literary sin of the first order. The rationale behind this judgment appears to be that no one who is not a member of a different or a minority culture can do justice to representing that culture in a fictional setting.

Besides that fallacious point, what exactly is the point of fiction? Is it just to be culturally accurate? To entertain? To make the reader think? And for that matter, how does one determine “cultural accuracy,” especially when there are significant social and even geographic differences within most cultures?

Taken to extremes, one could classify Rob Sawyer’s hominid series, about an alternate world populated by Neandertals, as “cultural appropriation,” since most of us only have a tiny fraction of Neandertal genes. Roger Zelazny’s Lord of Light could easily be classed as cultural appropriation of Hindu beliefs and myths. For that matter, Tolkien certainly used the Elder Edda of Iceland as a significant basis of Lord of the Rings. And I wrote The Ghost of the Revelator even though I wasn’t born in Utah and I’m not LDS [although I have lived here for more than twenty years].

Obviously, writers should take seriously the advice to write what they know, and know what they write, but “non-members” of a minority or another culture may well know and understand that culture as well as or even better than members of that culture. Should they be precluded from writing fiction based on those cultures because editors fear the charge of “cultural appropriation”?

This concern, unfortunately, isn’t just academic. I’ve heard editors talk time and time again about how they want more diversity, but… In one case, the significant other of a Chinese-American born and raised in Hawaii wrote a YA fantasy novel based on Hawaiian myth and offered it to a number of editors. When several agents and editors found out that the writer was not genetically Hawaiian, they decided against considering the book. Several well-known authors have also told me that they wouldn’t have considered it either, because dealing with Hawaiian beliefs would be too controversial.

Shouldn’t it just be about the book…and not the genetics/cultural background of who wrote it?

Teachers

In yesterday’s local paper, there was a front-page article headlining the coming teacher shortage in Utah, to which I wanted to reply, “How could there not be?”

The beginning salary for a Utah teacher in most systems is not far above the poverty level for a family of four, and the average Utah teacher’s salary is the lowest in the United States. Utah spends the least money per pupil in primary and secondary schools of any state in the United States. Nationwide, anywhere from twenty to fifty percent of newly certified teachers drop out of teaching in five years or less [depending on whose figures you trust], and that rate is even higher in Utah. In 2015, half of all newly hired teachers in Utah quit after just one year. Yet studies also show that the longer teachers teach, the more effective they become. Add to that the fact that Utah has on average the largest class sizes in the United States. The academic curriculum leading to a teaching degree has also become more demanding [at least at the local university], and it often takes even the best students more than the standard four years to complete a course of study that leads to teacher certification, especially if they have to work to help pay for their studies.

Despite the often dismal salaries, study after study shows that comparatively poor pay is well down the list of reasons why teachers walk away from teaching. Almost all prospective teachers know that teaching isn’t a high-paid profession. What they don’t know is just how hostile the teaching environment is to a healthy and balanced life.

Here in Utah, for example, there are state legislators who complain about pampered and lazy teachers. They’re obviously unaware of the unpaid after-school, weekend, and evening workload required to support an eight-hour teaching day. Or of the number of parents who complain about their darling children’s grades – such as the one who wanted to know how his son could possibly flunk an art class [the answer being that said son failed to attend most of the classes and never did a single art activity]. Or about the increasing reliance on testing to determine teaching effectiveness [when the testing itself reduces instructional time, when the test results determine teacher retention and ratings, and when the tests tend to measure factoids and fill-in-the-blank skills rather than thinking or the ability to write even a coherent paragraph].

It also doesn’t help when the local papers are filled with pages and pages about the sports activities of the local high schools, with seldom a word about academic achievements or other successes, such as plays, concerts, success in engineering competitions, and the like.

Nor is it exactly encouraging when school administrators offer little understanding or support to their teaching faculty. That’s more commonplace than one might realize, and national surveys show it’s a significant factor contributing to teacher drop-out and burnout. Certainly, a number of former students of my wife, the university professor, have mentioned this as a difficulty in their middle school or high school teaching positions.

And finally, what’s also overlooked is that it’s actually more expensive to continually replace a high number of departing teachers than to take the necessary steps to cut the teacher drop-out rate. But given the current public view of education and the unwillingness to make meaningful changes, I don’t see this problem changing any time soon. In fact, it’s only going to get worse… far worse.

There’s Always Someone…

I admit it. I did watch the Super Bowl. How could I not, when my grandfather was one of the first season ticket holders, back in the days when the Broncos were truly horrible? I can still remember him taking me to a game, and he went, rain, shine, or snow, until he was physically no longer able. Unfortunately, by then I couldn’t go with him, because I was working in Washington, D.C.

And yes, I was definitely happy that the Broncos won, particularly since I’ve always felt that Peyton Manning is a class act, but that brings me to the point – Cam Newton’s postgame interview, if it could be called that, which was anything but a class act. Yes, he was disappointed, and he wasn’t the first great quarterback to be disappointed, and certainly won’t be the last.

Newton’s real problem is that he is so physically gifted, and has a mind good enough to use those gifts, that he’s never had to consider a few key matters. First, in anything, no matter how big you are, how fast you are, how strong you are, how intelligent you are… there’s always someone bigger, faster, stronger, or more intelligent. Second, football is a team game, and the team that plays better as a team usually wins. Third, sometimes you get the breaks, and sometimes you don’t. Fourth, you don’t win just because you have the better record or the better offense – as Denver found out two years ago. Fifth, it is a game, if a very serious one played for high stakes.

Newton also needs to realize that he’s paid extraordinarily well to do exactly the same thing that every writer does, except few of us, indeed, are paid as well as he is. He’s paid to entertain the fans, and while that means winning as much as possible, it also means not pissing everyone off and coming off like a spoiled kid. This is also something writers need to keep in mind.

Given his talent, I’m sure Newton will be a factor for years to come, but it would be nice to see a bit more class when things don’t go well. You don’t have to like losing, but in the end, as even the great Peyton Manning has discovered, we all lose… and the mark of the truly great is to show class both when things go well and when they don’t.

High Tech – Low Productivity

The United States is one of the most high-tech nations in the world, yet our productivity growth has hovered around a measly two percent per year for almost a decade. In the depths of the Great Recession, that made a sort of sense, but the “recovery” from the recession has been anemic, to say the least. With all this technology, shouldn’t we be doing better?

Well… in manufacturing, productivity has to be up, whether the statistics show it or not, considering we’re producing more with fewer workers, and that has to mean greater output per worker. Despite the precipitous drop in the price of crude oil, the oil industry is almost maintaining output with far fewer rigs drilling and far fewer workers.

But perhaps what matters is what technology is productive and how it is used. I ran across an article in The Economist discussing “collaboration” with statistics indicating that electronic communications were taking more than half the work-week time of knowledge workers, and that more and more workers ended up doing their “real work” away from work because of the burden of dealing with electronic communications such as email and Twitter. And, unhappily, a significant proportion of the added burden comes under the “rubric” of accountability and assessment. But when you’re explaining what you’re doing and how you’re accountable, you’re not producing.

This is anything but the productive use of technology, and it may provide even greater incentive for businesses to computerize lower-level knowledge jobs even faster than is already happening. It just might be that, if you want to keep your job, less email is better. But then, if your boss doesn’t get that message as well, that puts you in an awkward position. I suppose you could console yourself, once you’re replaced by a computerized system, that your supervisor will soon have no one to badger with those endless emails demanding more and more status reports… before he or she is also replaced by an artificial intelligence.

We’ve already learned, despite the fact that too many Americans ignore the knowledge, that texting while driving runs a higher risk of causing fatalities than DUI. Will the supervisory types ever learn that excessive emailing may just lead not only to lower productivity, but eventual occupational suicide?

They Can’t Listen

Some of the complaints that the older generation has about the younger generation have been voiced almost as far back as there has been a way of recording those complaints, and they’re all familiar enough. The young don’t respect their elders; they don’t listen to their elders; they have no respect for tradition; they think they deserve something without really working for it, etc., etc. And, frankly, there’s some validity to those complaints today, as there always has been. That’s the nature of youth: to be headstrong, self-centered, and impatient with anything that hampers what they want.

But being adjacent, shall we say, to a university, I’m hearing what seems to be a variation on an old complaint, except it’s really not a variation but a very troubling concern. What I’m hearing from a significant number of professors is that a growing percentage of their students can’t listen. They’re totally unable to maintain focus on anything, often even visual presentations, for more than a few seconds – even when they seem to be trying. When they’re asked what they heard or saw, especially what they heard, they can’t recall anything in detail. We’re not talking about lack of intelligence – they do well on written multiple-guess tests – but about an apparent inability to recall and process auditory input.

Unless there’s something of extraordinary interest, their attention darts from one thing to another in a few seconds. Whether this is the result of a media-driven culture, earlier teaching methods pandering to learning in sound bites, a lack of discipline in enforcing focus, or some combination of these or other factors, I can’t say. But, whatever the reason, far too many students cannot focus on learning, especially auditory learning.

Unfortunately, the response of higher education has been to attempt to make learning “more interesting” or “more inspiring” or, the latest fad, “more experiential.” Learning through experience is an excellent means for attaining certain skills, provided the student has the background knowledge. But when a student hasn’t obtained that background knowledge, experiential learning is just meaningless and a waste of time and resources. And, generally speaking, learning has to begin with at least some listening.

Furthermore, in the “real world,” employers and bosses don’t provide “experiential learning.” They give instructions, usually orally, and someone who can’t listen and assimilate knowledge from listening is going to have problems, possibly very large ones.

Despite all the academic rhetoric about students being unable to learn from lectures, lectures worked, if not perfectly, for most of human history. That suggests that much of the problem isn’t with the method, but with the listener. And it’s not just with professors; students can’t listen to each other, either. That’s likely why they’re always exchanging text messages. If this keeps up, I shudder to think what will happen if there’s a massive power loss, because they apparently can’t communicate except through electronic screens.

The “Federal Lands Fight”

The state legislature here in Utah has proposed setting aside $14 million for legal action against the federal government to “force” the United States to turn over all public lands to the state. This is just the latest effort in Utah to grab federal lands.

There are several aspects of this hullabaloo over federal lands that neither the legislature nor the Bundyites seem to understand… or want to. First, the Constitution vests public lands in the federal government, and numerous court cases have upheld that reading of the Constitution. Second, a 2012 study calculated that managing those lands would cost the state of Utah something like $278 million a year, and while much of that cost might initially be recovered through oil, gas, and coal leases, once the resources were extracted, the costs of management would remain, and the lands would have even less value. Third, if the grasslands were leased to ranchers, either the grazing fees would have to increase, since the BLM charges only about a third of what management actually costs it [and one of the problems now is that the BLM doesn’t have enough money to manage the wild horse problem, among others], or the state would have to pick up the difference, which it can’t afford.

In short, not only is what the legislature proposes illegal and unconstitutional, but the federal government is actually subsidizing the ranchers and the state of Utah, something the legislators don’t seem able to grasp.

The ranchers here in southern Utah are furious that the BLM doesn’t essentially round up all the wild horses so that there’s more forage for their cattle. But even if the BLM had the resources to do that – which it doesn’t, because Congress has insisted on underfunding the BLM and on keeping grazing fees low – it still wouldn’t solve the problem. Both western water rights and grazing rights were predicated on the climate of the early part of the nineteenth century, which geologists have discovered was one of the wettest periods here in the west in something like 10,000 years. That is why the BLM has cut down on the number of animals allowed per acre – yet another rancher complaint.

In short, the ranchers, the legislature, and the Bundyites are precluded from doing as they please by the Constitution, the climate situation, and the Congress, and they’re so unhappy about it that they think the Second Amendment is the only answer. So, despite all their railing about their Constitutional rights, I guess they really mean that they intend to comply with just those parts of the Constitution they agree with, and that they’ll continue to insist that the Supreme Court has been wrong about what the Constitution means for over a century.

Everyone/No One Is Entitled?

Over at least the past decade, there’s been debate about entitlements and about a younger generation that may or may not feel “entitled.” Almost always, the phrase is used derogatorily, suggesting individuals or groups who feel they deserve something without paying for it. Although “entitled” actually means that someone has been given the right to receive something, Americans have a problem with those whom they believe do not deserve that right.

My problem with all the debate is that it’s not inclusive enough, that all too many groups and individuals are receiving societal/governmental benefits for which they either have not paid anything or for which they have paid a minimal amount in comparison to the value of what they have received. Now… in the United States, there are certain benefits to which law-abiding and tax-paying citizens are or should be “entitled.” We deserve fair and impartial laws and a justice system that supports them. We should have a government that protects us from attack by other countries and by terrorists or by law-breakers within our own society. We have decided as a society that part of the role of government is to support highway systems and air transport systems that benefit us all, and to regulate businesses and organizations so that we all have clean air, safe food, and various safe products. For these and other services we pay taxes.

The entitlement problem comes when people are perceived to receive services and benefits out of proportion to what they have paid. When people receive welfare benefits of various sorts for long periods of time, with some families receiving them for generations, people get angry, even though statistics show that most welfare recipients don’t receive benefits for nearly that long.

Likewise, often business owners or professionals in a field get angry when younger people express the idea that they are “entitled” to a job, especially a particular position, even when they don’t have the requisite education and/or experience.

Those are the well-known examples of “undeserved entitlement,” but what about those that aren’t so well known? For example, isn’t the corporation that receives the overall services, legal system, and national market provided by the government, but which pays no taxes on billions of dollars of income, receiving an undeserved entitlement? Or the Bundy family, which is supposed to pay $1.70 per cow and calf for federal grazing rights [a fee less than a tenth of that charged on private land], yet hasn’t paid any of those fees for almost a decade and claims that the land belongs to them through what amounts to squatters’ rights? What about a company that “bargains” for tax breaks from states when locating a new facility [which effectively places more of the burden for state services on other taxpayers]? Are oil companies and others investing in oil and gas development entitled to a “depletion allowance,” which can reduce taxable income by as much as fifteen percent, simply because it’s possible they might run out of oil and gas to extract? Why are homeowners entitled to deduct their mortgage costs from their taxable income [perhaps as a subsidy to the construction industry?], while renters can’t deduct their rent payments? Then there are the unnecessary military bases that the Defense Department can’t close because senators and representatives insist their constituents are entitled to the remaining jobs at those facilities – which means the rest of us end up paying for those entitled jobs.

So… when people start complaining about entitlements, perhaps they should consider how many they enjoy that they haven’t considered. But then, those are always the exceptions that are deserved.