The New Monopolists

A week or so ago, a U.S. District Court approved the e-book settlement between the Department of Justice and Hachette, Simon & Schuster, and HarperCollins, a settlement that opens the way for Amazon to sell e-books from those publishers at any price Amazon chooses.  The Justice Department, of course, hails the settlement as a groundbreaking and anti-monopolistic agreement that will provide cheaper books to consumers. In thinking this all over, I realized that the entire structure and operation of monopoly has changed in the last twenty years, while the definition has not. The change has gone so far that the Sherman Anti-Trust Act, designed to prevent the harmful effects of monopoly, has, in the case of the publishing settlement, become an instrument to support monopoly – and no one seems to realize this.  How did this happen?

A century ago, the operation of a monopoly was clearly defined.  A company such as Standard Oil bought up all the competition, or the majority of it, sometimes used low prices as a temporary measure to bankrupt competitors or drive them out, then took control of the market and raised prices to make a greater profit.  Today, companies like WalMart and Amazon have developed a very different monopolistic approach. They begin with selectively low prices and equally low wages for employees.  The low prices of highly visible selected goods attract more customers, and few people notice that other goods aren’t any cheaper and in some cases are even more expensive. WalMart gets around this by allowing customers to show competitors’ prices and then matching those prices… but most customers can’t and won’t do that for the majority of goods.  In the case of Amazon, Jeff Bezos lost money for years building that bookselling customer base.

Then, once the new monopolists have that customer base, they exert pressure on suppliers to provide goods for lower and lower prices.  Both WalMart and Amazon are excellent at this.  Amazon provides its marketplace for online retailers, then scans their sales, discovers what items are selling well and in large quantities, and either pressures the supplier to sell to Amazon directly for less, thus undercutting the Amazon affiliate, or finds another supplier to do so more cheaply. Recently, reports have surfaced that Amazon is using similar tactics with small and independent publishers, who don’t have the clout or the nerve that some of the larger publishers have.  Thus, in the end, the new monopolists aren’t gouging the consumer but are using excessive market power to gouge their suppliers and their own employees, all the while claiming that they’re not monopolists because people are getting goods at lower prices.

What the Department of Justice and the legal scholars seem to be overlooking is that such behavior is still restraint of trade – it’s just restraint imposed on suppliers and through low employee wages rather than price-fixing by the retailer… and it has a definite negative impact on both local economies and the national economy, most obviously in that lower-paid employees can’t live as well, don’t buy as many other goods, and pay less in taxes.

In fact, Jeff Bezos even declared that his goal was to destroy the traditional paper-based publishing industry and take over the information marketplace. If that isn’t a declared intent to monopolize an industry, I don’t know what is. The new monopoly structure also may well be a far more deadly form of monopoly than the old one because it impacts the entire supply chain and effectively reduces incomes and the standard of living of tens of millions of Americans, both directly and indirectly. As I’ve noted before, already the publishing marketplace has changed, in that there’s less diversity in what’s published by major publishers, and more and more former midlist authors are having trouble getting published… or have already been dropped.

While Borders Books had its management problems, the final straw that pushed the company out of business was likely Amazon’s predatory pricing. In the years before its final collapse, Borders’ annual sales were around $4 billion, and it operated close to 400 brick-and-mortar stores with approximately 11,000 employees.  Those sales and payrolls, not to mention the store rental costs, likely generated a positive economic impact of anywhere from $40 to $70 billion. While some of those sales have gone to Barnes & Noble or Amazon, most have not, and the operating expenses and payrolls once paid by Borders are almost entirely an economic loss, since Amazon and Barnes & Noble didn’t add many new employees or, in the case of B&N, open new stores.  Books-A-Million did open some new stores, but only a handful.

Amazon’s policies have also resulted in lost revenue for independent bookstores, as well as closure of a number of stores of smaller regional bookstore chains, just as WalMart’s policies have adversely affected local and regional retailers. Yet the Department of Justice claims a victory in a settlement that reinforces the practices of the new monopolists where, apparently, the only determining factor is how cheaply consumers can obtain a carefully selected range of ebooks.

All hail the monopolists of “cheap” and “cheaper.”

 

The Danger of Blind Faith

A film that most Americans had never heard of or considered appears on YouTube, and anti-American riots break out in Egypt and Libya, during which four Americans are killed, including the U.S. ambassador to Libya. While recent information suggests that the demonstration was planned as a cover for the assassination, the fact remains that there was a demonstration in Egypt, that the Libyan plotters had no trouble rounding up plenty of outraged Muslims, and that additional protests have since occurred in Malaysia, Bangladesh, and Yemen. Some might dismiss this as a one-time occurrence.  Unfortunately, it’s not.  Several years ago, a Danish newspaper published some satirical cartoons of Mohammed, and that caused violence and uproar.  When the novelist Salman Rushdie published The Satanic Verses, the Ayatollah Khomeini of Iran issued a fatwa calling on all good Muslims to kill Rushdie and his publishers, forcing Rushdie into seclusion for years.

Some people might declare that things are different in the United States… and they are, in the sense that our population doesn’t have as many “true believers” who are willing to kill those who offend their religious beliefs or so-called religious sensibilities.  But we do have such people.  After all, what is the difference between fanatical anti-abortionists who kill doctors who perform legal abortions and fanatical believers in Islam who kill anyone who goes against what they believe? Is there that much difference in principle between Muslims who want Islamic law to replace secular law and fundamentalist Christians who want secular law to reflect their particular beliefs?  While there’s currently a difference in degree, five hundred years ago there wasn’t even that.

What’s overlooked in all of the conflict between religious beliefs and secular law is the fundamental difference that, for the most part, secular law is concerned with punishing acts that inflict physical or financial harm on others, in hopes of deterring such actions, while religious law is aimed at requiring a specific code of conduct based on the particular practices of a single belief. The entire history of the evolution of law reflects a struggle between blind adherence to a narrow set of beliefs and an effort to remove the codes that govern human behavior from any one set of beliefs and to place law on secular foundations, reflecting the basics common to all beliefs. Historically, most religious authorities have resisted this change, not surprisingly, because it reduced their power and influence.

Thus, cartoons of Mohammed or satirical movies do not cause physical harm, but they are seen to threaten the belief structure.  Allowing women full control of their bodies likewise threatens the belief structure that places the life or potential life of an unborn child above that of the mother.  When blind faith rules supreme and becomes the law of any land, no questioning of that law is acceptable.

When a specific belief structure dominates a culture or subculture, the lack of questioning tends to permeate all aspects of that society.  To me, it’s absolutely no surprise that there’s a higher rate of denial of scientific findings, such as evolution and global warming, among Christian fundamentalists because true science is based on questioning and true belief is based on suppressing anything that raises questions… and such societal suppression is the greatest danger of all from blind faith, whether that faith is Islam, LDS, Christianity, or even a “political” faith, such as Fascism, Nazism, or Communism.

 

Success Or Failure?

Some twenty years ago, at the Republican convention that nominated George H.W. Bush for a second term, Pat Buchanan made a speech essentially claiming that what he stood for was the beginning of a fight for the soul of the Republican Party.   That struggle has persisted for twenty years, and now the Republican Party platform largely conforms to what Buchanan outlined.  Paradoxically, some opponents of Republican policies might claim that the platform proves the Party has no soul, but I don’t see anyone raising the larger question:  Should a political party aim to have “a soul”?

Over the more than two centuries since the U.S. Constitution was adopted, there have been more than a few disputes and scores of court cases involving the respective roles of religion and government in American society, the idea of separation of church and state notwithstanding.  Yet doesn’t anyone else find it strange that, in a society that theoretically does not want government dictating what its people should believe, and in a land created to avoid just that, one of the major political parties has been striving to find its soul, when the very idea of a soul is an inherently religious concept?

Not only that, but the closer the Republican Party has come to adopting Buchanan’s positions, the more the partisans of this “soulful” party have attempted to force government to adhere to positions based on highly religious views – many of which are not shared by the majority of Americans.  And compelling a secular state, which the United States is, despite the “under God” phraseology, to mandate conduct based on religious views is diametrically opposed to what the Founding Fathers had in mind.

Part of the reason for the growing push to embody “religious” ideas in statute is likely the fact that the United States has become more diverse; many feel that the nation no longer follows “traditional” values and have reacted by attempting to prohibit any government program that they see as opposing, or failing to support, such values. There have always been those who did not fully embrace such values, including such Founding Fathers as Thomas Jefferson, but the idea of using government to insist on such values in law, as opposed to defining acceptable conduct in secular terms, has continued to gain ground, particularly in the past twenty years.

Even if the United States continues to diversify, I suspect that the founders of this nation, who were largely skeptical of political parties, would be even more skeptical about fighting for the “soul” of a political party.

 

The “Birther” Controversy?

According to the September issue of The Atlantic, one in four Americans believes that President Obama is not a “natural born citizen” of the United States, while half of all Republicans believe it.  Given the latest political identification figures from the Rasmussen Report of June 2012, and the number of registered voters in the United States, that means that some twenty percent of Democrats and independents hold this belief as well, still a considerable number.

The U.S. Constitution only specifies that, to be President, a person must be a “natural born citizen” of the United States, but does not define that term.  Over the time since the Constitution was adopted, the courts have defined a “natural-born citizen” as a person who was born in the United States and under its jurisdiction, even to alien parents; or was born abroad to U.S. citizen parents; or was born in other situations meeting the legal requirements for U.S. citizenship at birth.

At least three court suits have been filed on the question of Obama’s citizenship, all in different states, and the determinations in all cases have affirmed that he is a “natural-born” citizen.  He was, despite all the rhetoric to the contrary, born in a U.S. state to an American citizen.

So why do so many people, Republicans, in particular, believe he isn’t a “natural-born” citizen?

Yes, his mother divorced his father and then married an Indonesian and moved to Indonesia for a time, but the courts have previously ruled in similar cases, including that of a woman born in the United States [with only her mother as a U.S. citizen, as was the case with Obama] who lived in a foreign country from the age of three until she was twenty-one, that such a person is still a natural-born citizen.

And why do so many Americans believe that he is a Muslim, when the man has attended Christian churches for so many years?

Or are these convenient beliefs merely a cover for the fact that Obama is black, and many voters, obviously including a significant proportion of Republicans, simply don’t want to admit publicly that they don’t like and don’t want a black President?  Instead, they claim that his mother was too young when she married his father [using convoluted legal rhetoric to claim that because she was so young, the rules for a child being a citizen when only one parent is a citizen don’t apply, that is, if Obama didn’t happen to have been born in a U.S. state, ignoring the fact that he was] or that his birth certificate was forged, or that he was really born in Kenya.

It’s one thing to oppose a politician for what he stands for; it’s another to invent reasons to oppose him to avoid facing personal prejudices… and it’s a shame so many Americans have to go to such lengths to avoid admitting those prejudices.  And it certainly doesn’t speak well of the United States that so many Americans accept such arguments as having any validity at all.

 

The Stigmatization of Early “Failure”

College professors are faced with a new generation of students, one filled with students termed “teacups,” who break or go to pieces when faced with failure of any sort.  They’ve been protected, nurtured, and coddled from their first pre-school until they’re sent off to college.  Their upbringing has been so carefully managed that all too many of them have never faced major disappointments or setbacks. Their parents have successfully terrorized public school teachers into massive grade inflation and a lack of rigor – except at select schools and in some advanced placement classes, where the pressure is so great that many of the graduates of those schools come to college as jaded products of early forced success, also known as “crispies” – already burned out.

Neither “regime” of “success” is good for young people. As I’ve noted before, the world is a competitive place, and getting more so.  Not everyone can be President, or CEO, or a Nobel Prize-winning author or scientist.  Some do not have the abilities even for the few middle management jobs available, and many who do have the abilities will not achieve their potential because there are more people with ability than places for them.

Even more important is the fact that most successful individuals have had more failures in life than is ever widely known, at least until after they’ve become successful. Before he became President, Abraham Lincoln had a most mixed record. Among other things, he failed as a storekeeper, as a farmer, in his first attempt to obtain political office, in his first attempt to go to Congress, in trying to get an appointment to the United States Land Office, in running for the United States Senate, and in seeking the nomination for the vice-presidency in 1856.  Thomas Edison made 1,000 attempts before he created a successful light bulb. Henry Ford went broke five times before he succeeded.

For the most part, people learn more from their failures than their successes.  More often than not, most people who are early successes, without failure somewhere along the line, never really fulfill their potential.  Even Steve Jobs, thought of as an early success, failed several times before he could launch Apple, and then the management of the company that he founded threw him out… before he returned to revitalize Apple.

Yet these young college students are so terrified of failing that many of them will not attempt anything they see as risky or where a possibility of failure exists.  Paradoxically, many will also attempt something they have no business trying, or something well beyond their ability, because they have been told all their lives how wonderful they are – and they become bitter and angry at everyone else when they fail, because they have no experience with failing… and no understanding that everyone fails at something sometime, and that failure is a learning experience.

Instead, they blame the professor for courses that are too difficult, or claim that they were overstressed or overworked… or something else, rather than facing the real reasons why they failed.

Failure is a learning experience, one that teaches a person his or her shortcomings and limitations, and sometimes a great deal about other people as well.  The only failure in failure is failing to understand this and to get on with the business of life… and to learn where and at what you can succeed.