The Hullabaloo Over College Majors

Now that it’s the season for college graduation, once more the articles and commentaries are popping up everywhere – and most of them either tout certain undergraduate majors as “good” because employment in that field is up or dismiss others as “bad” because immediate job prospects aren’t as good.  What’s even worse is that politicians are getting into the act, some of them going so far as to suggest that students shouldn’t major in fields that don’t pay as well or where employment prospects aren’t so good, with hints that government and universities shouldn’t offer aid to students interested in such fields.

There are enormous problems with the whole idea of over-emphasizing undergraduate collegiate majors, the first of which is that many students entering college don’t have the faintest idea what their true talents are or whether their interests match their abilities. This problem has worsened in the past several generations as the general academic rigor of high schools has declined and as more students enter colleges and universities without ever having been truly tested to the limits of their abilities.

The second problem is that the emphasis on a “profitable” major is also a growing emphasis on turning college into what amounts to a white-collar vocational school, rather than an institution devoted to teaching students how to think and to learn on a life-long basis.  Colleges themselves are buying into this by pushing departments toward “accountability” and insisting that they determine how many graduates are successful and employed in the field years after graduating.  But does that really measure success?

In addition, the emphasis on selecting a major based on future projected employability neglects two incredibly important factors.  The first is the student’s aptitudes.  A student who is weak in mathematics is highly unlikely to be particularly successful in fields that require that ability, no matter how many jobs exist.  Second, most students take four years or more to finish college.  Projecting what occupations will be hiring the most in four years is chancy.

As for the subjects students choose for their major, the “employability” measurements used are generally employment in the first year after graduation, and the differences in various fields aren’t that significant.  For example, in a recent Georgetown University study, there was only about a 10% difference in employment between the “worst” and “best” undergraduate majors. Such measurements strongly suggest that a student who likes a field and works hard to excel is more likely to land a job, even in a field where employment is not as robust, than a student who tries to game the employment field and who picks a major based on projected employment and earnings rather than on picking a field suited to his or her abilities. In short, it’s far better for students to be at the top of a field they like than at the bottom of one that they don’t.

More than a few studies have shown and projected that today’s educated workers will have changed fields of work three to four times between entering the workforce and retiring – and that today’s students will face even more need to change their field of work.  Such changes place a premium on the ability to think and to learn throughout life, not on a single set of skills tailored to one field or profession.  Yes, there are some fields where dedicated and directed learning is required from the beginning of college, but those fields are a minority, and they seldom attract students who are unsure of what they want to do in life or students with wide interests.

In the end, all the political and media concern about “appropriate” majors, despite protests to the contrary, ignores much of what is best about college and for the students by emphasizing short-term economic goals that cannot possibly benefit the majority of students.

 

Media Dumbing Down

When we first got satellite television some fifteen years ago, in the infrequent times we watched television, our tastes ran to channels like Bravo, A&E, History, and Biography.  Now we almost never tune in those channels, or many others of the hundreds available.  Why not?  Because over the last decade, those once-independent channels have been purchased by major networks, which changed the programming that made them attractive to us.

Where are the biographies of the Founding Fathers, the great industrialists, great painters, poets, revolutionaries, thinkers, architects, authors – or the other notables of the past and present?  They’re gone, replaced by hour after hour of “Notorious,” each hour devoted to some heinous criminal or another, or other uplifting shows like “Outlaw Bikers.”

As for the History Channel, where are the great events or pivotal points in history?  Also gone, replaced by documentaries on the history of plumbing and endless hours of “Swamp People” or “Pawn Stars.”

A&E used to provide a wide range of material, from architectural/history gems like “America’s Castles” to docudramas like “Catherine the Great” (starring Catherine Zeta-Jones, no less). Now what is there?  Six straight hours of “Storage Wars!”

I love science… but I can’t watch most science shows anymore, not when they’re presented at a third-grade level and when, after a commercial break, the narrator repeats the last minute before the break, as if the viewing audience were developmentally disabled.

And the commercials, endless minutes, each one ending, I suspect, with the immortal words, “But wait!  There’s more.  If you order now…”

The movie channels aren’t much better, except for TCM, because each channel takes its turn with the same movie.  How many times do you want to see “Secretariat”?  And I liked that one a lot.  But most aren’t that good…

Now… if I wanted, I could subscribe to every sports event offered in the United States and hundreds more from across the world… but March Madness is enough sports for us for the entire year.

Yes… satellite/cable television once was a good thing… until the media titans took over and turned it into a true triumph of capitalism… dollars over quality, and while the dollars are rolling in and the quality degrades further, the politicians in Washington are trying to gut public television, which is about all that’s left with offerings that aren’t dealing in endless moronic variations on pop culture, sex, violence, or sports.  But then, public media channels, supposedly regulated by the FCC for the people, are only about the dollars, aren’t they?

Political Dialogue and Analysis

With just a bit less than six months before the fall elections, in one sense, I can’t wait for the elections to be over, if only to get a respite from political news and sensationalism… but even that respite isn’t likely to be very long, because politics has become not only continuing news, but something resembling a spectator sport.  And like all spectator sports, the political arena is filled with commentary.  Unlike athletic spectator sports, where the acts of the players and the results can be seen immediately, in politics the results of political actions, laws, and policies, in the vast majority of cases, can’t be discerned clearly for years, if ever.

This allows everyone to comment with “equal validity,” because very few members of the public have the knowledge of economics and politics, as well as the patience, to wait and see how things actually worked out.  Nor do most people accurately remember what did happen.  So they tend to trust the commentator whose views most nearly mirror their own, not necessarily the commentator or expert who’s most likely to be right.

One of the things that appalls me the most is how both parties not only distort each other’s positions, but also employ the most inaccurate comparisons and truly inapplicable facts.  What makes it worse is that very few commentators, talk show hosts, or columnists have either the ability or the nerve to suggest that such distortions are doing extreme violence to accuracy [I won’t say “the truth,” because that’s become a pseudo-religious term] and relevance.

Some of the worst offenses against such accuracy lie in the ignoring of well-known and proven facts.  For one thing, economies react slowly, often ponderously, to changes in law and policy.  So, like it or not, Bill Clinton got a tremendous boost from policies enacted by the first President Bush, and in turn, the first President Bush was forced to raise taxes by the policies of his predecessor, a fact gloriously ignored by those who cite the great Reagan prosperity.  Admittedly, in Clinton’s case, he had enough sense to continue those policies when he was under pressure to change them, but the conditions for his highly praised period of expansion lay in his predecessor’s actions.  Likewise, to blame President Obama for current high unemployment and recession, when those conditions were created by policies put in place well before his election, and when the U.S. also has to absorb economic fall-out from all across the world, is politically easy, but factually inaccurate, especially when political gridlock in Congress has restricted his ability to attack the problem in the way he would like.  But few of his critics will admit that they’re judging him as much, if not more, by Congressional inaction than by his own acts.

Comparing one economic recovery, or non-recovery, to another is not only inaccurate, but disingenuous, because the underlying factors differ greatly, yet such comparisons are a staple in the political arena, because politicians and their aides have an addiction to the simple and superficially relevant.

In addition, some factors are beyond any President’s, or any Congress’s, ability to change.  Oil is a fungible global economic good, and, in the short run, no change in U.S. environmental, energy, economic, or tax policy is going to measurably lower the price of crude oil in the months ahead, although the Saudi actions to flood the market with cheaper oil will likely cause a temporary respite, at least until world economic activity picks up.  Unwise government action can, as Richard Nixon proved with his ill-fated experiment with price controls, cause gasoline and heating oil shortages and increase prices in the long term.

Another problem in assessing government/political actions is determining how effective preventative actions are… or accepting the benefits while disavowing the means.  We know that the U.S. safety net for the poor has in fact historically reduced overall poverty in the United States – but which programs really work the best?  Which are failures?  Which work, but are so inefficient that they should be replaced?  How many of all the Homeland Security measures are truly necessary?   Most Americans seem to have forgotten that before the enactment of the Clean Water Act, the Cuyahoga River in Cleveland actually caught fire, or that the Potomac River was actually toxic.  Or that before the enactment of the Clean Air Act, office workers in Pittsburgh often took a second white shirt to work because the first got so soot-filled by midday that it looked black and gray?  Instead of the debate being about drinkable water and breathable air, it’s become about whether environmental protection costs too much and slows or hinders job creation, and almost no commentator questions the terms of the debate.

As I’ve pointed out all too many times, there has not yet been any determination of personal accountability for the latest economic meltdown – and now we’ve had a reminder, in the recent Citibank derivatives loss/scandal, that neither Congress nor the President [either Bush or Obama] ever truly addressed the problem, but merely papered it over.  But I’ve never heard any commentator mention that – or attack the corrupt culture of the financial world and those who lead it.

Instead, we get media and political emphasis on the irrelevant, the inaccurate, the inappropriate, and the inapplicable… and the worst part of it all is that it’s only going to get worse over the next six months.

 

Excellence and Self-Promotion

I grew up in a time and a place where blatant self-promotion was deeply frowned upon.  My father made a number of observations on the subject, such as “Don’t go blowing your own horn; let your work speak for you” or “The big promoters all lived fast lives with big mansions and died broke and forgotten.”  As I’ve gotten older, I’ve learned that promotion and self-promotion have always been with us, dating back at least as far as Ramses II, who, at the very least, gilded if not falsified, in stone, no less, his achievements in battle.  And to this day most people who know American history [a vanishing group, I fear] still think that Paul Revere was the one who warned the American colonists about the coming British attack on Concord – largely because of the poem written by Henry Wadsworth Longfellow, which promoted Paul Revere, possibly because Longfellow couldn’t find enough words to rhyme with Samuel Prescott, the young doctor who actually did the warning after Revere was detained by the British.

Still… in previous times, i.e., before the internet age, blatant self-promotion was limited by economics, technology, and ethics, and there were more than a few derogatory terms for self-promoters.  And who remembers when the code of ethics of the American Bar Association banned advertising by attorneys?  Lawyers who tried to promote themselves publicly were termed ambulance chasers and worse… and disbarred from the profession.  The same ethics applied to doctors and pharmaceutical companies.  Of course, there were never many restrictions on politicians, and now, unsurprisingly, there are fewer.

Unhappily, in field after field, excellence in accomplishment alone is seldom enough for success any more.  For more than modest success, excellence also requires massive promotion and/or self-promotion, even among authors.  Some of us try relatively tasteful self-promotion, by attempting enlightening and hopefully entertaining websites, such as this.  Others go for a more sensational approach, and for some, no excess is too much.  From what I’ve observed lately, massive promotion and mere marginal competence in writing, along with cheap wares, result in sales that far outshine good entertainment or excellent writing that does not enjoy such promotion.  One of the associated problems is that promotion or self-promotion takes time, effort, and money — and all of these detract from the time, effort, and resources an author can devote to the actual writing.

Several years ago, Amazon embarked on a campaign to persuade people rating books to use their real names, rather than pseudonyms, because authors [yes, authors] were using aliases or the aliases of friends to blatantly praise their own work, and in some cases, to trash competing works. I have no doubts that the practice continues, if slightly less blatantly.

In today’s society especially, my father’s advice about not blowing your own horn leaves one at a huge disadvantage, because amid the storm of promotion and self-promotion, all too few people can either find one’s unpromoted work or have the time or expertise to evaluate it… and if someone else blows a horn for you, it’s likely to be off-key and playing a different tune.

 

The “Competitive/Comparative” Model in Teaching

The local university just announced a merit pay bonus incentive for faculty members, and the majority of the Music Department greeted the plan with great skepticism, not because they opposed recognition of superior accomplishment… but because the proposed structure was essentially flawed.  In fact, for many university departments, and for most schools, as well as many businesses and corporations, such “merit” awards will always be fatally flawed.

Why?  Because all too many organizations regard their employees and even their executives as homogenous and interchangeable parts, even when duties, skills, responsibilities, work hours, and achievements vary widely.  Those variances are even greater in the academic community, and yet paradoxically, in terms of administration and pay, they’re even less recognized there than in the corporate world.

Take a music department, for example, with instrumentalists, pianists, vocalists, composers, music educators, and musicologists.  How, with any sense of fairness, do you compare expertise across disciplines?  Or across time?  Is the female opera director who built a voice program from nothing over 15 years, who has sung intermittently on low-level national stages, who is a noted reviewer in a top journal in her field, and who serves as a national officer in a professional organization more to be rewarded than the renowned pianist who won several prestigious international competitions and performs nationally, but who limits his teaching to the bare minimum?  Or what about the woodwind player who was voted educator of the year for both the university and the state, and who is known regionally, if not nationally, as a devoted and excellent teacher?  Or the percussionist who revitalized the percussion program and performs on the side with a group twice nominated for Grammys?  Or the soprano who sings in an elite choral group also nominated for a Grammy?

Then add the fact that all of them are underpaid by any comparative standard with other universities [which also indicates just how hard music faculty jobs are to find and hold]… and with other departments, even though the music faculty work longer hours as well as evenings and weekends, and the fact that the annual “merit pay” award would be a single payment of $1,000-$2,000 to only one faculty member.  In essence, the administration is attempting to address systemic underpayment and continued inequalities with a very small band-aid, not that the administrators have much choice, given that the legislature won’t fund higher education adequately and tuition increases are limited.

In primary and secondary schools, merit pay has become a huge issue, along with evaluating teachers.  Everyone, even teachers, agrees that good teachers should be rewarded and bad ones removed.  But determining who is good or average, and who gets paid what, is far, far harder than it looks, which is why most teachers have historically opposed the concept of merit pay: in all too many cases where it has actually been implemented, it’s gone to administration or parent “favorites,” who are not always the best teachers.  A competent teacher in an upper-middle-class school where parents are involved and concerned should be able to boast of solid student achievement on tests, evaluations, etc.  A brilliant, dedicated, and effective teacher in some inner-city schools may well be accomplishing miracles to keep or lift a bare majority of students to grade level, while a merely competent teacher there may only have a few students on grade level.  Yet relying on student test scores would suggest that the first of these three teachers deserved “merit pay.”  And in “real life,” the complications are even greater.  How do you compare a special education teacher with standard classroom teachers, even in the same school, let alone across schools with different demographics?

In addition, when teachers feel overworked and underpaid, and many, but not all, are, offering merit pay tends either to set them competing for the money or to provoke rejection of the entire idea.  I’ve seen both happen, and neither outcome is good.  Yet the underlying principle of ratings and “merit pay” is that such comparisons are possible and valid.  So far, I’ve yet to see any such workable and valid plan… and neither have most teachers.  And when merit pay is added to all the other problems with the educational system that I’ve discussed in other posts, all merit pay usually does is make the situation worse.  It’s an overly simplistic solution to a complex series of problems that few really want to address.  But then, what else is new?