Friday, January 31, 2020

Keynes and Us

In the last week I finished a remarkable book, Money and Government, by the British historian Robert Skidelsky.  Eight years older than I am, Skidelsky made his reputation writing a three-volume biography of John Maynard Keynes, which appeared between 1983 and 2000.  This exhaustive work--most of which I have not read--paid tribute to the man who provided the theory behind the enormous economic success of the middle third of the twentieth century.  I too had learned to revere him in my youth, and have been almost as astonished to find him become unfashionable in my middle and old age.

Keynes was the hero of by far my most important course as a Harvard freshman in 1965-6, Economics 1.  As I described in my autobiography (see above), the course spent the first term on microeconomics and the second, more important term on macroeconomics.  Microeconomics focused on the theory of competitive markets and the Pareto optimum, which, it was easy to see then, was an ideal type (a concept I learned later) with only very intermittent relation to reality.  Firms large and small were always looking for edges that would make markets less competitive, to prevent the market from driving profit down to the affordable minimum.  Macroeconomics, on the other hand, was in the midst of the climax of the Keynesian era, which had saved both capitalism and civilization.

Classical theory held that national economies naturally reached an equilibrium, and that disturbances came from non-economic factors like famine, war, or unwise government policy.  Classical economists and their allies in national banks and treasury ministries believed that economies would self-correct, provided the banking system maintained stable prices.  For much of the 19th century this seemed like a reasonable approximation of the truth, since all the advanced economies grew quite impressively and prices remained stable, even though serious panics occurred at least every 20 years or so.  Things changed, however, in the wake of the First World War, and especially, of course, during the Great Depression.  Keynes, a Cambridge academic who had also worked in the British Treasury during the war, had too much respect for reality to stick to the old theory in the midst of inflation and depression in the 1920s and 1930s.  He eventually argued in The General Theory of Employment, Interest and Money that national economies could reach a kind of equilibrium that involved high unemployment, without any natural countervailing force operating to reduce it.  He also realized that savings did not necessarily turn into investment, and argued that when private interests failed to invest enough of their wealth to stimulate the economy, the government had to use that money itself--obtained if necessary by borrowing--for investment in public goods that would stimulate the economy effectively.  That was, of course, also the theory, in a very raw form, of FDR's New Deal, although Roosevelt got the nation and the world economy back into serious trouble again in 1937 when he decided to try to balance the budget,  helping to trigger another serious recession.  In Keynes' own Britain, however, no government tried his prescription seriously until the Second World War came.  
And indeed, Skidelsky's first book--probably his Ph.D. thesis--entitled Politicians and the Slump, described how the Labour Government of 1929-31 failed to try the Keynesian remedy when the Depression hit, split itself, and saw its leaders form a government with the Tories instead.

I used that book to write one of the presentations I gave in my first-year graduate school colloquium in the spring of 1972.  I had been brought up in a New Deal household, I had read Arthur Schlesinger's New Deal histories at a pretty early age, and I had also learned in Economics 1 how well the Keynesian theory had been working in the Kennedy and Johnson Administrations.  Indeed, I recall how my section man, David Major, in our very last class, remarked that the economics profession had made remarkable strides in solving macroeconomic problems in recent years, but not in microeconomic ones.  I also remember that he spent about 20 minutes of one class talking about the bizarre ideas of a rogue economist named Milton Friedman, then regarded as an oddball.  "I think it's good for you to be exposed to this," he said.

Skidelsky's new book is a survey of large-scale economic thought since the 18th century, focusing on the rise and fall of Keynesianism.  Clearly he, like me, never imagined that the man to whom he devoted several decades of his life, and who had done so much to create the benevolent world that he and I grew up in, could become so unfashionable.  But he has, and Skidelsky explains how.  The pretext for discarding him was the advent of stagflation--a combination of high unemployment and very high inflation--that hit the western world, and especially Britain, in the 1970s and 1980s.  Keynesians had not anticipated this and had no remedy for it.  Others, however, eagerly seized upon this to repudiate the whole Keynesian model, because they wanted to restore the economic sovereignty of private enterprise and eliminate the government as a competitor for the use of capital, an accumulator of revenue, and a serious regulator of private enterprise.  Margaret Thatcher, Ronald Reagan, and Paul Volcker of the Fed tossed the Keynesian idea out the window, and in the 1990s Bill Clinton and Tony Blair did not really pick it up again.  The idea of the economy as a benevolent self-regulating mechanism returned to favor, and from about 1990 to 2006, the combination of steady growth and lower inflation seemed to favor it.

I cannot take the time to summarize Skidelsky's academic arguments.  Suffice it to say that the neo-classical model of economics that once again dominates the profession relies on an absurd view of human nature, as he realizes.  Economic men and women ruthlessly maximize their well-being, always buying at the lowest available price, investing eagerly at equilibrium interest rates, and willingly working for the prevailing wage.  Unemployment, this view holds, occurs when prevailing wages are too high, period.  Markets, such as the housing market (!!), regulate themselves far better than any government bureaucrat could.  Economics, I think, attracts a lot of scholars drawn to the beauty of mathematical theory--but not to the study of actual reality.  Such is the hegemony of a certain set of ideas, however, that one can spot only a few dissenters such as my old friend Jamie Galbraith here and there on the horizon, and they do not exert significant influence in either Republican or Democratic administrations, or Labour or Tory governments in Britain.

The great financial crash of 2008 grew out of the absurd new faith in unregulated markets, which, combined with cheap money, had allowed the big banks to create an enormous subprime mortgage bubble, one that would have destroyed the world economy when it burst without the massive intervention of the government.  This time, however, Ben Bernanke and Tim Geithner showed no interest in Keynesian intervention as the primary solution (although the Obama stimulus was a significant Keynesian move).  Indeed, Bernanke in particular wanted to show that Hoover and FDR had chosen the wrong remedy by spending more government money instead of just restoring liquidity in the banking system--a term which meant, in practice, buying up the trillions of dollars of worthless assets on the balance sheets of the banking system with money created by the Federal Reserve.  That, and the stimulus, did lead to a fairly successful recovery in the US, although inequality continued to increase.  No comparable recovery has taken place in Europe, where economies have grown very slowly now for well over a decade, while central bankers, just like their counterparts in the 1920s and 1930s, continue to insist on austerity.  And as a result, the established parties in nearly every European country are losing ground, particularly to right-wing populists.

Skidelsky's last chapters are chilling.  He was trained as an historian, not an economist, and he knew at an early age that bad, traditional economic policy had done a lot to destroy democracy in parts of Europe--most notably in Germany--in the 1920s and early 1930s.  Now, he argues, the insistence on neoclassical economic principles and on depriving national governments of a major economic role has crippled politics in much of the West.  Private interests and national banks, not elected officials, are the most important actors in our economic system, which they have organized for their own benefit.  The financial community in particular has taken advantage of deregulation to find many new ways to create, and hoard, enormous sums of money that benefit no one but themselves.  The most advanced western nations face critical shortages of many public goods such as infrastructure.  Millions of voters in the west now understand this and are repudiating the established politicians who have gone along with it.  Free trade and globalization are two other shibboleths of modern economic thought, and Skidelsky feels they need to be held back as well because of their disastrous economic impact in older industrial areas and their political consequences.  Many nations in past eras such as the late 19th century, he points out, prospered under protectionist regimes.  We need, he argues, new policies, and new economic thinking to go with them.  He does refer at one point to Thomas Piketty's 2014 work Capital in the Twenty-First Century and to its principal finding--borrowed, actually, from Karl Marx--that capital naturally grows more quickly under capitalism than the economy.  This still seems to me to be the biggest single reason that a Keynesian approach involving high taxes (including taxes on capital) and high spending on public goods is necessary to get us off the path that we are on now.
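Piketty's finding is commonly summarized as a simple inequality (the notation below is the standard shorthand, not Skidelsky's own formulation):

```latex
% r : average annual rate of return on capital (rents, interest, dividends, profits)
% g : annual growth rate of the economy as a whole (output and incomes)
r > g
% Whenever r exceeds g over long periods, accumulated and inherited wealth
% compounds faster than wages grow, so capital's share of national wealth
% rises and inequality widens--absent taxation or other countervailing policy.
```

That compounding dynamic is precisely what taxes on capital are meant to interrupt.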

I completely agree with Skidelsky that the hegemony of the idea of self-regulating economies has crippled political power in the West.  The problem continually gets worse, of course, because our unregulated economies channel more and more of our wealth into a very few hands, increasing both their political and economic influence.  Our generations--Skidelsky's and mine--are victims of our parents' success.  They had to focus on public goods, broadly defined, to defeat the Depression, win the Second World War, and set up the western alliance for the Cold War.  Now that those threats have faded, the government seems to lack a compelling reason to mobilize private resources.  Worst of all, deeply flawed classical theories of economics remain hegemonic because they benefit the wealthy--whose largesse universities now need more than ever.  Like me, Skidelsky has remained faithful to what he learned in his youth--but he is now 80, and few replacements seem to be emerging either from our politics or from academia, and his remarkable book has gotten very little attention.  I learned about it from a very favorable review in The New York Review of Books, but even that review, I now think, didn't do justice to its scope.  It was panned, not surprisingly, in the Wall Street Journal, and it has not been reviewed at all in the daily or Sunday New York Times or in the Washington Post.

Friday, January 24, 2020

Endless War and Political Collapse

Eighteen years after 9/11, American foreign policy in the Middle East lies in tatters.  In Afghanistan, the US government is searching for a way to end its military involvement that will not result in the immediate victory of the Taliban, the movement the US invaded the country in 2001 to overthrow.  Iraq, which the Bush Administration saw as the keystone of a new Middle East, remains wracked by civil war, and its government, for the second time, is trying to force the withdrawal of American forces from its territory.  President Trump, we have just learned, is about to issue a Middle East peace plan that will give the Israelis more concessions than they have ever dared to ask for in public.  The Arab Spring overthrew the authoritarian government of Egypt with US encouragement, only to see a democratic experiment end in a military coup just a few years later.  The US decision to help bring down the Libyan government created chaos in yet another state and triggered a destabilizing flood of refugees into Europe.  A similar attempt in Syria has failed completely.  Russian influence in the region has substantially increased, the US abandoned its Kurdish allies on the Turkish-Syrian border, and the Trump Administration foolishly abandoned the nuclear agreement with Iran, a significant step towards peaceful coexistence in the region.  Now Iran and the US stand on the brink of armed conflict.

Yet for all that, the foreign policy consequences of the Bush Administration’s decision to reshape the Middle East—which the Obama Administration in many ways adopted for itself—are no more significant, I think, than its domestic consequences.  The election of 2016 marked the collapse of the American political system.  Neither major party could field a candidate who could defeat an oft-bankrupted businessman and reality television star who obviously lacked both the intellectual and temperamental qualifications to be President.  At least 50% of the voting-age population had evidently lost all confidence in our governing elite.  One reason, undoubtedly, was the complete failure of the US government’s major enterprise in the new century, our attempt to subdue or influence large areas of the Middle East.

About 25 years ago, William Strauss and Neil Howe, two amateur historians, discovered an 80-year rhythm in American history in two books, Generations (1991) and The Fourth Turning (1997).  The great crisis of 1774-1794 had thrown off British rule, written the Articles of Confederation and the Constitution, and given the nation a new government.  About 80 years later, in 1860-65, the Civil War had restored the union, ended slavery, and changed the relationship between the federal government and the states forever.  Eighty years after that, Franklin Roosevelt once again transformed the government’s role both at home and abroad during the Depression and the Second World War.  Each of these crises had created a new order and established a new ideological and social consensus.  That consensus, in each case, had begun to erode 20-40 years after the crisis, and the erosion accelerated when the generation that had lived through it as young adults aged, lost power, and died off.  One of the many things I learned from their books is that no government wins the support of its people simply because of the design of its institutions: it must win their trust by accomplishing great things and mobilizing resources for common aims.  That is what Washington, Hamilton and Jefferson had done in the first crisis, Lincoln and Grant in the second, and Roosevelt and Marshall and many others in the third.  But this was not all.  Doing the math back in the 1990s, Strauss (who died in 2007) and Howe observed the decline of the post-1945 order that went along with the aging of the GI (or “greatest”) generation, and predicted a new great crisis that would once again reshape the United States during the first 15 years of the 21st century.  That prediction has now come true, but with disastrous consequences they did not predict.  This time our luck ran out and our leaders embarked upon a hopeless crusade.

2001 was only 72 years after the stock market crash had kicked off the last great crisis, but the new Administration of George W. Bush had big plans in both foreign and domestic policy which it eagerly moved to implement after 9/11.  Specifically, they wanted to take down hostile dictatorships in at least three countries—Iraq, Iran, and North Korea—under a new doctrine that asserted the right to move unilaterally against any regime that sought weapons that the United States did not think it should have.  That was, among other things, a risky strategy domestically.  During the preceding 60 years American military power had first defeated Nazi Germany and imperial Japan, and then held the line, on many fronts, in the Cold War.  The tragic decision to deploy hundreds of thousands of Americans in Southeast Asia—which failed to achieve its objective—had dealt the first huge blow to the postwar consensus.  The foreign policy elite, as Andrew Bacevich showed in Washington Rules, had not abandoned its belief in the utility of American force around the world, but our political and military leadership had stayed out of any major conflict during the rest of the 1970s and 1980s, allowing them to maintain their prestige.  George H. W. Bush had fought a limited war against Iraq in 1991, but he had done so only as the leader of a very broad coalition, and with the limited objective of restoring the independence of Kuwait.  After 9/11, however, the new Bush Administration cast caution to the winds, defining a new generational task of spreading democracy through the Muslim world, largely by eliminating hostile regimes.  To do so, they took advantage of an outburst of national feeling after 9/11, which, like Fort Sumter and Pearl Harbor, created a bipartisan consensus behind new wars.
At the same time, they embarked upon a crusade for energy independence, one that drew little notice at the time, but which has now succeeded—in its own terms, at least—with other huge economic, environmental and political consequences. And the decision to combine new wars with tax cuts instead of tax increases turned a budget surplus into a huge permanent deficit that has made it much harder for the government to deal with domestic problems.

Karl Rove and George W. Bush clearly hoped to create a new Republican majority based in part on successes overseas.  This they failed to do when the war in Iraq went badly, and other failures at home, culminating in the financial crisis, reduced the Republican Party to minority status once again from 2006 to 2010.  Barack Obama could probably have reversed key Bush policies both at home and abroad, but for the most part, he declined to do so.  He did eventually withdraw our troops from Iraq, but he increased them in Afghanistan.  As we have seen, he too adopted regime change as a Middle East policy in Egypt, Libya, and Syria, with similarly disastrous results.  He had to put US troops back into Iraq to cope with ISIS.  He continued the spread of the “war on terror” into more continents, and it has now become business as usual in the US military establishment.  In 2016 the Democratic Party fielded former Secretary of State Hillary Clinton, a frequent supporter of military action abroad and the architect of the Libyan disaster. 

Tens of millions of American voters have lost confidence in our political leadership for domestic reasons as well.  Both parties embraced and pushed globalization without regard to its impact on many American communities.  Both deregulated the economy in ways that have allowed inequality to increase.  Both are more responsive to special interests of one kind or another than to the needs of average Americans.  Yet the decision of both parties to pursue endless, worse than useless wars in distant lands has surely contributed a great deal to the fraying of their relationships with the electorate.  These wars have turned out to be a political luxury that the nation could not afford.

Saturday, January 18, 2020

The State of US Publishing

I now belong to an organization called the Independent Publishers of New England, and it sent an email around last week summarizing trends in publishing.  The email is about four years old, but I'm sure the trends it describes have continued.  It does not bode well for the future of serious writing.  Here it is.

Steven Piersanti, President, Berrett-Koehler Publishers
Updated September 26, 2016
1. The number of books being published every year has exploded. According to the latest Bowker Report (September 7, 2016), more than 700,000 books were self-published in the U.S. in 2015, which is an incredible increase of 375% since 2010. And the number of traditionally published books had climbed to over 300,000 by 2013 according to the latest Bowker figures (August 5, 2014). The net effect is that the number of new books published each year in the U.S. has exploded by more than 600,000 since 2007, to well over 1 million annually. At the same time, more than 13 million previously published books are still available through many sources. Unfortunately, the marketplace is not able to absorb all these books and is hugely oversaturated. [Note: self-published books topped 1,000,000 in 2018.]

2. Book industry sales are stagnant, despite the explosion of books published.

U.S. publishing industry sales peaked in 2007 and have either fallen or been flat in subsequent years, according to reports of the Association of American Publishers (AAP). Similarly, despite a 2.5% increase in 2015, U.S. bookstore sales are down 37% from their peak in 2007, according to the Census Bureau (Publishers Weekly, February 26, 2016).

3. Despite the growth of e-book sales, overall book sales are still shrinking.

After skyrocketing from 2008 to 2012, e-book sales leveled off in 2013 and have fallen more than 10% since then, according to the AAP StatShot Annual 2015. Unfortunately, the decline of print sales outpaced the growth of e-book sales, even from 2008 to 2012. The total book publishing pie is not growing—the peak sales year was in 2007—yet it is being divided among ever more hundreds of thousands of print and digital books.

4. Average book sales are shockingly small—and falling fast.
Combine the explosion of books published with the declining total sales and you get shrinking sales of each new title. According to BookScan—which tracks most bookstore, online, and other retail sales of books (including Amazon.com)—only 256 million print copies were sold in 2013 in the U.S. in all adult nonfiction categories combined (Publishers Weekly, January 1, 2016). The average U.S. nonfiction book is now selling less than 250 copies per year and less than 2,000 copies over its lifetime.

5. A book has far less than a 1% chance of being stocked in an average bookstore.

For every available bookstore shelf space, there are 100 to 1,000 or more titles competing for that shelf space. For example, the number of business titles stocked ranges from less than 100 (smaller bookstores) to up to 1,500 (superstores). Yet there are several hundred thousand business books in print that are fighting for that limited shelf space.

6. It is getting harder and harder every year to sell books.

Many book categories have become entirely saturated, with a surplus of books on every topic. It is increasingly difficult to make any book stand out. Each book is competing with more than thirteen million other books available for sale, while other media are claiming more and more of people’s time. Result: investing the same amount today to market a book as was invested a few years ago will yield a far smaller sales return today.

7. Most books today are selling only to the authors’ and publishers’ communities.

Everyone in the potential audiences for a book already knows of hundreds of interesting and useful books to read but has little time to read any. Therefore people are reading only books that their communities make important or even mandatory to read. There is no general audience for most nonfiction books, and chasing after such a mirage is usually far less effective than connecting with one’s communities.

8. Most book marketing today is done by authors, not by publishers.

Publishers have managed to stay afloat in this worsening marketplace only by shifting more and more marketing responsibility to authors, to cut costs and prop up sales. In recognition of this reality, most book proposals from experienced authors now have an extensive (usually many pages) section on the authors’ marketing platform and what the authors will do to publicize and market the books. Publishers still fulfill important roles in helping craft books to succeed and making books available in sales channels, but whether the books move in those channels depends primarily on the authors.

9. No other industry has so many new product introductions.

Every new book is a new product, needing to be acquired, developed, reworked, designed, produced, named, manufactured, packaged, priced, introduced, marketed, warehoused, and sold. Yet the average new book generates only $50,000 to $150,000 in sales, which needs to cover all of these new product introduction expenses, leaving only small amounts available for each area of expense. This more than anything limits how much publishers can invest in any one new book and in its marketing campaign.

10. The book publishing world is in a never-ending state of turmoil.

The thin margins in the industry, high complexities of the business, intense competition, churning of new technologies, and rapid growth of other media lead to constant turmoil in bookselling and publishing (such as the disappearance over the past decade of over 500 independent bookstores and the Borders bookstore chain). Translation: expect even more changes and challenges in coming months and years.

1. The game is now pass-along sales.
2. Events/immersion experiences replace traditional publicity in moving the needle.
3. Leverage the authors’ and publishers’ communities.
4. In a crowded market, brands stand out.
5. Master new digital channels for sales, marketing, and community building.
6. Build books around a big new idea.
7. Front-load the main ideas in books and keep books short. [end email.]

My comments:

    Great books, both fiction and non-fiction, are works of art.  Authors who can write them are extremely rare--and the capacity to write them has nothing to do with race, gender, or sexuality.  From the 18th until the late 20th century, I would argue, the major western nations had some mechanism--however imperfect--for identifying, publishing, and marketing really superior works, and large segments of the public would read them.  Most published books fell far short of greatness, and some great authors labored in obscurity, but the overall mechanism worked fairly well.

      That is no longer the case.  I personally think that publishing has self-destructed by giving in to the values of marketing, looking for niches, and trying to replicate previous successes.  The same thing has happened to film.  In such an environment, a truly original work has little or no chance of being taken by a major publisher or even represented by an agent--since the agents' role is now to anticipate what the publishers want.  I realized quite a few years ago that most of my favorite books would never be accepted for publication today.

      One can, as I did last year, publish one's work oneself, but it's very difficult, even with professional help, to get effective publicity for it.

      Thus to every writer with serious aspirations, I would recommend this passage from Alexander Solzhenitsyn's masterpiece, The First Circle, set in a Soviet technical institute that doubles as a prison camp.  The inmates are engineers and scientists, and one of them, Sologdin, has just presented a breakthrough design to the learned Professor Chelnov, whose political unorthodoxy landed him in Stalin's prison system a long time ago.  The project is a scrambler telephone--ordered for the personal use of Comrade Stalin himself.  Sologdin is not sure that he wants to submit his breakthrough to the institute's leadership at all.  I quote:

"How shall I put it? [asked Sologdin.]  Isn't there perhaps a certain moral ambiguity?. . .It's not as if it were a bridge, or a crane or a lathe.  Our assignment is not for something of great importance to industry--it's more like making a gadget for the boss.  And when I think of this particular 'customer' picking up the receiver we'll make for him. . . .Well, anyway, so far I've been working on it just. . .to test my strength. For myself."

He looked up.

"For myself."  Chelnov knew all about this kind of work.  As a rule it was research of the highest order.


Saturday, January 11, 2020

Rotten in Denmark

This week I watched the amazing HBO miniseries, Chernobyl.  The production had one feature that left me chuckling all the way through the first episode: it follows the American cinematic tradition that all foreigners must speak with British accents.  The performances and the whole production were so good, however, that this didn't bother me for long.  The high drama of the story begins when some Russians in a high-rise apartment actually see, and hear, what they do not know is a nuclear explosion.  It then turns to the battle to contain the disaster before something much worse happens.  Without some quick thinking and the sacrifice of some emergency workers to radiation sickness, water tanks around the reactor might have turned to steam and created a four-megaton explosion, sending radioactive fuel all over Belarus and Ukraine and making huge areas uninhabitable forever.  I was also struck in the first episode by parallels to Randy Shilts's classic about the beginning of the AIDS epidemic, And the Band Played On.  While some smart people low in their hierarchies immediately grasp at least that they are dealing with something completely new and different, higher management refuses to believe it.  In the last episodes of the series, however, as the government of the USSR tries to handle the aftermath, another analogy occurred to me--one with tragic historical implications, and terrifying contemporary ones.

The two heroes of the story are nuclear physicists who not only help bring the disaster under control, but try to understand how it happened.  One, Valery Legasov (Jared Harris, whom I recognized from Mad Men, but not from some of his other work), was a very real person; the other, Ulana Khomyuk (Emily Watson), is a composite, as the closing credits point out, introduced in part to give Legasov someone to talk to.  In the course of their investigation they realize that while operator error played a big part in the disaster, it did not tell the whole story.  A design flaw in the reactor--one of more than 20 of the same type in the USSR--led directly to the explosion at the critical moment.  This the government of the USSR, now led by Mikhail Gorbachev, refused to admit.  At the climax of the series, in the trial of the operators whom the regime wanted to blame for the whole catastrophe, Legasov, who knew that his own radiation exposure during the disaster was certain to kill him fairly soon, told the truth.  He lost his career as a result, and committed suicide a few years later.

This triggered my memory of a book I haven't looked into for more than 40 years, Alexander Solzhenitsyn's August 1914, the first volume in his huge, multi-volume historical novel of the Russian Revolution, The Red Wheel.  (It's the only volume I've read.)  The Legasov of August 1914 is fictional: Colonel Georgi Mihailovich Vorotynsev, who is suddenly attached to General Samsonov, the commander of the Second Army, now advancing into East Prussia in the first month of the war.  Vorotynsev, Solzhenitsyn explains, belonged to a group of younger officers of relatively modest origins who had witnessed firsthand the disaster of the Russo-Japanese War in 1904-5, which included both a humiliating military defeat and a revolution that had nearly toppled the throne.  They had tried to learn from this experience and from the example of Germany, their foe in this war.  They knew that another defeat might mean the end of the empire, although they did not know what would follow it.  As the book continues, Vorotynsev watches in helpless fury as the higher-ups botch the campaign, eventually bringing it to a disastrous conclusion by ordering a completely unnecessary retreat.  Then, in the wake of catastrophe, he gets the opportunity to attend a postmortem conference that includes the very highest authorities of the Russian Army, and he is determined to discuss the broader failures within that army that led to the disaster, rather than simply blame Samsonov, who, like Legasov about 75 years later, has committed suicide.  I have never forgotten the riveting argument he has with one of his best friends, who counsels him to keep his mouth shut, since speaking out will only ruin his own career while remaining silent might allow him to do some good.  What made the scene so riveting to me as a graduate student back in 1972 was the knowledge that the argument was meaningless in the context of the history to come.
The whole system, we knew then, had only two and a half years to live, and nothing Vorotynsev could do could change that.  As it was, he did speak out, only to be excused from the conference when he became too frank. 

And the same was true, of course, of the USSR in 1986, when Legasov, according to the miniseries, told the authorities at the trial that the reactor explosion had been so disastrous because the authorities had refused to find the necessary resources for containment towers and other safety features that were standard in the West.  The authorities' refusal to face facts, to admit error, and to listen to lower-level officials who had not given up their power to think had led the whole regime to the brink of collapse, and it went over that brink just a few years later.  Gorbachev, indeed, once remarked that Chernobyl was the real cause of the collapse of the Soviet Union.  I would call it the last attack of a progressive disease that was bound to result in death, sooner or later, whatever the specific symptoms of the final crisis.  And I cannot help noting once again that the collapses of 1917 and 1989 were separated by 72 years--not exactly the 80 anticipated by Strauss and Howe, but close enough, as one might say, for good meta-history.

We do not live in an empire or a party dictatorship.  In my opinion we have too little central authority over our institutions today, not too much.  Yet I could not watch Chernobyl and review August 1914 without asking myself whether our leading institutions are not similarly corrupt, and whether the election three years ago of Donald Trump signals a more general collapse that will have undreamed-of consequences.

Certainly this does seem true in the institutions I know best, our institutions of higher learning.  They operate almost entirely for their own benefit now, run by bloated bureaucracies.  The few men and women they still contain who really care about their educational mission keep to themselves and never reach positions of power and influence.  These institutions are dominated by an ideology that has very little resonance among the public at large, and which actively undermines faith in our institutions.  The public knows that they cost much too much and has some sense of their ideological nature, but it does not realize how bad the education they provide has gotten.  They provide a gateway to our economic elite.  They no longer create a real intellectual one.

Much of the corporate world has also lost any sense of responsibility.  Wall Street had no real second thoughts after a speculative frenzy brought the world economy to its knees in 2008, and successfully fought off any attempts at major reform.  The food industry inflates its profits by addicting us to fat, sugar and salt.  Purdue Pharma, we have learned, knowingly addicted millions of Americans to its new legal narcotics, resulting in the deaths of tens of thousands and the destruction of their families--and no one has ever been criminally charged for it.  A recent New York Times story reported that most of the pharmaceutical firms that have been trying to develop desperately needed new antibiotics to fight superbugs have shut their doors, partly because there isn't enough profit in drugs that cure a fatal illness within a week or so.  And today's New York Times describes horrifying internal communications within Boeing, whose refusal to spend the money on essential simulator training for pilots of the new 737 Max reminds me of Purdue's refusal to admit that its drug was addictive.  And perhaps our most powerful institutions--our energy companies--have decided to do everything they can to avoid any attempt to deal with global warming.  Meanwhile, our government remains paralyzed by partisanship, unable to deal with any of the problems I have listed here, or many others as well.

The cycle of greatness, decline and renewal has of course dominated history since the beginning of recorded time.  Exactly how far we shall fall, what the consequences might be, and what we can do now, I do not know.  Yet I could not watch this miniseries and review this book without raising these questions.  Legasov and Vorotynsev rightly argued that only a proper diagnosis could lead to a real cure.  In that spirit I have written this, perhaps the most important post I have made here in the last 15 years.  I hope it will be widely shared.

Thursday, January 02, 2020

Generational voting, a history

A lot of younger people (that is, below 50!) on my favorite Facebook page are looking forward to this year's election because they are counting on younger voters to defeat Donald Trump.  This reminds me of the late 1960s and early 1970s, when left wing Boomers were capturing the nation's admiration (and lots of air time on the evening news), and 18-year-olds had just been given the vote.  The youth got their candidate in 1972--George McGovern, a public servant whom I always admired very much--but he lost in a landslide.  A reader has now identified exit polling data for 1972 in a New York Times chart that shows 18-29 year olds--a perfect match for the Boom--favoring Nixon 54-46.  Conservative Boomers, while quieter, were more numerous.  I started poking around in exit polling data over the last twenty years, and then one thing led to another, and before I knew it I had a whole spreadsheet of age breakdowns for every presidential election since 1976.  The results are quite striking.

Let's begin with some ground rules.  Here are the birth dates I use for the various living generations--mostly from Strauss and Howe, but asterisks show where I have made changes.

GIs:  born 1903*-24
Silent generation: b. 1925-42
Boomers: 1943-60
Gen X: 1961-81
Millennials: 1982-96*
Gen Z: 1997-(?)

The 1997 start date for Gen Z is becoming increasingly popular.

Now unfortunately, the age brackets used by the Roper organization and CNN--my two exit poll sources--rarely match any generation exactly, but one or the other usually comes close enough to get a good idea of who voted for whom.  Rather than go from election to election, I'm going to present generational voting histories, which will show a pretty complete evolution for the votes of Boomers, Xers, and, in the three elections in which they have voted in significant numbers, Millennials.  We shall find that there is a common thread among them all.

The Silent generation in 1976, the first year of my study, ranged in age from 34 to 51.  They were on the conservative side of the nation, and voted for Gerald Ford over Jimmy Carter by 52-48--the same percentage as the GIs (then 52 to 73), and the even older survivors of the Lost generation.  In 1980, aged 38 to 55, they shifted further Republican--as did the whole nation, obviously--and preferred Ronald Reagan to Carter by 55-38, with another 7% for their very own John Anderson.  They remained more Republican than the country as a whole in 1984, giving Reagan about 61% of their votes (he won 59% overall) in the most one-sided election in our study, over Walter Mondale, a Silent himself.  Faced with the choice between GI George H. W. Bush and Silent Michael Dukakis in 1988, they gave Bush a full 58% of their votes, compared to 53% for the nation as a whole, and just 51% for Bush's own GI generation.

Like the nation as a whole, the Silent generation fragmented and swung towards the Democrats in 1992, giving Boomer Bill Clinton 43% of their votes, Bush just 38%, and H. Ross Perot 18%, essentially matching Perot's nationwide total.  By this time their ages ranged from 50 to 67.  Four years later, however, they gave Bob Dole 45% of their votes, trailing Clinton by just 2% (and with 8% for Perot), the most pro-Dole of any voting generation at that time.  In 2000, when the youngest GIs were only 76, the data lumps everyone 60 or over together.  That age group favored Al Gore over George W. Bush by 51-48--Gore's best generational showing.  Four years later, when Bush increased his total from 48% of the popular vote to 51%, the over-60 voters were his strongest age group, giving him 54% to John Kerry's 46%.  The youngest Silents were 66 in 2008, 70 in 2012 and 74 in 2016, and the older vote (increasingly composed of them) went solidly Republican all three times, 53-45 for Silent John McCain in 2008, 56-44 for Mitt Romney in 2012, and 52-45 for Donald Trump in 2016.  The Silent generation, in short, has been Republican for most of its adult life, as far as we can see, with the exception of the Clinton years.

We come now to my own Boom generation, whose young Vietnam-era activists bequeathed a reputation for left wing radicalism that endures to this day.  That reputation does not reflect reality.

Although, as we have seen, they favored Nixon in 1972, Boomers did elect Jimmy Carter President in 1976.  They ranged in age from 16 to 33 that year, and the 18-29 age group voted for Carter by 56-44--the only generation to give him a majority, in a year in which he won a bare 50% of the total.  Four years later that same age group remained the most Democratic in the nation, giving both Carter and Reagan 44% of their votes and John Anderson 11%, but it was clearly swinging to the right.  The Boomer vote essentially matched the national 59-41 margin for Reagan in 1984, by which time Boomers were known as yuppies.  That was the start of something.  In 1988 Boomers ranged in age from 28 to 45, and the 30-49 age group voted for George H. W. Bush over Michael Dukakis, 54-46, very close to the nationwide total.  Four years later, the same 30-49 age group--now composed almost entirely of Boomers--gave their own Bill Clinton just 41% of their votes, with 38% for Bush and a whopping 22% for H. Ross Perot.  In 1996 the Boomers appear to have cut their Perot vote from 22% to 9%, dividing the rest between Clinton (50%) and Bob Dole (41%).  That election, however--now 24 years in the past--marked the end of the Boomers as an asset to the Democratic Party.

The Boom generation, like the country as a whole, split its votes almost evenly between Al Gore and George W. Bush in 2000.  In 2004 the 45-59 age bracket--an almost perfect match for Boomers that year (44-61)--favored Bush over John Kerry by 51-48, his exact overall margin.  In 2008 Boomers appear to have divided evenly between Barack Obama and John McCain; in 2012 they preferred Mitt Romney to Obama by about 51-47; and in 2016 they appear to have preferred Trump by about 52-44.  The Boom generation is now, of course, the principal beneficiary of our nation's growing inequality, and that may be reflected in its voting.

Gen X, now middle-aged (38 to 58), has been largely invisible for most of its life, and all the attention flowing towards Millennials and Gen Z is making it even more so.  That is ironic because it has had a very important voting impact in a number of elections.  Gen X's affection for Ronald Reagan is well known, and they showed it in 1984, the first year that they voted in significant numbers, giving him 61% of their votes.  They matched the country's Republican total in 1988 with 53% for Bush, but four years later, they gave Bush just 34%, with 22% for Ross Perot and 44% for Bill Clinton. Bob Dole evidently struck them as too old in 1996, when they themselves ranged from 25 to 45, and Clinton's 55% among Gen X was easily his best showing among any generation that year.  George W. Bush won all the Perot Xer votes back in 2000, however, and led Gore by about 49-48 that year, in one of the closest elections in US history.  They appear to have backed Bush solidly for re-election in 2004, 54-45.  But then, now entering midlife, they moved left.

Xers ranged from 27 to 47 in 2008--and voted for their own Barack Obama, 52-45, while Boomers split evenly.  They stuck with Obama over Romney, 52-45, in 2012.  In 2016, however, they flipped.  CNN data for ages 40-49 shows them going for Trump 49-46 (Xers were 35-55 in 2016).  And CNN found that the 50-64 age group--about a third of whom were Xers--went 52-44 for Trump, almost exactly the same as the over-65 Boomers and remaining Silents.

Millennials now range in age from 23 to 37.  They have been far more Democratic than Boomers and Xers were at comparable stages of their lives.  In 2004, when their voters ranged from 18 to 22, the 18-29 age group went 54-45 for Kerry--the only generation to do so.  In 2008 a much larger Millennial group went 66-32 for Barack Obama, the biggest generational sweep, I believe, in this whole 40-year period.  In 2012 they went 60-37 for Obama, but in 2016 their vote seems to have fragmented somewhat, and they went about 55-36 for Clinton against Trump, with 9% for minor parties.  No new Millennials will be eligible to vote this year, which will mark the first substantial entry into politics of Gen Z voters.  The Millennials have already swung slightly to the right.  Given the severe rightward swing of Gen X, I am skeptical that the Millennials will reshape our voting patterns and our politics drastically this year.  The Boomers, Xers and Millennials all began their voting lives with substantial Democratic majorities.  The Boomers are now majority Republican, and the Xers have been trending that way.  The Xers are now taking over most of our institutions, and it looks as though they can swing our politics either way as well.  But there will be no Xers on the presidential ballot this year.