Five Ways to Raise MSU’s Profile by 2025

We have three years. In 2025, Missouri State University will celebrate twenty years since our name change. We’ve bolstered attendance, built and renovated campus-wide, and grown more competitive in sports, resulting in a fast-climbing reputation and wider brand awareness.

Let’s keep it going. Here are five strategies to go from fast-climbing to skyrocketing before the historic celebration.

1) Sponsor “Matt & Abby” on social media. Matt and Abby Howard, MSU grads, have over 3 million followers on TikTok, over 1 million subscribers on YouTube, and nearly 800,000 followers on Instagram. Their fun videos occasionally provide free advertising, as they wear MO State shirts and hoodies, but a sponsorship to increase and focus this (imagine them doing BearWear Fridays) would be beneficial. Their views are now collectively in the billions.

2) Offer Terrell Owens a role at a football game. Legendary NFL receiver Terrell Owens (who has a sizable social media presence of his own) appeared on the MSU sideline during the 2021 season, as his son Terique is a Bears wide receiver. Invite Terrell Owens to join the cheer squad and lead the chants at a game. Or ask him to speak at halftime. Advertise it widely to boost attendance and get the story picked up by the national press.

3) Convince John Goodman to get on social media. Beloved actor and MSU alumnus John Goodman is now involved in university fundraising and related media — that’s huge. (Say, get him a role at a game, too.) The only thing that could make this better is if he would get on socials. Goodman would have millions of followers in a day, and with that comes exposure for MO State. Who knows what it would take to convince him after all these years avoiding it, but someone at this university has his ear…and should try.

4) Keep going after that Mizzou game. Mizzou men’s basketball coach Cuonzo Martin, as the former coach at MSU, is our best bet in the foreseeable future for the first MSU-Mizzou showdown since the Bears’ 1998 victory. In fact, a deal was in the works in summer 2020, but quickly fell apart. Martin’s contract ends in 2024 — if it is not renewed, scheduling a game will become much more difficult. Today MO State plays Mizzou in nearly all sports, even if football is irregular (last in 2017, next in 2033). We should keep fighting for a men’s basketball game. Then, of course, win it.

5) Build and beautify. From the John Goodman Amphitheatre to the renovation of Temple Hall, the campus is growing, dazzling. This should continue, for instance with the proposed facility on the south side of Plaster Stadium. Improving football facilities ups the odds of a future invite to an FBS conference. And one cannot forget more trees, possibly the most inexpensive way to radically beautify a university. Filling campus with more greenery, with more new and restored buildings, will position Missouri State as a destination campus for the next 20 years and beyond.

This article first appeared on Yahoo! and the Springfield News-Leader.

For more from the author, subscribe and follow or read his books.

Slowly Abandoning Online Communication and Texting

I grow increasingly suspicious of speaking to others digitally, at least in written form — comments, DMs, texts. It has in fact been a year and a half since I last replied to a comment on socials, and in that time I have attempted to reduce texting and similar private exchanges. Imagine that, a writer who doesn’t believe in written communication.

The motives for these life changes were largely outlined in Designing a New Social Media Platform:

As everyone has likely noticed, we don’t speak to each other online the way we do in person. We’re generally nastier due to the Online Disinhibition Effect; the normal inhibitions, social cues, and consequences that keep us civil and empathetic in person largely don’t exist. We don’t see each other the same way, because we cannot see each other. Studies show that, compared to verbal communication, we tend to denigrate and dehumanize other people when reading their written disagreements, seeing them as less capable of feeling and reason, which can increase political polarization. We can’t hear tone or see facial expressions, the eyes most important of all, creating fertile ground for both unkindness and misunderstandings. In public discussions, we also tend to put on a show for spectators, perhaps sacrificing kindness for a dunk that will garner likes. So let’s get rid of all that, and force people to talk face-to-face.

Circling back to these points is important because they obviously apply not only to social media but to texting, email, dating apps, and many other features of modern civilization. We all know how easy it is for a light disagreement to somehow turn into something terribly ugly when texting a friend, partner, or family member. It happens so fast we’re bewildered, or angered that things spiraled out of control, that we were so inexplicably unpleasant. It needn’t be this way. Some modes of communication are difficult to curb — if your job involves email, for instance — but it’s helpful to seek balance. You don’t have to forsake a tool completely if you don’t want to, just use it differently, adopt principles. A good rule: at the first hint of disagreement or conflict, stop. (Sometimes we even know it’s coming, and can act preemptively.) Stop texting or emailing about whatever it is. Ask to FaceTime or Zoom, or meet in person, or call (at least you can hear them). Look into their eyes, listen to their voice. There are things said via text and on socials that would simply never be said in person or using more intimate technologies.

Progress will be different for each person. Some would rather talk than text anyway, and excising the latter from their lives would be simple. Others may actually be able to email less and cover more during meetings. Some enviable souls have detached themselves from social media altogether — which I hope to do at some point, but have found a balance or middle ground for now, since it’s important to me to share my writings, change the way people think, draw attention to political news and actions, and keep track of what local organizations and activists are up to (plus, my job requires social media use).

Changing these behaviors is key to protecting and saving human relationships, and maybe even society itself. First, if there’s an obvious way to avoid firestorms with friends and loved ones, keeping our bonds strong rather than frayed, we should take it. Second, the contribution of social media to political polarization, hatred, and misinformation since 2005 (maybe of the internet since the 1990s) is immeasurable, with tangible impacts on violence and threats to democracy. Society tearing itself apart due at least partially to this new technology sounds less hyperbolic by the day.

And it’s troubling to think that I, with all good intentions, am still contributing to that by posting, online advocacy perhaps having a negative impact on the world alongside an important positive one. What difference does it really make, after all, to share an opinion but not speak to anyone about it? Wouldn’t a social media platform where everyone shared their opinions but did not converse with others, ignored the comments, be just as harmful to society as a platform where we posted opinions and also went to war in the comments section? Perhaps so. The difference may be negligible. But in a year and a half, I have not engaged in any online debate or squabble, avoiding heated emotions toward individuals and bringing about a degree of personal peace (I have instead had political discussions in person, where it’s all more pleasant and productive). If I could advocate for progressivism or secularism while avoiding heightened emotions toward individual pious conservatives, whether friends or random strangers, they could do the same, posting and opining while sidestepping heightened emotions toward me. This doesn’t solve the divisiveness of social media — the awful beliefs and posts from the other side (whichever that is for you) are still there. Plenty of harmful aspects still exist beside the positive ones that keep us on. But perhaps it lowers the temperature a little.


Free Speech on Campus Under Socialism

Socialism seeks to make power social, to enrich the lives of ordinary people with democracy and ownership. Just as the workers should own their workplaces and citizens should have decision-making power over law and policy, universities under socialism would operate a bit differently. The states will not own public universities, nor individuals and investors private ones. Such institutions will be owned and managed by the professors, groundskeepers, and other workers. There is a compelling case for at least some student control as well, especially when it comes to free speech controversies.

Broadening student power in university decision-making more closely resembles a consumer cooperative than a worker cooperative, described above and analyzed elsewhere. A consumer cooperative is owned and controlled by those who use it, patrons, rather than workers. This writer’s vision of socialism, laid bare in articles and books, has always centered the worker, and it is not a fully comfortable thought to allow students, merely passing through a college for two, four, or six years and greatly outnumbering the workers, free rein over policy. There is a disconnect here between workers and majority rule, quite unlike in worker cooperatives (I have always been a bit suspicious of consumer co-ops for this reason). However, it is likely that a system of checks and balances (so important in a socialist direct democracy) could be devised. Giving students more power over their place of higher learning is a positive thing (think of the crucial student movements against college investments in fossil fuels today), as this sacred place is for them, but this would have to be balanced with the power of the faculty and staff, who like any other workers deserve control over their workplace. Such checks and balances, or specialized areas of authority granted to students, may be a sensible compromise. To an extent this already exists, with college students voting to raise their fees to fund desired facilities, and so on.

One specialized area could be free speech policy. Socialism may be a delightful solution to ideological clashes and crises. I have written on the free speech battles on campuses, such as in Woke Cancel Culture Through the Lens of Reason. There I opined only in the context of modern society (“Here’s what I think we should do while stuck in the capitalist system”). The remarks in full read:

One hardly envies the position college administrators find themselves in, pulled between the idea that a true place of learning should include diverse and dissenting opinions, the desire to punish and prevent hate speech or awful behaviors, the interest in responding to student demands, and the knowledge that the loudest, best organized demands are at times themselves minority opinions, not representative.

Private universities are like private businesses, in that there’s no real argument against them cancelling as they please.

But public universities, owned by the states, have a special responsibility to protect a wide range of opinion, from faculty, students, guest speakers, and more, as I’ve written elsewhere. As much as this writer loves seeing the power of student organizing and protest, and the capitulation to that power by decision-makers at the top, public colleges should take a harder line in many cases to defend views or actions that are deemed offensive, in order to keep these spaces open to ideological diversity and not drive away students who could very much benefit from being in an environment with people of different classes, ethnicities, genders, sexual orientations, religions, and politics. Similar to the above, that is a sensible general principle. There will of course be circumstances where words and deeds should be crushed, cancellation swift and terrible. Where that line is, again, is a matter of disagreement. But the principle is simply that public colleges should save firings, censorship, cancellation, suspension, and expulsion for more extreme cases than is current practice. The same for other public entities and public workplaces. Such spaces are linked to the government, which actually does bring the First Amendment and other free speech rights into the conversation, and therefore there exists a special onus to allow broader ranges of views.

But under socialism, the conversation changes. Imagine for a moment that college worker-owners gave students the power to determine the fate of free speech controversies, student bodies voting on whether to allow a speaker, fire a professor, kick out a student, and so forth. This doesn’t solve every dilemma and complexity involved in such decisions, but it has a couple of benefits. First, you don’t have a small power body making decisions for everyone else, an administration enraging one faction (“They caved to the woke Leftist mob”; “They’re tolerating dangerous bigots”). Second, the decision has majority support from the student body; the power of the extremes, the perhaps non-representative voices, is diminished. Two forms of minority rule are done away with (this is what socialism aims to do, after all), and the decision has more legitimacy, with inherent popular support. More conservative student bases will make different decisions than more liberal ones, but that is comparable to today’s different-leaning administrations in thousands of colleges across the United States.

Unlike in the excerpt above, which refers to the current societal setup, private and public colleges alike will operate like this — these classifications in fact lose their meanings, as both are owned by the workers and become the same kind of entity. A university’s relationship to free speech laws, which aren’t going anywhere in a socialist society, then needs to be determined. Divorced from ownership by states, institutions of higher learning could fall outside free speech laws, like other cooperatives (where private employers and colleges largely fall today). But, to better defend diverse views, worthwhile interactions, and a deeper education, let’s envision a socialist nation that applies First Amendment protections to all universities (whether that preserved onus should be extended to all cooperatives can be debated another time).

When a university fires a professor today for some controversial comment, it might land in legal trouble, sued for violating First Amendment rights and perhaps forced to pay damages. Legal protection of rights is a given in a decent society. Under socialism, can you sue a student body (or former student body, as these things take a while)? Or just those who voted to kick you out? Surely not, as ballots are secret and you cannot punish those who were for you alongside those against you. Instead, would this important check still be directed against the university? This would place worker-owners in a terrible position: how can decision-making over free speech cases be given to the student body if it’s the worker-owners who will face the lawsuits later? One mustn’t punish the innocent and let the guilty walk. These issues may speak to the importance of worker-owners reserving full power, minority power, to decide free speech cases on campus. Yet if punishment in the future moves beyond money, there may be hope yet for the idea of student power. It may not be fair for a university to pay damages because of what a student body ruled, but worker-owners could perhaps stomach a court-ordered public apology on behalf of student voters, mandated reinstatement of a professor or student or speaker, etc.

With free speech battles, someone has to make the final call. Will X be tolerated? As socialism is built, as punishment changes, it may be worth asking: “Why not the students?”


The Future of American Politics

The following are five predictions about the future of U.S. politics. Some are short-term, others long-term; some are possible, others probable.

One-term presidents. In a time of extreme political polarization and razor-thin electoral victories, we may have to get used to the White House changing hands every four years rather than eight. In 2016, Trump won Michigan by 13,000 votes, Wisconsin by 27,000, Pennsylvania by 68,000, Arizona by 91,000. Biden won those same states in 2020 by 154,000, 21,000, 82,000, and 10,000, respectively. Other states were close as well, such as Biden’s +13,000 in Georgia or Clinton’s +2,700 in New Hampshire. Competitive races are nothing new in election history, and 13 presidents (including Trump) have failed to reach a second term directly after their first, but Trump’s defeat was the first incumbent loss in nearly 30 years. The bitter divisions and conspiratorial hysteria of modern times may make swing state races closer than ever, resulting in fewer two-term presidents — at least consecutive ones — in the near-term.

Mail privacy under rightwing attack. When abortion was illegal in the United States, there were many abortions. If Roe falls and states outlaw the procedure, or if the Supreme Court continues to allow restrictions that essentially do the same, we will again see many illegal terminations — only they will be far safer and easier this time, with abortion pills via mail. Even if your state bans the purchase, sale, or use of the pill, mail forwarding services or help from out-of-town friends (shipping the pills to a pro-choice state and then having them mailed to you) will easily get the pills to your home. Is mail privacy a future rightwing target? The U.S. has a history of banning the mailing of contraceptives, information on abortion, pornography, lottery tickets, and more, enforced through surveillance, requiring the Supreme Court to declare our mail cannot be opened without a warrant. It is possible the Right will attempt to categorize abortion pills as items illegal to ship and even push for the return of warrantless searches.

Further demagoguery, authoritarianism, and lunacy. Trump’s success is already inspiring others, some worse than he is, to run for elected office. His party looks the other way or enthusiastically embraces his deceitful attempts to overturn fair elections because it is most interested in power, reason and democracy be damned. Same for Trump’s demagoguery, his other lies and authoritarian tendencies, his extreme policies, his awful personal behavior — his base loves it all and it’s all terribly useful to the GOP. While Trump’s loss at the polls in 2020 may cause some to second-guess the wisdom of supporting such a lunatic, at least those not among the 40% of citizens who still believe the election was stolen, at present it seems the conservative base and the Republican Party are largely ready for Round 2. What the people want and the party tolerates they will get; what’s favored and encouraged will be perpetuated and created anew. It’s now difficult to imagine a normal human being, a classic Republican, a decent person like Mitt Romney, Liz Cheney, Jon Huntsman, John Kasich, or even Marco Rubio beating an extremist fool at the primary polls. The madness will likely continue for some time, both with Trump and others who come later, with only temporary respites of normalcy between monsters. Meanwhile, weaknesses in the political and legal system Trump exploited will no doubt remain unfixed for an exceptionally long time.

Republicans fight for their lives / A downward spiral against democracy. In a perverse sort of way, Republican cheating may be a good sign. Gerrymandering, voter suppression in all its forms, support for overturning a fair election, desperation to hold on to the Electoral College, and ignoring ballot initiatives passed by voters are the acts and sentiments of the fearful, those who no longer believe they can win honestly. And given the demographic changes already occurring in the U.S. that will transform the nation in the next 50-60 years (see next section), they’re increasingly correct. Republicans have an ever-growing incentive to cheat. Unfortunately, this means the Democrats do as well. Democrats may be better at putting democracy and fairness ahead of power interests, but this wall already has severe cracks, and one wonders how long it will hold. For example, the GOP refused to allow Obama to place a justice on the Supreme Court, and many Democrats dreamed of doing the same to Trump, plus expanding the Court during the Biden era. Democrats of course also gerrymander U.S. House and state legislature districts to their own advantage (the Princeton Gerrymandering Project is a good resource), even if Republican gerrymandering is worse — four times worse — therefore reaping bigger advantages. It’s sometimes challenging to parse out which Democratic moves are reactions to Republican tactics and which they would do anyway to protect their seats, but it’s obvious that any step away from impartiality and true democracy encourages the other party to do the same, creating a downward anti-democratic spiral, a race to the bottom.

(One argument might be addressed before moving on. Democrats generally make it easier for people to vote and support the elimination of the Electoral College, though again liberals are not angels and there are exceptions to both these statements. Aren’t those dirty tactics that serve their interests? As I wrote in The Enduring Stupidity of the Electoral College, which shows that this old anti-democratic system is unfair to each individual voter, “True, the popular vote may serve Democratic interests. Fairness serves Democratic interests. But, unlike unfairness, which Republicans seek to preserve, fairness is what’s right. Giving the candidate with the most votes the presidency is what’s right.” Same for not making it difficult for people who usually vote the “wrong” way to cast their ballots! You do what is right and fair, regardless of who it helps.)

Democratic dominance. In the long-term, Democrats will become the dominant party through demographics alone. Voters under 30 favored the Democratic presidential candidate by large margins in 2004, 2008, 2012, 2016, and 2020 — voters under 40 also went blue by a comfortable margin. Given that individual political views mostly remain stable over time (the idea that most or even many young people will grow more conservative as they age is unsupported by research), in 50 or 60 years this will be a rather different country. Today we still have voters (and politicians) in their 80s and 90s who were segregationists during Jim Crow. In five or six decades, those over 40 today (who lean Republican) will be gone, leaving a bloc of older voters who have leaned blue their entire lives, plus a new generation of younger and middle-aged voters likely more liberal than any of us today. This is on top of an increasingly diverse country, with people of color likely the majority in the 2040s — with the white population already declining by total numbers and as a share of the overall population, Republican strength will weaken further (the majority of whites have long voted Republican; the majority of people of color vote blue). A final point: the percentage of Americans who identify as liberal is steadily increasing, as opposed to those who identify as conservative, and Democrats have already won the popular vote in seven of the last eight presidential elections. Republican life rafts such as the Electoral College (whose swing states will experience these same changes) and other anti-democratic practices will grow hopelessly ineffective under the crushing weight of demographic metamorphosis. Assuming our democracy survives, the GOP will be forced to moderate to have a chance at competing.


Actually, “Seeing Is Believing”

Don’t try to find “seeing isn’t believing, believing is seeing” in the Bible, for though Christians at times use these precise words to encourage devotion, they come from an elf in the 1994 film The Santa Clause, an instructive fact. It is a biblical theme, however, with Christ telling the doubting Thomas, “Because you have seen me, you have believed; blessed are those who have not seen and yet have believed” (John 20:29), 2 Corinthians 5:7 proclaiming “We walk by faith, not by sight,” and more.

The theme falls under the first of two contradictory definitions of faith used by the religious. Faith 1 is essentially “I cannot prove this, I don’t have evidence for it, but I believe nonetheless.” Many believers profess this with pride — that’s true faith, pure faith, believing what cannot be verified. This is just the abandonment of critical thinking, turning off the lights. Other believers see the problem with it. A belief can’t be justified under Faith 1. Without proof, evidence, and reason, they realize, their faith is on the baseless, ridiculous level of every other wild human idea — believing in Zeus without verification, Allah without verification, Santa without verification. Faith 2 is the corrective: “I believe because of this evidence, let me show you.” The “evidence,” “proof,” and “logic” then offered are terrible and fall apart at once, but that has been discussed elsewhere. “Seeing isn’t believing, believing is seeing” aligns with the first definition, while Faith 2 would more agree with the title of this article (though room is always left for revelation as well).

I was once asked what would make me believe in God again, and I think about this from time to time. I attempt to stay both intellectually fair and deeply curious. Being a six on the Dawkins scale, I have long maintained that deities remain in the realm of the possible, in the same way our being in a computer simulation is possible, yet given the lack of evidence there is little reason to take it seriously at this time, as with a simulation. For me, the last, singular reason to wonder whether God or gods are real is the fact existence exists — but supposing higher powers were responsible for existence brings obvious problems of its own that are so large they preclude religious belief. Grounds for believing in God again would have to come from elsewhere.

“Believing is seeing” won’t do. It’s just a hearty cry for confirmation bias and self-delusion (plus, as a former Christian it has already been tried). Feeling God working in your life, hearing his whispers, the tugs on your heart, dreams and visions, your answered prayers, miracles…these things, experienced by followers of all religions and insane cults, even by myself long ago, could easily be imagined fictions, no matter how much you “know” they’re not, no matter how amazing the coincidences, dramatic the life changes, vivid the dreams, unexplainable the events (of current experience anyway; see below).

In contrast, “seeing is believing” is rational, but one must be careful here, too. It’s a trillion times more sensible to withhold belief in extraordinary claims until you see extraordinary evidence than to believe wild things before verifying, maybe just hoping some proof, revelation, comes along later. The latter is just gullibility, taking off the thinking cap, believing in Allah, Jesus, or Santa because someone told you to. However, for me, “seeing is believing” can’t just mean believing the dreadful “evidence” of apologetics referenced above, nor could it mean the god of a religion foreign to me appearing in a vision, confounding or suggestive coincidences and “miracles,” or other personal experiences that do not in any way require supernatural explanations. That’s not adequate seeing.

It would have to be a personal experience of greater magnitude. Experiencing the events of Revelation might do it — as interpreted by Tim LaHaye and Jerry B. Jenkins in their popular (and enjoyable, peaking with Assassins) book series of the late 90s and early 2000s, billions of Christians vanish, the seas turn to blood, people survive a nuclear bombing unscathed, Jesus and an army of angels arrive on the clouds, and so forth. These kinds of personal experiences would seem less likely to be delusions (though they still could be, if one is living in a simulation, insane, etc.), and would be a better basis for faith than things that have obvious or possible natural explanations, especially if they were accurately prophesied. In other words, at some stage personal experience does become a rational basis for belief; human beings simply tend to adopt a threshold that is outrageously low, far short of necessitating supernatural involvement. (It’s remarkable where life takes you: from “I’m glad I won’t have to go through the tribulation, as a believer” to “The tribulation would be reasonable grounds to become a believer again.”) Of course, I suspect this is all mythological and have no worry it will occur. How concerned is the Christian over Kalki punishing evildoers before the world expires and restarts (Hinduism) or the Spider Woman covering the land with her webs before the end (Hopi)? I will convert to one of these faiths if their apocalyptic prophecies come to pass.

The reaction of the pious is to say, “But others saw huge signs like that, Jesus walked on water and rose from the dead and it was all prophesied and –” No. That’s the challenge of religion. Stories of what other people saw can easily be made up, often to match prophecy. Even a loved one relating a tale could have been tricked, hallucinating, delusional, lying. You can only trust the experiences you have, and even those you can’t fully trust! This is because you could be suffering from something similar — human senses and perceptions are known to miserably fail and mislead. The only (possible) solution is to go big. Really big. Years of predicted, apocalyptic disasters that you personally survive. You still might not be seeing clearly. But belief in a faith might finally be justified on rational, evidentiary grounds, in alignment with your perceptions. “Seeing is believing,” with proper parameters.

Anything short of this is merely “believing is seeing” — elf babble.


History, Theory, and Ethics

The writing of history and the theories that guide it, argues historian Lynn Hunt in Writing History in the Global Era, urgently need “reinvigoration.”[1] The old meta-narratives used to explain historical change looked progressively weaker and fell under heavier criticism as the twentieth century reached its conclusion and gave way to the twenty-first.[2] Globalization, Hunt writes, can serve as a new paradigm. Her work offers a valuable overview of historical theories and develops an important new one, but this essay will argue that Hunt implicitly undervalues older paradigms and fails to offer a comprehensive purpose for history under her theory. It then proposes some guardrails for history’s continuing development, not offering a new paradigm but rather a framing that gives older theories their due and a purpose that can power many different theories going forward.

We begin by reviewing Hunt’s main ideas. Hunt argues for “bottom-up” globalization as a meta-narrative for historical study, and contributes to this paradigm by offering a rationale for causality and change that places the concepts of “self” and “society” at its center. One of the most important points that Writing History in the Global Era makes is that globalization has varying meanings, with top-down and bottom-up definitions. Top-down globalization is “a process that transforms every part of the globe, creating a world system,” whereas the bottom-up view is myriad processes wherein “diverse places become connected and interdependent.”[3] In other words, while globalization is often considered synonymous with Europe’s encroachment on the rest of the world, from a broader and, as Hunt sees it, better perspective, globalization would in fact be exemplified by increased interactions and interdependence between India and China, for example.[4] The exploration and subjugation of the Americas was globalization, but so was the spread of Islam from the Middle East across North Africa to Spain. It is not simply the spread of more advanced technology or capitalism or what is considered to be, in eurocentrism, the most enlightened culture and value system, either: it is a reciprocal, “two-way relationship” that can be found anywhere as human populations move, meet, and start to rely on each other, through trade for example.[5] Hunt seeks to overcome two problems here. First, the eurocentric top-down approach and its “defects”; second, the lack of a “coherent alternative,” which her work seeks to provide.[6]

Hunt rightly and persuasively makes the case for a bottom-up perspective of globalization as opposed to top-down, then turns to the question of why this paradigm has explanatory power. What is it about bottom-up globalization, the increasing interactions and interdependence of human beings, that brings about historical change? Here Hunt situates her historical lens alongside, and as successor to, previous ones explored early in the work. Marxism, modernization, and the Annales School offered theories of causality: cultural and political change was brought about by new modes of economic production, by the growth of technology and the State, or by geography and climate, respectively.[7] The paradigm of identity politics, Hunt notes, at times lacked such a clear “overarching narrative,” but implied that inclusion of The Other, minority or oppressed groups, in the national narrative was key to achieving genuine democracy (a point that falls more under purpose, explored later).[8] Cultural theories rejected the idea, inherent in older paradigms, that culture was produced by economic or social relations; culture was a force unto itself, composed of language, semiotics, and discourse, which determined what an individual thought to be true and how one behaved.[9] “Culture shaped class and politics rather than the other way around” — meaning culture brought about historical change (though many cultural theorists preferred not to focus on causation, perhaps similar to those engaged in identity politics).[10] Bottom-up globalization, Hunt posits, is useful as a modern explanatory schema for the historical field. It brings about changes in the self (in fact in the brain) and of society, which spurs cultural and political transformations.[11] There is explanatory power in increased connections between societies. For instance, she suggests that drugs and stimulants like coffee, brought into Europe through globalization, produced selves that sought pleasure and thrill (i.e.
altered the neurochemistry of the brain) and changed society by creating central gathering places, coffeehouses, where political issues could be intensely discussed. These developments may have pushed places like France toward democratic and revolutionary action.[12] For Hunt, it is not enough to say culture alone directs the thinkable and human action, nor is the mind simply a social construction — the biology of the brain and how it reacts and operates must be taken into account.[13] The field must move on from cultural theories.

Globalization, a useful lens through which to view history, joins a long list, only partially outlined above. Beyond economics, advancing technology and government bureaucracy, geography and environment, subjugated groups, and culture, there is political, elite, or even “Great Men” history; social history, the story of ordinary people; the history of ideas, of things, and of diseases and non-human species; microhistory and biography, close looks at events and individuals; and more.[14] Various ways of looking at history, some of which are true theories that include causes of change, together construct a more complete view of the past. They are all valuable. As historian Sarah Maza writes, “History writing does not get better and better but shifts and changes in response to the needs and curiosities of the present day. Innovations and new perspectives keep the study of the past fresh and interesting, but that does not mean we should jettison certain areas or approaches as old-fashioned or irrelevant.”[15] This is a crucial reminder. New paradigms can reinvigorate, but historians must be wary of treating them as signals that preceding paradigms are dead and buried.

Hunt’s work flirts with this mistake, though perhaps unintentionally. Obviously, some paradigms grow less popular, while others, particularly new ones, see surges in adherents. Writing History in the Global Era outlines the “rise and fall” of theories over time, the changing popularities and new ways of thinking that brought them about.[16] One implication in Hunt’s language, though such phrasing is deployed from the viewpoint of hindsight or of those critical of older theories, is that certain paradigms are indeed dead or of little use — “validity” and “credibility” are “questioned” or “lost,” “limitations” and “disappointments” discovered, theories “undermined” and “weakened” by “gravediggers” before they “fall,” and so forth.[17] Again, these are not necessarily Hunt’s views but descriptors of changing trends and critiques; still, Hunt’s work offers no nod to how older paradigms are still useful today, itself implying that those ways of writing history are now irrelevant. With prior theories worth less, a new one, globalization, is needed. Hunt’s work could have benefited from more resistance to this implication, with a serious look at how geography and climate, or changing modes of economic production, remain valuable lenses historians use to chart change and find truth — an openness to the full spectrum of approaches, for they all work cooperatively to reveal the past, despite their unique limitations. Maza, quoted above, mentioned “certain areas” of history in addition to “approaches,” and continued: “As Lynn Hunt has pointed out, no field of history [such as ancient Rome] should be cast aside just because it is no longer ‘hot’…”[18] Hunt should have acknowledged and demonstrated that precisely the same is true of approaches to history.

Another area that deserves more attention is purpose. In the same way that not all historical approaches emphasize causality and change, not all emphasize purpose. Identity politics had a clear use: the inclusion of subjugated groups in history helped move nations toward political equality.[19] With other approaches, however, “What is it good for?” is more difficult to answer. This is to ask what utility a theory had for contemporary individuals and societies (and has for modern ones), beyond a more complete understanding of yesteryear or fostering new research. It may be more challenging, for example, to see a clear purpose in the Annales School’s study of how elements of the longue durée, such as geography and climate, shape human development. How was such a lens utilized as a tool, if in fact it was, in the heyday of the Annales School? How could it be utilized today? (Perhaps it could be useful in mobilizing action against climate change.) The purpose of history — of each historical paradigm — is not always obvious.

Indeed, Hunt’s paradigm “offers a new purpose for history: understanding our place in an increasingly interconnected world,” a rather vague suggestion that sees little elaboration.[20] What does it mean to understand our place? Is this a recycling of “one cannot understand the present without understanding the past,” a mere truism? Or is it to say that a bottom-up globalization paradigm can be utilized to demonstrate the connection between all human beings, breaking down nationalism or even national borders? After all, the theory moves away from Eurocentrism and the focus on single nations. Perhaps it is something else; one cannot know for certain. Of course, Hunt may have wanted to leave this question to others, developing the tool and letting others determine how to wield it. Or her hesitation to more deeply and explicitly explore purpose, to adequately show how her theory is useful to the present, may reflect a simple desire to avoid the controversy of politics, a choice disappointing to those who believe history is inherently political or anchored to ethics. Either reason, however, is out of step with Hunt’s introduction. History, Hunt writes on her opening page, is “in crisis” due to the “nagging question that has proved so hard to answer…‘What is it good for?’”[21] In the nineteenth and twentieth centuries, she writes, the answer shifted from developing strong male leaders to building national identity and patriotism to contributing to the social movements of subjugated groups by unburying histories of oppression.[22] All of these purposes are political. Hunt deserves credit for constructing a new paradigm, with factors of causality and much fodder for future research, but to open the work by declaring a crisis of purposelessness, framing purposes as political, and then not offering a fully developed purpose through a political lens (or through another lens, explaining why purpose need not be political) is an oversight.

Based on these criticisms, we have a clear direction for the field of history. First, historians should reject any implication of a linear progression of historical meta-narratives, a rejection this paper argues Hunt failed to make. “Old-fashioned” paradigms in fact have great value today, which must be noted and explored. A future work on the state of history might entirely reframe, or at least dramatically add to, the discussion of theory. Hunt tracked the historical development of theories and their critics, with all the ups and downs of popularity. This is important epistemologically, but it emphasizes the failures of theories rather than their contributions, and presents them as stepping stones to be left behind on the journey to find something better. Marxism had a “blindness to culture” and had to be left by the wayside; its replacement had this or that limitation and was itself replaced; and so on.[23] Hunt writes globalization will not “hold forever” either.[24] A future work might instead, even if it included a brief, similar tracking, focus on how each paradigm added to our understanding of history, how it continued to do so, and how it does so today. As an example of such continued contribution, Anthony Reid’s 1988 Southeast Asia in the Age of Commerce, 1450-1680 was written very much in the tradition of the Annales School, with a focus on geography, resources, climate, and demography, but it would be lost in a structure like Hunt’s, crowded out by the popularity of cultural studies in the last decades of the twentieth century.[25] Simply put, the historian must break away from the idea that paradigms are replaced. They are replaced in popularity, but not in importance to the mission of more fully understanding the past.
As Hunt writes, “Paradigms are problematic because by their nature they focus on only part of the picture,” which highlights the necessity of the entire paradigmatic spectrum; so does her own application of globalization theory, whose suggestion that coffee from abroad spurred revolutionary movements in eighteenth-century Europe sidelines countless other factors.[26] Every paradigm helps us see more of the picture. It would be a shame if globalization were downplayed as implicitly irrelevant only a couple of decades from now, while still a useful analytical lens. Paradigms are not stepping stones; they are columns holding up the house of history, and more can be added as we go.

The aforementioned theoretical book on the field would also explore purpose, hypothesizing that history cannot be separated from ethics, and therefore from politics. Sarah Maza wrote in the final pages of Thinking About History:

Why study history? The simplest response is that history answers questions that other disciplines cannot. Why, for instance, are African-Americans in the United States today so shockingly disadvantaged in every possible respect, from income to education, health, life expectancy, and rates of incarceration, when the last vestiges of formal discrimination were done away with half a century ago? Unless one subscribes to racist beliefs, the only way to answer that question is historically, via the long and painful narrative that goes from transportation and slavery to today via Reconstruction, Jim Crow laws, and an accumulation, over decades, of inequities in urban policies, electoral access, and the judicial system.[27]

This is correct, and goes far beyond the purpose of answering questions. History is framed as the counter, even the antidote, to racist beliefs. If one is not looking to history for such answers, there is nowhere left to go but biology and racial inferiority, beliefs deemed awful. History therefore informs ethical thinking; its utility is to help us become more ethical creatures, as (subjectively) defined by our society — and the self. This purpose is usually implied but rarely explicitly stated, and a discussion on the future of history should explore it. Now, one could argue that Maza’s dichotomy is simply steering us toward truth, away from incorrect ideas rather than unethical ones. But that does not work in all contexts. When we read Michel Foucault’s Discipline and Punish, he is not demonstrating that modes of discipline are incorrect — and one is hardly confused as to whether he sees them as bad things, these “formulas of domination” and “constant coercion.”[28] J.R. McNeill, at the end of Mosquito Empires: Ecology and War in the Greater Caribbean, 1620-1914, writes that yellow fever’s “career as a governing factor in human history, mercifully, has come to a close” while warning of a lapse in vaccination and mosquito control programs that could aid viruses that “still lurk in the biosphere.”[29] The English working class, wrote E.P. Thompson, faced “harsher and less personal” workplaces, “exploitation,” “unfreedom.”[30] The implications are clear: societies without such disciplines, without exploitation, and with careful mosquito control would be better societies. For human beings, unearthing and reading history cannot help but create value judgments, and it is a small step from the determination of what is right to the decision to pursue it: political action. It would be difficult, after all, to justify ignoring that which was deemed ethically right.

Indeed, historians not only implicitly suggest better paths and condemn immoral ones; the notion that history helps human beings make more ethical choices is already fundamental to how many lay people read history. What is the cliché about being doomed to repeat the unlearned past, if not a warning to avoid tragedies and terrors deemed wrong by present individuals and society collectively? As tired and disputed as the expression is, there is truth to it. Studying how would-be authoritarians often use minority groups as scapegoats for serious economic and social problems to reach elected office in democratic systems creates pathways for modern resistance, making the unthinkable thinkable, changing characterizations of what is right or wrong, changing behavior. Globalization may alter the self and society, but the field of history itself, to a degree, does the same. This could be grounds for a new, rather self-congratulatory paradigm, but the purpose, informing ethical and thus political decision-making, can guide many different theories, from Marxism to globalization. As noted, prior purposes of history were political: forming strong leaders, creating a national narrative, challenging a national narrative. A new political purpose would be standard practice. One might argue that moving away from political purposes is a positive step, but it must be noted that the field seems to move away from purpose altogether when it does so. Is purpose inherently political? This future text would make the case that it is. A purpose cannot be posited without a self-evident perceived good. Strong leaders are good, for instance — and therefore should be part of the social and political landscape.

In conclusion, Hunt’s implicit dismissal of older theories and her incomplete purpose for history deserve correction, and doing so pushes the field forward in significant ways. For example, using the full spectrum of paradigms helps us work on (never solve) history’s causes-of-causes ad infinitum problem. Changing modes of production may have caused change x, but what caused the changing modes of production? What causes globalization in the first place? Paradigms can interrelate, helping answer the thorny questions of other paradigms (perhaps modernization or globalization theory could help explain changing modes of production, before requiring their own explanations). How giving history a full purpose advances the field is obvious: it sparks new interest, new ways of thinking, new conversations, new utilizations, new theories, while, like the sciences, offering the potential — but not the guarantee — of improving the human condition.

For more from the author, subscribe and follow or read his books.

[1] Lynn Hunt, Writing History in the Global Era (New York: W.W. Norton & Company, 2014), 1.

[2] Ibid., 26, 35-43.

[3] Ibid., 59. See also 60-71.

[4] Ibid., 70.

[5] Ibid.

[6] Ibid., 77.

[7] Ibid., 14-17.

[8] Ibid., 18.

[9] Ibid., 18-27.

[10] Ibid., 27, 77.

[11] Ibid., chapters 3 and 4.

[12] Ibid., 135-141.

[13] Ibid., 101-118.

[14] Sarah Maza, Thinking About History (Chicago: University of Chicago Press, 2017).

[15] Maza, Thinking, 236.

[16] Hunt, Writing History, chapter 1.

[17] Ibid., 8-9, 18, 26-27, chapter 1.

[18] Maza, Thinking, 236.

[19] Hunt, Writing History, 18.

[20] Ibid., 10.

[21] Ibid., 1.

[22] Ibid., 1-7.

[23] Ibid., 8.

[24] Ibid., 40.

[25] Anthony Reid, Southeast Asia in the Age of Commerce, 1450-1680, vol. 1, The Lands Below the Winds (New Haven: Yale University Press, 1988).

[26] Hunt, Writing History, 121, 135-140.

[27] Maza, Thinking, 237.

[28] Michel Foucault, Discipline and Punish (New York: Vintage Books, 1995), 137.

[29] J.R. McNeill, Mosquito Empires: Ecology and War in the Greater Caribbean, 1620-1914 (New York: Cambridge University Press, 2010), 314.

[30] E.P. Thompson, The Essential E.P. Thompson (New York: The New Press, 2001), 17. 

Is It Possible For Missouri State to Grow Larger Than Mizzou?

Students and alumni of Missouri State (and perhaps some of the University of Missouri) at times wonder if MSU will ever become the largest university in the state. While past trends are never a perfect predictor of the future, looking at the enrollment patterns of each institution can help offer an answer. Here are the total enrollment figures for each Fall since 2005.

Mizzou
Via its Student Body Profile reports and enrollment summary (Columbia campus):

2005 – 27,985
2006 – 28,253
2007 – 28,477
2008 – 30,200
2009 – 31,314
2010 – 32,415
2011 – 33,805
2012 – 34,748
2013 – 34,658
2014 – 35,441
2015 – 35,448
2016 – 33,266
2017 – 30,870
2018 – 29,866
2019 – 30,046
2020 – 31,103
2021 – 31,412

Missouri State
Via its enrollment history report (Springfield campus):

2005 – 19,165
2006 – 19,464
2007 – 19,705
2008 – 19,925
2009 – 20,842
2010 – 20,949
2011 – 20,802
2012 – 21,059
2013 – 21,798
2014 – 22,385
2015 – 22,834
2016 – 24,116
2017 – 24,350
2018 – 24,390
2019 – 24,126
2020 – 24,163
2021 – 23,618

In the past 16 years, MSU gained on average 278.3 new students each Fall. Mizzou gained 214.2 new students per year, an average tanked by the September 2015 racism controversy. Before the controversy (2005-2015 data), Mizzou gained 746.3 new students per year (MSU, over the same ten years, +366.9). From a low point in 2018, Mizzou has since, over a three-year period, gained on average 515.3 new students (over the same time, MSU saw -257.3 students — one school’s gain is often the other’s loss). This is too short a timeframe to draw unquestionable conclusions, but with Mizzou back on its feet it seems likely to continue to acquire more students on average each year, making MSU’s ascension to the top unlikely.
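For transparency, the averages above are simple endpoint calculations: the difference between two Fall headcounts divided by the years elapsed. This quick Python sketch reproduces each figure from the two lists above (the `avg_gain` helper is ours, for illustration only):

```python
# Fall enrollment figures copied from the two lists above (2005-2021).
mizzou = [27985, 28253, 28477, 30200, 31314, 32415, 33805, 34748, 34658,
          35441, 35448, 33266, 30870, 29866, 30046, 31103, 31412]
msu = [19165, 19464, 19705, 19925, 20842, 20949, 20802, 21059, 21798,
       22385, 22834, 24116, 24350, 24390, 24126, 24163, 23618]

def avg_gain(series, start, end):
    """Average yearly change between two indices (index 0 = Fall 2005)."""
    return (series[end] - series[start]) / (end - start)

print(round(avg_gain(msu, 0, 16), 1))      # 2005-2021:  278.3 per year
print(round(avg_gain(mizzou, 0, 16), 1))   # 2005-2021:  214.2 per year
print(round(avg_gain(mizzou, 0, 10), 1))   # 2005-2015:  746.3 per year
print(round(avg_gain(msu, 0, 10), 1))      # 2005-2015:  366.9 per year
print(round(avg_gain(mizzou, 13, 16), 1))  # 2018-2021:  515.3 per year
print(round(avg_gain(msu, 13, 16), 1))     # 2018-2021: -257.3 per year
```

Swapping in future Fall headcounts would update the comparison the same way.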

Predicting future enrollment patterns is rather difficult, of course. Over the past decade, fewer Americans have attended university, including fewer Missourians — and that was before COVID. Like a pandemic or a controversy, some disruptors cannot be predicted, nor can boosts to student populations. But most challenges will be faced by both schools: fewer young people, better economic times (which draw folks to the working world), pandemics, etc. The rising cost of college may give a slightly more affordable university an edge, as has been Missouri State’s long-time strategy. An increased profile through growing name recognition (it’s only been 16 years since Missouri State’s name change), success in sports, clever marketing schemes (alumnus John Goodman is now involved with MSU), ending Mizzou’s near-monopoly on doctoral degrees, and so on could make a difference, but there remains a huge advantage to simply being an older school, with a head start in enrollment and brand recognition.


COVID Showed Americans Don’t Leech Off Unemployment Checks

In most states, during normal times, you can use unemployment insurance (UI) for at most 26 weeks, half the year, and will receive 30-50% of the wages from your previous job, up to a certain income. This means $200-400 a week on average. One must meet a list of requirements to qualify, for instance having lost one’s job to cutbacks rather than through any fault of one’s own. Only 35-40% of unemployed persons receive UI.

This means that at any given time, about 2 million Americans are receiving UI; in April/May 2020, with COVID-19 and state measures to prevent its spread causing mass layoffs, that number skyrocketed to 22 million. Put another way, just 1-3% of the workforce is usually using UI, and during the pandemic spike it was about 16%. Just before that rise, it was at 1.5% — and it returned to that rate in November 2021, just a year and a half later. Indeed, the number of recipients fell as fast as it shot up, from 16% to under 8% in just four months (by September 2020), and down to 4% in six months (by November 2020). As much pearl-clutching as there was among conservatives (at least those who did not use UI) over increased dependency, especially with the temporary $600 federal boost to UI payments, tens of millions of Americans did not leech off the system. They got off early, even though emergency measures allowed them to stay on through all of 2020 and into the first three months of 2021! (The trend was straight down, by the way, even before the $600 boost ended.)

This in fact reflects what we’ve always known about unemployment insurance. It’s used as intended, as a temporary aid to those in financial trouble (though many low-wage workers don’t have access to it, which must be corrected). Look at the past 10 years of UI use. The average stay in the program (“duration”) each year was 17 or 18 weeks in times of economic recovery, 14 or 15 weeks in better economic times (sometimes even fewer). Four months or so, then a recipient stops filing for benefits, having found a job or ameliorated his or her crisis in some fashion. Some “enjoy” the 30-50% of previous wages for the whole stretch, but the average recipient doesn’t even use UI for 20 weeks, let alone the full 26 allowed. This makes sense, given how much of a pay cut UI is. Again, many Americans stop early, and the rest are cut off — so why all the screaming about leeching? Only during the COVID crisis did the average duration climb higher, to 26-27 weeks, as the federal government offered months of additional aid, as mentioned — again, many did not receive benefits for as long as they could have.

Those who receive benefits one year will not necessarily do so the next. In times of moderate unemployment, for example, about 30% of displaced workers and 50% of workers on temporary layoff who receive benefits in Year 1 will reapply for benefits in Year 2. The rest do not refile.

However, we must be nuanced thinkers. Multiple things can be true at the same time. UI can also extend unemployment periods, which makes a great deal of sense even though UI benefits represent a drastic pay cut. UI gives workers some flexibility to be more selective in the job hunt. An accountant who has lost her position may, with some money coming in and keeping a savings account afloat, be able to undertake a longer search for another accounting job, rather than being forced to take the first thing she can find, such as a waitressing job. This extra time is important, because finding a similar-wage job means you can keep your house or current apartment, won’t fall further into poverty, etc. There are many factors behind the current shortage of workers, and UI seems to be having a small effect (indeed, studies find anywhere from no effect to moderate effects). And of course, in a big, complex world there will be some souls who avoid work as long as they can, and others who commit fraud (during COVID, vast sums were siphoned from UI programs by individuals and organized crime rings alike, in the U.S. and from around the globe; any human being with internet access can attempt a scam). But that’s not most Americans. While UI allows workers to be more selective, prolonging an unemployed term a bit, recipients nevertheless generally stop filing for benefits early and avoid going back.

To summarize, for the conservatives in the back. The U.S. labor force is 161 million people. A tiny fraction is being aided by UI at any given moment. Those who are generally don’t stay the entire time they could. Those who do use 26 weeks of benefits will be denied further aid for the year (though extended benefits are sometimes possible in states with rising unemployment). Most recipients don’t refile the next year. True, lengths of unemployment may be increased somewhat, and there will always be some Americans who take advantage of systems like this, but most people would prefer not to, instead wanting what all deserve — a good job, with a living wage.


Comparative Power

The practice of reconstructing the past, with all its difficulties and incompleteness, is aided by comparative study. Historians, anthropologists, sociologists, and other researchers can learn a great deal about their favored society and culture by looking at others. This paper makes that basic point, but, more significantly, distinguishes between the effectiveness of drawing meaning from cultural similarity and difference and that of drawing meaning from one’s own constructed cultural analogy, while acknowledging both are valuable methods. In other words, it is argued here that the historian who documents similarities and differences between societies stands on firmer methodological ground for drawing conclusions about human cultures than does the historian who is forced to fill in gaps in a given historical record by studying other societies in close geographic and temporal proximity. Also at a disadvantage is the historian working comparatively with gaps in early documentation that are filled only in later documentation. This paper is a comparison of comparative methods — an important exercise, because such methods are often wielded due to a dearth of evidence in the archives. The historian should understand the strengths and limitations of the various approaches to this problem (here reciprocal comparison, historical analogy, and historiographic comparison).

To begin, consider reciprocal comparison and the meaning derived from likenesses and distinctions. Historian Robert Darnton found meaning in differences in The Great Cat Massacre and Other Episodes in French Cultural History. What knowledge, Darnton wondered in his opening chapter, could we gain of eighteenth-century French culture by looking at peasant folk tales and contrasting them with versions found elsewhere in Europe? Whereas similarities might point to shared cultural traits or norms, differences would isolate the particular mentalités of French peasants, how they viewed the world and what occupied their thoughts, in the historical tradition of the Annales School.[1] So while the English version of Tom Thumb was rather “genial,” with helpful fairies, attention to costume, and a titular character engaging in pranks, in the French version the Tom Thumb character, Poucet, was forced to survive in a “harsh, peasant world” against “bandits, wolves, and the village priest by using his wits.”[2] In a tale of a doctor cheating Death, the German version saw Death immediately kill the doctor; with a French twist, the doctor got away with his treachery for some time, becoming prosperous and living to old age — cheating paid off.[3] Indeed, French tales focused heavily on survival in a bleak and brutal world, and on this world’s particularities.
Characters with magical wishes asked for food and full bellies; they got rid of children who did not work, put up with cruel step-mothers, and encountered many beggars on the road.[4] Most folk tales mix fictional elements like ogres and magic with socio-economic realities from the place and time they are told; the above themes therefore reflect the ordinary lives of French peasants: hunger, poverty, the early deaths of biological mothers, begging, and so on.[5] In comparing French versions with those of the Italians, English, and Germans, Darnton noticed unique fixations in French peasant tales and then contrasted these obsessions with the findings of social historians on the material conditions of peasant life, bringing these together to make a compelling case for what members of the eighteenth-century French lower class thought about day to day and for their attitudes toward society.

Now, compare Darnton’s work to ethno-historian Helen Rountree’s “Powhatan Indian Women: The People Captain John Smith Barely Saw.” Rountree uses ethnographic analogy, among other tools, to reconstruct the daily lives of Powhatan women in the first years of the seventeenth century. Given that interested English colonizers had limited access to Powhatan women and a “cloudy lens” of patriarchal eurocentrism through which they observed native societies, and given that the Powhatans left few records themselves, Rountree uses the evidence of daily life in nearby Eastern Woodland tribes to describe the likely experiences of Powhatan women.[6] For example: “Powhatan women, like other Woodland Indian women, probably nurse their babies for well over a year after birth, so it would make sense to keep baby and food source together” by bringing infants into the fields with them as the women work.[7] Elsewhere “probably” is dropped for more confident takes: “Powhatan men and women, like those in other Eastern Woodland tribes, would have valued each other as economic partners…”[8] A lack of direct archival knowledge of Powhatan society and sentiments is shored up through archival knowledge of other native peoples living in roughly the same time and region. The meaning Rountree derives from ethnographic analogy, alongside other techniques and evidence, is that the English were wrong, looking through their cloudy lens, to believe Powhatan women suffered drudgery and domination under Powhatan men. Rather, women experienced a great deal of autonomy, as well as fellowship and variety, in their work, and were considered co-equal partners with men in the economic functioning of the village.[9]  

Both Darnton and Rountree admit their methods face evidentiary challenges. Darnton writes that his examination of folktales is “distressingly imprecise in its deployment of evidence” and that the evidence is “vague” because the tales were written down much later — exactly how they were orally transmitted at the relevant time cannot be known.[10] In other words, what if the aspect of a story one marks as characteristic of the French peasant mentalité was not actually in the verbal telling of the tale? It is a threat to the legitimacy of the project. Rountree is careful to use “probably” and “likely” with most of her analogies; the “technique is a valid basis for making inferences if used carefully” (emphasis added), and one must watch out for imperfections in the records of other tribes.[11] For what if historical understanding of another Eastern Woodland tribe is incorrect, and the falsity is copied over to the narrative of the Powhatan people? Rountree and Darnton acknowledge the limitations of their methods even while firmly believing they are valuable for reconstructing the past. This paper does not dispute that — however, it would be odd if all comparative methods were created equal.

Despite its challenges, reciprocal comparison rests on safer methodological ground, for it at least boasts two actually existing elements to contrast. For instance, Darnton has in his possession folktales from France and from Germany, dug up in the archives, and with them he can notice differences and thus derive meaning about how French peasants viewed the world. Such meaning may be incorrect, but is less likely to be so with support from research on the material conditions of those who might be telling the tales, as mentioned. Rountree, on the other hand, wields a tool that works with but one existing element. Historical, cultural, or ethnographic analogy takes what is known about other peoples and applies it to a specific group suffering from a gap in the historical record. This gap, a lack of direct evidence, is filled with an assumption — one that may simply be wrong, with no supporting research of the kind Darnton enjoys to catch the error (to have such research would make analogy unnecessary). Obviously, an incorrect assumption threatens to derail derived meaning. If the work of Powhatan women differed in a significant way from other Eastern Woodland tribes, unseen and undiscovered and even silenced by analogy, the case of Powhatan economic equality could weaken. Again, this is not to deny the method’s value, only to note the danger that it carries compared to reciprocal comparison. Paradoxically, the inference that Powhatan society resembled other nearby tribes seems as probable and reasonable as it is bold and risky.

Similarly, Michel-Rolph Trouillot, in Silencing the Past: Power and the Production of History, found meaning in absence when examining whether Henri Christophe, monarch of Haiti after its successful revolution against the French from 1791 to 1804, was influenced by Frederick the Great of Prussia when Christophe named his new Milot palace “Sans Souci.” Was the palace named after Frederick’s own in Potsdam, or after Colonel Sans Souci, a revolutionary rival Christophe killed? Trouillot studied the historical record and found that opportunities for early observers to mention a Potsdam-Milot connection were suspiciously ignored.[12] For example, Austro-German geographer Karl Ritter, a contemporary of Christophe, repeatedly described Christophe’s palace as “European” but failed to mention it was inspired by Frederick’s.[13] British consul Charles Mackenzie, “who visited and described Sans Souci less than ten years after Christophe’s death, does not connect the two palaces.”[14] Why was a fact that was such a given for later writers not mentioned early on if it was true?[15] These archival gaps of course co-exist with Trouillot’s positive evidence (“Christophe built Sans Souci, the palace, a few yards away from — if not exactly — where he killed Sans Souci, the man”[16]), but are used to build a case that Christophe had Colonel Sans Souci in mind when naming his palace, a detail that evidences an overall erasure of the colonel from history.[17] By contrasting the early historical record with the later one, Trouillot finds truth and silencing.

This historiographic comparison is different from Rountree’s historical analogy. Rountree fills in epistemological gaps about Powhatan women with the traits of nearby, similar cultures; Trouillot judges the gaps in early reports about Haiti’s Sans Souci palace to suggest later writers were in error and participating in historical silencing (he, like Darnton, is working with two existing elements and weighs the differences). Like Rountree’s, Trouillot’s method is useful and important: the historian should always seek the earliest writings from relevant sources to develop an argument, and if surprising absences exist there is cause to be suspicious that later works created falsities. However, this method too flirts with assumption. It assumes the unwritten is also the unthought, which is not always the case. It may be odd or unlikely that Mackenzie or Ritter would leave Potsdam unmentioned if they believed in its influence, but not impossible or unthinkable. It further assumes a representative sample size — Trouillot is working with very few early documents. Would the discovery of more affect his thesis? As we see with Trouillot and Rountree, and as one might expect, a dearth in the archives forces assumptions.

While Trouillot’s conclusion is probable, he is nevertheless at greater risk of refutation than Darnton or, say, historian Kenneth Pomeranz, who also engaged in reciprocal comparison when he put China beside Europe during the centuries before 1800. Unlike the opening chapter of The Great Cat Massacre, The Great Divergence finds meaning in similarities as well as differences. Pomeranz seeks to understand why Europe experienced an Industrial Revolution rather than China, and must sort through many posited causal factors. For instance, did legal and institutional structures more favorable to capitalist development give Europe an edge, contributing to greater productivity and efficiency?[18] Finding similar regulatory mechanisms like interest rates and property rights, and a larger “world of surprising resemblances” before 1750, Pomeranz pointed instead to other differences: Europe’s access to New World resources and trade, as well as to coal.[19] This indicates that Europe’s industrialization occurred not due to the superior intentions, wisdom, or industriousness of Europeans but rather due to unforeseen, fortunate happenings, or “conjunctures” that “often worked to Western Europe’s advantage, but not necessarily because Europeans created or imposed them.”[20] Reciprocal comparison can thus break down eurocentric perspectives by looking at a broader range of historical evidence. No assumptions need be made (rather, assumptions, such as those about superior industriousness, can be excised). As obvious as it is to write, a wealth of archival evidence, rather than a lack, makes for safer methodological footing, as does working with two existing evidentiary elements, no risky suppositions necessary.

A future paper might muse further on the relationship between analogy and silencing, alluded to earlier — if Trouillot is correct and a fact-based narrative is built on silences, how much more problematic is the narrative based partly on analogy?[21] As for this work, in sum, the historian must use some caution with historical analogy, historiographic comparison, and other tools that have an empty space on one side of the equation. These methods are hugely important and often present theses of high probability. But they are by nature put at risk by archival gaps; reciprocal comparison, by its own archival nature, carries more power in its derived meanings and claims about the cultures of the past.

For more from the author, subscribe and follow or read his books.

[1] Anna Green and Kathleen Troup, eds., The Houses of History: A Critical Reader in Twentieth-Century History and Theory, 2nd ed. (Manchester: Manchester University Press, 2016), 111.

[2] Robert Darnton, The Great Cat Massacre: And Other Episodes in French Cultural History (New York: Basic Books, 1984), 42.

[3] Ibid., 47-48.

[4] Ibid., 29-38.

[5] Ibid., 23-29.

[6] Helen C. Rountree, “Powhatan Indian Women: The People Captain John Smith Barely Saw,” Ethnohistory 45, no. 1 (winter 1998): 1-2.

[7] Ibid., 4.

[8] Ibid., 21.

[9] Ibid., 22.

[10] Darnton, Cat Massacre, 261.

[11] Rountree, “Powhatan,” 2.

[12] Michel-Rolph Trouillot, Silencing the Past: Power and the Production of History (Boston: Beacon Press, 1995), 61-65.

[13] Ibid., 63-64.

[14] Ibid., 62.

[15] Ibid., 64.

[16] Ibid., 65.

[17] Ibid., chapters 1 and 2.

[18] Kenneth Pomeranz, The Great Divergence: China, Europe, and the Making of the Modern World Economy (Princeton: Princeton University Press, 2000), chapters 3 and 4.

[19] Ibid., 29, 279-283.

[20] Ibid., 4.

[21] Trouillot, Silencing, 26-27.

Will Capitalism Lead to the One-Country World?

In Why America Needs Socialism, I offered a long list of ways the brutalities and absurdities of capitalism necessitate a better system, one of greater democracy, worker ownership, and universal State services. The work also explored the importance of internationalism, moving away from nationalistic ideas (the simpleminded worship of one’s country) and toward an embrace of all peoples — a world with one large nation. Yet these ideas could have been more deeply connected. The need for internationalism was largely framed as a response to war, which, as shown, can be driven by capitalism but of course existed before it and thus independently of it. The necessity of a global nation was only briefly linked to global inequality, disastrous climate change, and other problems. In other words, one could predict that the brutalities and absurdities of international capitalism, such as the dreadful activities of transnational corporations, will push humanity toward increased global political integration.

As a recent example of a (small) step toward political integration, look at the 2021 agreement of 136 nations to set a minimum corporate tax rate of 15% and tax multinational companies where they operate, not just where they are headquartered. This historic moment was a response to corporations avoiding taxes via havens in low-tax countries, moving headquarters, and other schemes. Or look to the 2015 Paris climate accords that set a collective goal of limiting planetary warming to 1.5-2 degrees Celsius, a response to the environmental damage wrought by human industry since the Industrial Revolution. There is a recognition that a small number of enormous companies threaten the health of all people. Since the mid-twentieth century, many international treaties have focused on the environment and labor rights (for example, outlawing forced labor and child labor, which were always highly beneficial and profitable for capitalists). The alignment of nations’ laws is a remarkable step toward unity. Apart from war and nuclear weapons, apart from the global inequality stemming from geography (such as an unlucky lack of resources) or history (such as imperialism), the effects and nature of modern capitalism alone scream the urgency of internationalism. Capital can move about the globe, with businesses seeking places with weaker environmental regulations, lower minimum wages, and laxer safety standards, spreading monopolies, avoiding taxes, and poisoning the biosphere, while an interconnected global economy falls like a house of cards during economic crises. The movement of capital and the interconnectivity of the world necessitate further, deeper forms of international cooperation.

Perhaps, whether in one hundred years or a thousand, humanity will realize that the challenges of multi-country accords — goals missed or ignored, legislatures refusing to ratify treaties, and so on — would be mitigated by a unified political body. A single human nation could address tax avoidance, climate change, and so on far more effectively and efficiently.

On the other hand, global capitalism may lead to a one-nation world in a far more direct way. Rather than the interests of capitalists spurring nations to work together to confront said interests, it may be that nations integrate to serve certain interests of global capitalism, to achieve unprecedented economic growth. The increasing integration of Europe and other regions provides some insight. The formation of the European Union’s common market eliminated taxes and customs between countries, and established a free flow of capital, goods, services, and workers, generating around €1 trillion in economic benefit annually. The EU market is the most integrated in the world, alongside the Caribbean Single Market and Economy, both earning sixes out of seven on the scale of economic integration, one step from merging entirely. Other common markets exist as well, each a five on the scale, uniting national economies in Eurasia, Central America, the Arabian Gulf, and South America; many more have been proposed. There is much capitalists enjoy after single market creation: trade increases, production costs fall, investment spikes, profits rise. Total economic and political unification may be, again, more effective and efficient still. Moving away from nations and toward worldwide cohesion could be astronomically beneficial to capitalism. Will the push toward a one-nation world come from the need to rein in capital, to serve capital, or both?


When The Beatles Sang About Killing Women

Move over, Johnny Cash and “Cocaine Blues.” Sure, “Early one mornin’ while making the rounds / I took a shot of cocaine and I shot my woman down… Shot her down because she made me slow / I thought I was her daddy but she had five more” are often the first lyrics one thinks of when considering the violent end of the toxic masculinity spectrum in white people music. (Is this not something you ponder? Confront more white folk who somehow only see these things in black music, and you’ll get there.) But The Beatles took things to just as dark a place.

Enter “Run For Your Life” from their 1965 album Rubber Soul, a song as catchy as it is chilling: “You better run for your life if you can, little girl / Hide your head in the sand, little girl / Catch you with another man / That’s the end.” Jesus. It’s jarring, the cuddly “All You Need Is Love” boy band singing “Well, I’d rather see you dead, little girl / Than to be with another man” and “Let this be a sermon / I mean everything I’ve said / Baby, I’m determined / And I’d rather see you dead.” But jealous male violence in fact showed up in other Beatles songs as well, and in the real world, with the self-admitted abusive acts and attitudes of John Lennon, later regretted but no less horrific for it.

This awfulness ensured The Beatles would be viewed by many in posterity as contradictory, with proto-feminist themes and ideas of the 1960s taking root in their music alongside possessive, murderous sexism. That is, if these things are noticed at all.


With Afghanistan, Biden Was in the ‘Nation-building Trap.’ And He Did Well.

You’ve done it. You have bombed, invaded, and occupied an oppressive State into a constitutional democracy, human rights and all. Now there is only one thing left to do: attempt to leave — and hope you are not snared in the nation-building trap.

Biden suffered much criticism over the chaotic events in Afghanistan in August 2021, such as the masses of fleeing Afghans crowding the airport in Kabul and clinging to U.S. military planes, the American citizens left behind, and more, all as the country fell to the Taliban. Yet Biden was in a dilemma, in the sixteenth-century sense of the term: a choice between two terrible options. That’s the nation-building trap: if your nation-building project collapses after or as you leave, do you go back in and fight a bloody war a second time, or do you remain at home? You can 1) spend more blood, treasure, and years reestablishing the democracy and making sure the first war was not in vain, but risk being in the exact same situation down the road when you again attempt to leave. Or 2) refuse to sacrifice any more lives (including those of civilians) or resources, refrain from further war, and watch oppression return on the ruins of your project. This is a horrific choice to make, and no matter which you would choose there should be at least some sympathy for those who might choose the other.

Such a potentiality should make us question war and nation-building, a point to which we will return. But here it is important to recognize that the August chaos was inherent in the nation-building trap. Biden had that dilemma to face, and his decision came with unavoidable tangential consequences. For example, the choice, as the Taliban advanced across Afghanistan, could be reframed as 1) send troops back in, go back to war, and prevent a huge crowd at the airport and a frantic evacuation, or 2) remain committed to withdraw, end the war, but accept that there would be chaos as civilians tried to get out of the country. Again, dismal options.

This may seem too binary, but the timeline of events appears to support it. With a withdrawal deadline of August 31, the Taliban offensive began in early May. By early July, the U.S. had left its last military base, marking the withdrawal as “effectively finished” (this is a detail often forgotten). Military forces only remained in places like the U.S. embassy in Kabul. In other words, from early May to early July, the Taliban made serious advances against the Afghan army, but the rapid fall of the nation occurred after the U.S. and NATO withdrawal — with some Afghan soldiers fighting valiantly, others giving up without a shot. There are countless analyses regarding why the much larger, U.S.-trained and -armed force collapsed so quickly. U.S. military commanders point to American errors: “U.S. military officials trained Afghan forces to be too dependent on advanced technology; they did not appreciate the extent of corruption among local leaders; and they didn’t anticipate how badly the Afghan government would be demoralized by the U.S. withdrawal.” In any event, one can look at either May-June (when U.S. forces were departing and Taliban forces were advancing) or July-August (when U.S. forces were gone and the Taliban swallowed the nation in days) as the key decision-making moment(s). Biden had to decide whether to reverse the withdrawal, send troops back in to help the Afghan forces retake lost districts (and thus avoid the chaos of a rush to the airport and U.S. citizens left behind), or hold firm to the decision to end the war (and accept the inevitability of turmoil). Many will argue he should have chosen option one, and that’s an understandable position. Even if it meant fighting for another 20 years, with all the death and maiming that comes with war, and facing the same potential scenario when trying to withdraw in 2041, some would support it.
But for those who desired an end to war, it makes little sense to criticize Biden for the airport nightmare, or the Taliban takeover or American citizens being left behind (more on that below). “I supported withdrawal but not the way it was done” is almost incomprehensible. In the context of that moment, all those things were interconnected. In summer 2021, only extending and broadening the war could have prevented those events. It’s the nation-building trap — it threatens to keep you at war forever.

The idea that Biden deserves a pass on the American citizens unable to be evacuated in time may draw special ire. Yes, one may think, maybe ending the war in summer 2021 brought an inevitable Taliban takeover (one can’t force the Afghan army to fight, and maybe we shouldn’t fight a war “Afghan forces are not willing to fight themselves,” as Biden put it) and a rush to flee the nation, but surely the U.S. could have done more to get U.S. citizens (and military allies such as translators) out of Afghanistan long before the withdrawal began. This deserves some questioning as well — as painful as it is to admit, the situation involved risky personal decisions, gambles that did not pay off. Truly, it was no secret that U.S. forces would be leaving Afghanistan in summer 2021. This was announced in late February 2020, when Trump signed a deal with the Taliban that would end hostilities and set a withdrawal date. U.S. citizens (most of them dual citizens) and allies had over a year to leave Afghanistan, and the State Department contacted U.S. citizens 19 times to alert them of the potential risks and offer to get them out, according to the president and the secretary of state. Thousands who chose to stay changed their minds as the Taliban advance continued. One needn’t be an absolutist here. It is possible some Americans fell through the cracks, or that military allies were given short shrift. And certainly, countless Afghan citizens had not the means or finances to leave the nation. Not everyone who wished to emigrate over that year could do so. Yet given that the withdrawal date was known and U.S. citizens were given the opportunity to get out, some blame must necessarily be placed on those who wanted to stay despite the potential for danger — until, that is, the potential became actual.

Biden deserves harsh criticism, instead, for making stupid promises, for instance that there would be no chaotic withdrawal. The world is too unpredictable for that. Further, for a drone strike that blew up children before the last plane departed. And for apparently lying about his generals’ push to keep 2,500 troops in the country.

That is a good segue for a few final thoughts. The first revolves around the question: “Regardless of the ethics of launching a nation-building war, is keeping 2,500 troops in the country, hypothetically forever, the moral thing to do to prevent a collapse into authoritarianism or theocracy?” Even if one opposed and condemned the invasion as immoral, once that bell has been rung it cannot be undone, and we’re thus forced to consider the ethics of how to act in a new, ugly situation. Isn’t 2,500 troops a “small price to pay” to preserve a nascent democracy and ensure a bloody war was not for nothing? That is a tempting position, and again one can have sympathy for it even if disagreeing, favoring full retreat. The counterargument is that choosing to leave a small force may preserve the nation-building project but it also incites terrorism against the U.S. We know that 9/11 was seen by Al-Qaeda as revenge for U.S. wars and military presences in Muslim lands, and the War on Terror has only caused more religious radicalization and deadly terrorist revenge, in an endless cycle of violence that should be obvious to anyone over age three. So here we see another dilemma: leave, risk a Taliban takeover, but (begin to) extricate yourself from the cycle of violence…or stay, protect the democracy, but invite more violence against Americans. This of course strays dangerously close to asking who is more valuable, human beings in Country X or Country Y, that old, disgusting patriotism or nationalism. But this writer detests war and nation-building and imperialism and the casualties at our own hands (our War on Terror is directly responsible for the deaths of nearly 1 million people), and supports breaking the cycle immediately. That entails total withdrawal and living with the risk of the nation-building endeavor falling apart.

None of this is to say that nation-building cannot be successful in theory or always fails in practice. The 2003 invasion of Iraq, which like that of Afghanistan I condemn bitterly, ended a dictatorship; eighteen years later a democracy nearly broken by corruption, security problems, and the lack of enforcement of personal rights stands in its place, a flawed but modest step in the right direction. However, we cannot deny that attempting to invade and occupy a nation into a democracy carries a high risk of failure. For all the blood spilled — ours and our victims’ — the effort can easily end in disaster. War and new institutions and laws hardly address root causes of national problems that can tear a new country apart, such as religious extremism, longstanding ethnic conflict, and so on. It may in fact make such things worse. This fact should make us question the wisdom of nation-building. As discussed, you can “stay until the nation is ready,” which may mean generations. Then when you leave, the new nation may still collapse, not being as ready as you thought. Thus a senseless waste of lives and treasure. Further, why do we never take things to their logical conclusion? Why tackle one or two brutal regimes and not all the others? If we honestly wanted to use war to try to bring liberty and democracy to others, the U.S. would have to bomb and occupy nearly half the world. Actually “spreading freedom around the globe” and “staying till the job’s done” means wars of decades or centuries, occupations of almost entire continents, countless millions dead. Why do ordinary Americans support a small-scale project, but are horrified at the thought of a large-scale one? That is a little hint that what you are doing needs to be rethought.

Biden — surprisingly, admirably steadfast in his decision despite potential personal political consequences — uttered shocking words to the United States populace: “This decision about Afghanistan is not just about Afghanistan. It’s about ending an era of major military operations to remake other countries.” Let’s hope that is true.


Hegemony and History

The Italian Marxist Antonio Gramsci, writing in the early 1930s while imprisoned by the Mussolini government, theorized that ruling classes grew entrenched through a process called cultural hegemony, the successful propagation of values and norms, which when accepted by the lower classes produced passivity and thus the continuation of domination and exploitation from above. An ideology became hegemonic when it found support from historical blocs, alliances of social groups (classes, religions, families, and so on) — meaning broad, diverse acceptance of ideas that served the interests of the bourgeoisie in a capitalist society and freed the ruling class from some of the burden of using outright force. This paper argues that Gramsci’s theory is useful for historians because its conception of “divided consciousness” offers a framework for understanding why individuals failed to act in ways that aligned with their own material interests or acted for the benefit of oppressive forces. Note this offering characterizes cultural hegemony as a whole, but it is divided consciousness that permits hegemony to function. Rather than a terminus a quo, however, divided consciousness can be seen as created, at least partially, by hegemony and as responsible for ultimate hegemonic success — a mutually reinforcing system. The individual mind and what occurs within it is the necessary starting point for understanding how domineering culture spreads and why members of social groups act in ways that puzzle later historians.

Divided (or contradictory) consciousness, according to Gramsci, was a phenomenon in which individuals believed both hegemonic ideology and contrary ideas based on their own lived experiences. Cultural hegemony pushed such ideas out of the bounds of rational discussion concerning what a decent society should look like. Historian T.J. Jackson Lears, summarizing sociologist Michael Mann, wrote that hegemony ensured “values rooted in the workers’ everyday experience lacked legitimacy… [W]orking class people tend to embrace dominant values as abstract propositions but often grow skeptical as the values are applied to their everyday lives. They endorse the idea that everyone has an equal chance of success in America but deny it when asked to compare themselves with the lawyer or businessman down the street.”[1] In other words, what individuals knew to be true from simply functioning in society was not readily applied to the nature of the overall society; some barrier, created at least in part by the process of hegemony, existed. Lears further noted the evidence from sociologists Richard Sennett and Jonathan Cobb, whose subaltern interviewees “could not escape the effect of dominant values” despite also holding contradictory ones, as “they deemed their class inferiority a sign of personal failure, even as many realized they had been constrained by class origins that they could not control.”[2] A garbage collector knew the fact that he was not taught to read properly was not his fault, yet blamed himself for his position in society.[3] The result of this contradiction, Gramsci observed, was often passivity, consent to oppressive systems.[4] If one could not translate and contrast personal truths to the operation of social systems, political action was less likely.

To understand how divided consciousness, for Gramsci, was achieved, it is necessary to consider the breadth of the instruments that propagated dominant culture. Historian Robert Gray, studying how the bourgeoisie achieved hegemony in Victorian Britain, wrote that hegemonic culture could spread not only through the state — hegemonic groups were not necessarily governing groups, though there was often overlap[5] — but through any human institutions and interactions: “the political and ideological are present in all social relations.”[6] Everything in Karl Marx’s “superstructure” could imbue individuals and historical blocs with domineering ideas: art, media, politics, religion, education, and so on. Gray wrote that British workers in the era of industrialization of course had to be pushed into “habituation” of the new and brutal wage-labor system by the workplace itself, but also through “poor law reform, the beginnings of elementary education, religious evangelism, propaganda against dangerous ‘economic heresies,’ the fostering of more acceptable expressions of working-class self help (friendly societies, co-ops, etc.), and of safe forms of ‘rational recreation.’”[7] The bourgeoisie, then, used many social avenues to manufacture consent, including legal reform that could placate workers. Some activities were acceptable under the new system (joining friendly societies or trade unions) to keep more radical activities out of bounds.[8] It was also valuable to create an abstract enemy, a “social danger” for the masses to fear.[9] The implied warning: without an embrace of the dominant values and norms of industrial capitalism, there would be economic disaster, scarcity, loosening morals, the ruination of family, and more.[10] The consciousness was therefore under assault by the dominant culture from all directions, an assault offering heavy competition for values derived from lived experience, despite the latter’s tangibility.
In macro, Gramsci’s theory of cultural hegemony, to quote historian David Arnold, “held that popular ideas had as much historical weight or energy as purely material forces” or even “greater prominence.”[11] In micro, one can infer, things work the same way in the individual mind, with popular ideas as powerful as personal experience, and thus the presence of divided consciousness.

The concept of contradictory consciousness helps historians answer compelling questions and solve problems. Arnold notes Gramsci’s questions: “What historically had kept the peasants [of Italy] in subordination to the dominant classes? Why had they failed to overthrow their rulers and to establish a hegemony of their own?”[12] Contextually, why wasn’t the peasantry more like the industrial proletariat — the more rebellious, presumed leader of the revolution against capitalism?[13] The passivity wrought from divided consciousness provided an answer. While there were “glimmers” of class consciousness — that is, the application of lived experience to what social systems should be, and the growth of class-centered ideas aimed at ending exploitation — the Italian peasants “largely participated in their own subordination by subscribing to hegemonic values, by accepting, admiring, and even seeking to emulate many of the attributes of the superordinate classes.”[14] Their desires, having “little internal consistency or cohesion,” even allowed the ruling class to make soldiers of peasants,[15] meaning active participation in maintaining oppressive power structures. Likewise, Lears commented on the work of political theorist Lawrence Goodwyn and the question of why the Populist movement in the late nineteenth century United States largely failed. 
While not claiming hegemony as the only cause, Lears argued that the democratic movement was most successful in parts of the nation with democratic traditions, where such norms were already within the bounds of acceptable discussion.[16] Where they were not, where elites had more decision-making control, the “received culture” was more popular, with domination seeming more natural and inevitable.[17] Similarly, Arnold’s historiographical review of the Indian peasantry found that greater autonomy (self-organization to pursue vital interests) of subaltern groups meant hegemony was much harder to establish, with “Gandhi [coming] closest to securing the ‘consent’ of the peasantry for middle-class ideological and political leadership,” but the bourgeoisie failing to do the same.[18] Traditions and cultural realities could limit hegemonic possibilities; it’s just as important to historians to understand why something does not work out as it is to comprehend why something does. As a final example, historian Eugene Genovese found that American slaves demonstrated both resistance to and appropriation of the culture of masters, both in the interest of survival, with appropriation inadvertently reinforcing hegemony and the dominant views and norms.[19] This can help answer questions regarding why slave rebellions took place in some contexts but not others, or even why more did not occur — though, again, acceptance of Gramscian theory does not require ruling out all causal explanations beyond cultural hegemony and divided consciousness. After all, Gramsci himself favored nuance, with coexisting consent and coercion, consciousness of class or lived experience mixing with beliefs of oppressors coming from above, and so on.

The challenge of hegemonic theory and contradictory consciousness lies in parsing out the aforementioned causes. Gray almost summed it up when he wrote, “[N]or should behavior that apparently corresponds to dominant ideology be read at face value as a direct product of ruling class influence.”[20] Here he was arguing that dominant culture was often imparted in indirect ways, not through intentionality of the ruling class or programs of social control.[21] But one could argue: “Behavior that apparently corresponds to dominant ideology cannot be read at face value as a product of divided consciousness and hegemony.” It is a problem of interpretation, and it can be difficult for historians to parse out divided consciousness or cultural hegemony from other historical causes and show which has more explanatory value. When commenting on the failure of the Populist movement, Lears mentioned “stolen elections, race-baiting demagogues,” and other events and actors with causal value.[22] How much weight should be given to dominant ideology and how much to stolen elections? This interpretive nature can appear to weaken the usefulness of Gramsci’s model. Historians have developed potential solutions. For instance, as Lears wrote, “[O]ne way to falsify the hypothesis of hegemony is to demonstrate the existence of genuinely pluralistic debate; one way to substantiate it is to discover what was left out of public debate and to account historically for those silences.”[23] If there was public discussion of a wide range of ideas, many running counter to the interests of dominant groups, the case for hegemony is weaker; if public discussion centered around a narrow slate of ideas that served obvious interests, the case is stronger. A stolen election may be assigned less causal value, and cultural hegemony more, if public debate was demonstrably restricted.
However, the best evidence for hegemony may remain the psychoanalysis of individuals, as seen above, who demonstrate some level of divided consciousness. It is in this demonstrability that contradictory consciousness is key to Gramsci’s overall theory. A stolen election may earn less causal value if such insightful individual interviews can be submitted as evidence.

In sum, for Gramscian thinkers, divided consciousness is a demonstrable phenomenon that powers (and is powered by) hegemony and the acceptance of ruling-class norms and beliefs. While likely not the only cause of passivity toward subjugation, it offers historians an explanation of why individuals do not act in their own best interests, one that can be explored, given causal weight, falsified, or verified (to degrees) in various contexts. Indeed, Gramsci’s theory is powerful in that it has much utility for historians whether true or misguided.

For more from the author, subscribe and follow or read his books.

[1] T.J. Jackson Lears, “The Concept of Cultural Hegemony: Problems and Possibilities,” The American Historical Review 90, no. 3 (June 1985): 577.

[2] Ibid., 577-578.

[3] Ibid., 578.

[4] Ibid., 569.

[5] Robert Gray, “Bourgeois Hegemony in Victorian Britain,” in Tony Bennett, ed., Culture, Ideology and Social Process: A Reader (London: Batsford Academic and Educational, 1981), 240.

[6] Ibid., 244.

[7] Ibid.

[8] Ibid., 246.

[9] Ibid., 245.

[10] Ibid.

[11] David Arnold, “Gramsci and Peasant Subalternity in India,” The Journal of Peasant Studies 11, no. 4 (1984): 158.

[12] Ibid., 157.

[13] Ibid., 157.

[14] Ibid., 159.

[15] Ibid.

[16] Lears, “Hegemony,” 576-577.

[17] Ibid.

[18] Arnold, “India,” 172.

[19] Lears, “Hegemony,” 574.

[20] Gray, “Britain,” 246.

[21] Ibid., 245-246.

[22] Lears, “Hegemony,” 576.

[23] Lears, “Hegemony,” 586.

20% of Americans Are Former Christians

It’s relatively well-known that religion in this country is declining, with 26% of Americans now describing themselves as nonreligious (9% adopting the atheist or agnostic label, 17% saying they are “nothing in particular”). Less discussed is where these growing numbers come from and just how much “faith switching” happens here.

For example, about 20% of citizens are former Christians, one in every five people you pass on the street. Where these individuals go isn’t a foregone conclusion — at times it’s to Islam (77% of American converts to Islam were raised Christian), Hinduism, or other faiths (“Members of non-Christian religions also have grown modestly as a share of the adult population,” the Pew Research Center reports). But mostly it’s to the “none” category, which has thus risen dramatically and is the fastest-growing affiliation. In a majority-Christian country that is rapidly secularizing, all this makes sense. (For context, 34% of Americans — one in three people — have abandoned the belief system in which they were raised, a group that includes atheists, Christians, Buddhists, Muslims, everyone. 4% of Americans used to be nonreligious but are now people of faith.)

While Islam gains new converts at about the same rate it loses members, keeping its numbers steady (as with Hinduism and Judaism), Christianity loses far more adherents than it brings in and is therefore seeing a significant decline (from 77% to 65% of Americans in just 10 years):

19.2% of all adults…no longer identify with Christianity. Far fewer Americans (4.2% of all adults) have converted to Christianity after having been raised in another faith or with no religious affiliation. Overall, there are more than four former Christians for every convert to Christianity.

A similar ratio holds for religion as a whole: “For every person who has left the unaffiliated and now identifies with a religious group more than four people have joined the ranks of the religious ‘nones.’”

This is so even though those raised unaffiliated are somewhat less likely to stay that way than those raised in several major faiths: 53% of Americans raised nonreligious remain so. That beats the 45% of mainline Protestants who stick with their beliefs, but trails the 59% of Catholics and 65% of evangelical Protestants. (Hinduism, Islam, and Judaism again beat everyone — though one shouldn’t argue that high retention rates, or big numbers, prove beliefs true, nor low ones false.) Yet it is simply the case that there are currently many more religious people to change their minds than there are skeptics to change theirs:

The low retention rate of the religiously unaffiliated may seem paradoxical, since they ultimately obtain bigger gains through religious switching than any other tradition. Despite the fact that nearly half of those raised unaffiliated wind up identifying with a religion as adults, “nones” are able to grow through religious switching because people switching into the unaffiliated category far outnumber those leaving the category.

Overall, this knowledge is valuable because the growing numbers of atheists, agnostics, and the unaffiliated are occasionally seen as coming out of nowhere, rather than out of Christianity itself. (And out of other faiths, to far lesser degrees: Muslims are 1% of the population, Jews 2%.) As if a few dangerous, free-thinking families were suddenly having drastically more children, or a massive influx of atheistic immigrants was pouring into the U.S., skewing the percentages. Rather, the 26% of Americans who are nonreligious consists largely of the 20% who have abandoned Christianity. The call’s coming from inside the church.

How Should History Be Taught?

Debate currently rages over how to teach history in American public schools. Should the abyss of racism receive full attention? Should we teach our children that the United States is benevolent in its wars and use of military power — did we not bring down Nazi Germany? Is the nation fundamentally good based on its history, worthy of flying the flag, or is it responsible for so many horrors that an ethical person would keep the flag in the closet or burn it in the streets? Left and Right and everyone in between have different, contradictory perspectives, but to ban and censor is not ideal. Examining the full spectrum of views will help students understand the world they inhabit and the field of history itself.

While historians once imagined their work could be objective, they now typically understand its true nature. “Through the end of the twentieth century,” Sarah Maza writes in Thinking About History, “the ideal of historical objectivity was undermined from within the historical community… The more different perspectives on history accumulated, the harder it became to believe that any historian, however honest and well-intentioned, could tell the story of the past from a position of Olympian detachment, untainted by class, gender, racial, national, and other biases.” Selecting and rejecting sources involves interpretation and subconsciously bent decisions. Historians looking at the same sources will have different interpretations of meaning, which leads to fierce debates in scholarly journals. Teachers are not value-neutral either. All this is taken for granted. “It is impossible to imagine,” Maza writes, “going back to a time when historians imagined that their task involved bowing down before ‘the sovereignty of sources.’” They understand it’s more complex than that: “The history of the American Great Plains in the nineteenth century has been told as a tale of progress, tragedy, or triumph over adversity,” depending on the sources one is looking at and how meaning is derived from them.

But this is a positive thing. It gives us a fuller picture of the past, understanding the experiences of all actors. “History is always someone’s story, layered over and likely at odds with someone else’s: to recognize this does not make our chronicles of the past less reliable, but more varied, deeper, and more truthful.” It also makes us think critically — what interpretation makes the most sense to us, given the evidence offered? Why is the evidence reliable?

If historians understand this, why shouldn’t students? Young people should be taught that while historical truth exists, any presentation of historical truth — a history book, say — was affected by human action and sentiment. This is a reality that those on the Left and Right should be able to acknowledge. Given this fact, and that both sides are after the same goal, to teach students the truth, the only sensible path forward is to offer students multiple interpretations. Read A Patriot’s History of the United States (Schweikart, Allen) and A People’s History of the United States (Zinn). There are equivalent versions of these types of texts for elementary and middle schoolers. Read about why World War II was “The Good War” in your typical textbook, alongside Horrible Histories: Woeful Second World War. Have students read history by conservatives in awe of the greatest country in the whole wide world, as well as by liberals fiercely critical of the nation and many of its people for reserving liberty and democracy for the few far longer than many other countries did. They can study top-down history (great rulers, generals, and leaders drive change) and bottom-up social history (ordinary people coming together drives change). Or compare primary sources from the late nineteenth century to the early twentieth demanding or opposing women’s rights. Why not? This gives students a broader view of the past, shows them why arguments and debates over history exist, and helps them understand modern political ideologies.

Most importantly, as noted, it helps students think critically. Many a teacher has said, “I don’t want to teach students what to think, but rather how to think.” This doesn’t seem possible without exploring varying perspectives and asking which one a young person finds most convincing and why. One can’t truly practice the art of thinking without one’s views being challenged, being forced to justify the maintenance of a perspective or a deviation based on newly acquired knowledge. Further, older students can go beyond different analyses of history and play around with source theories: what standard should there be to determine if a primary source is trustworthy? Can you take your standard, apply it to the sources of these two views, and determine which is more solid by your metric? There is much critical thinking to be done, and it makes for a more interesting time for young people.

Not only does teaching history in this way reflect the professional discipline, and greatly expand student knowledge and thought, it also aligns with the nature of public schools, or with what the general philosophy of public schools should be. The bent of a history classroom, or the history segment of the day in the youngest grades, is determined by the teacher, but also by the books, curricula, and standards approved or required by the district, the regulations of the state, and so forth. So liberal teachers, districts, and states go their way and conservative teachers, districts, and states go theirs. But who is the public school classroom for, exactly? It’s for everyone — which necessitates some kind of openness to a broad range of perspectives (public universities are the same way, as I’ve written elsewhere).

This may be upsetting and sensible at the same time. On the one hand, “I don’t want my kid, or other kids, hearing false, dangerous ideas from the other side.” On the other, “It would be great for my kid, and other kids, to be exposed to this perspective when it so often is excluded from the classroom.” Everyone is happy, no one is happy. Likely more the latter. First, how can anyone favor bringing materials full of falsities into a history class? Again, anyone who favors critical thinking. Make that part of the study — look at the 1619 Project and the 1776 Report together, and explore why either side finds the other in error. Second, how far do you go? What extreme views will be dignified with attention? Is one to bring in Holocaust deniers and square their arguments up against the evidence for the genocide? Personally, this writer would support that: what an incredible exercise in evaluating and comparing the quantity and quality of evidence (and “evidence”). Perhaps others will disagree. But none of this means there can’t be reasonable limits to presented views. If an interpretation or idea is too fringe, it may be a waste of time to explore it. There is finite time in a class period and in a school year. The teacher, district, and so on will have to make the (subjective) choice (no one said this was a perfect system) to leave some things out and focus on bigger divides. If Holocaust denial is still relatively rare, controversy over whether the Civil War occurred due to slavery is not.

Who, exactly, is afraid of pitting their lens of history against that of another? Probably he who is afraid his sacred interpretation will be severely undermined, she who knows her position is not strong. If you’re confident your interpretation is truthful, backed by solid evidence, you welcome all challengers. Even if another viewpoint makes students think in new ways, even pulling them away from your lens, you know your lens imparted important knowledge and made an impression. As the author of a book on racism used in high schools and colleges, what do I have to fear when some conservative writes a book about how things really weren’t so bad for black Kansas Citians over the past two centuries? By all means, read both books, think for yourself, decide which thesis makes the most sense to you based on the sources — or create a synthesis of your own. The imaginary conservative author should likewise have no qualms about such an arrangement.

I have thus far remained fairly even-handed, because Leftists and right-wingers can become equally outraged over very different things. But here I will wonder whether the Right would have more anxiety over a multiple-interpretation study specifically. Once a student has learned of the darkness of American history, it is often more difficult to be a full-throated, flag-worshiping patriot. This risk will drive some conservatives berserk. Is the Leftist parent equally concerned that a positive, patriotic perspective on our past alongside a Zinnian version will turn her child into someone less critical, more favorable to the State, even downplaying the darkness? I’m not sure if the Leftist is as worried about that. My intuition, having personally been on both sides of the aisle, is that the risk would be more disturbing for conservatives — the horrors still horrify despite unrelated positive happenings, but the view of the U.S. as the unequivocal good guy is quickly eroded forever. Hopefully I am wrong and that is the mere bias of a current mindset talking. Either way, this pedagogy, the great compromise, is the right thing to do, for the reasons outlined above.

In conclusion, we must teach students the truth — and Americans will never fully agree on what that is, but the closest one could hope for is agreement that this nation and its people have done horrific things as well as positive things. Teaching both is honest and important, and that’s what students will see when they examine different authors and documents. In my recent review of a history text, I wrote that the Left “shouldn’t shy away from acknowledging, for instance, that the U.S. Constitution was a strong step forward for representative democracy, secular government, and personal rights, despite the obvious exclusivity, compared to Europe’s systems.” Nor should one deny the genuine American interest in rescuing Europe and Asia from totalitarianism during World War II. And then there are inventions, art, scientific discoveries, music, and many other things. The truth rests in nuance, as one might expect. James Baldwin said that American history is “more beautiful and more terrible than anything anyone has ever said about it.” (What nation does not have both horrors and wonderful things in its history? Where would philosophy be without the German greats?) I’ve at times envisioned writing a history of the U.S. through a “hypocrisy” interpretation, but it works the same under a “mixed bag” framing: religious dissenters coming to the New World for more freedom and immediately crushing religious dissenters, the men who spoke of liberty and equality who owned slaves, fighting the Nazi master race with a segregated army, supporting democracy in some cases but destroying it in others, and so on. All countries have done good and bad things.

That is a concept the youngest children — and the oldest adults — can understand.

Big Government Programs Actually Prevent Totalitarianism

There is often much screaming among conservatives that big government programs — new ones like universal healthcare, universal college education, or guaranteed work, and long-established ones like Social Security, Medicaid, and Medicare — somehow lead to dictatorship. There is, naturally, no actual evidence for this. The imagined correlation is justified with nothing beyond “that’s socialism, which always becomes totalitarianism,” an ignorance already addressed. The experience of advanced democracies around the world, and indeed the U.S. itself, suggests big government programs, run by big departments with big budgets and big staffs helping tens of millions of citizens, can happily coexist alongside elected governing bodies and presidents, constitutions, and human rights, as one would expect.

Threats to democracy come from elsewhere — but what’s interesting to consider is how conservatives have things completely backward. Big government programs — the demonstration that one’s democracy is a government “for the people,” existing to meet citizen needs and desires — are key to beating back the real threats to a republic.

In a recent interview with The Nation, Bernie Sanders touched on this:

“Why it is imperative that we address these issues today is not only because of the issues themselves—because families should not have to spend a huge proportion of their income on child care or sending their kid to college—but because we have got to address the reality that a very significant and growing number of Americans no longer have faith that their government is concerned about their needs,” says the senator. “This takes us to the whole threat of Trumpism and the attacks on democracy. If you are a worker who is working for lower wages today than you did 20 years ago, if you can’t afford to send your kid to college, etc., and if you see the very, very richest people in this country becoming phenomenally rich, you are asking yourself, ‘Who controls the government, and does the government care about my suffering and the problems of my family?’”

Sanders argues that restoring faith in government as a force for good is the most effective way to counter threats to democracy.

And he’s right. Empirical evidence suggests economic crises erode the rule of law and faith in representative democracy. Depressions are not the only force that pushes in this direction, but they are significant and at times a killing blow to democratic systems. Unemployment, low wages, a rising cost of living — hardship and poverty, in other words — drive citizens toward extreme parties and voices, including authoritarians. Such leaders are then elected to office, and begin to dismantle democracy with support of much of the population. Europe in the 1930s is the oft-cited example, but the same has been seen after the global recession beginning in 2008, with disturbing outgrowths of recent declining trust in democracy: the success of politicians with demagogic and anti-democratic bents like Trump, hysteria over fictional stolen elections that threatens to keep unelected people in office, and dangerous far-right parties making gains in Europe. The Eurozone and austerity crisis, the COVID-induced economic turmoil, and more have produced similar concerns.

What about the reverse? If economic disaster harms devotion to real democracy and politicians who believe in it, does the welfare state increase support for and faith in democracy? Studies also suggest this is so. Government tackling poverty through social programs increases satisfaction with democratic systems! The perception that inequality is rising and welfare isn’t doing enough to address it does the exact opposite. A helping hand increases happiness and is expected from democracies, inherently linking views of democracy and views of redistribution. If we wish to inoculate the citizenry against authoritarian candidates and anti-democratic practices within established government, shoring up loyalty to democracy through big government programs is crucial.

It is as Sanders said: the most important thing for the government to do to strengthen our democracy and even heal polarization (“Maybe the Democrats putting $300 per child per month in my bank account aren’t so evil”), is simply to help people. To work for and serve all. Healthcare, education, income support, jobs…such services help those on the Right, Left, and everyone in between. This should be done whether there is economic bust or boom. People hold fast to democracy, a government of and by the people, when it is clearly a government for the people. If we lose the latter, so too the former.

COVID Proved Social Conditions Largely Determine Our Health

In the past year, it has been heavily impressed upon Kansas Citians that one’s health is to a significant degree determined by factors beyond one’s control. The COVID-19 era is a key moment to further break down the reactionary notion that personal health choices are all that stands between an individual and optimal physical and mental well-being. It’s broadened our understanding of how health is also a product of social conditions.

The first and most elementary fact to note is that viruses, while often preying on vulnerable populations such as the elderly, are not entirely discriminatory. They end the lives of the young and healthy as well. Regardless of one’s habits of eating, exercise, or not smoking, random exposure to illnesses new or old as one shops for groceries or rides in an Uber helps introduce the point: The environment often makes a mockery of our personal choices, as important as those are.

The family you are born into, where you grow up, and other factors beyond your control — and often your own awareness — have a large impact on your development and health as a child, which in turn impacts your health as an adult. (And the environment you happen to be in continues to affect you.) Poverty, extremely stressful on the mind and body in many ways, is the ultimate destructive circumstance for children and adults alike. Take the disturbing life expectancy differences between the poor and the better-off, for instance. In Kansas City’s poorest ZIP codes, which are disproportionately black, you can expect to live 18 fewer years on average compared to our richest, whitest ZIP codes, as Flatland reported on June 22. Poor families are less likely to have health care offered by an employer or be able to afford it themselves. They live in social conditions that include more violence or worse air and water pollution. They can at times only afford housing owned by negligent landlords slow to take care of mold, and cope with a million other factors.

During the pandemic, what serious observers of the social determinants of health predicted came true: Black Kansas Citians were hammered by COVID-19. Here we feel, today, the cold touch of slavery and Jim Crow, which birthed disproportionate poverty, which nurtured worse health, which resulted in Black Kansas Citians being more likely to catch coronavirus and die from it, as The Star reported even in the early stages of the pandemic. Worse still, on Feb. 24, the paper noted that richer, whiter ZIP codes — the areas of less urgent need — were getting disproportionately more vaccines than poorer areas with more Black residents. The vaccines were first shipped by the state to health centers that were convenient for some but distant from others.

Imagine history and race playing a role in your health, how soon you could get a shot. Imagine transportation options and where you live being factors. Likewise, imagine the kind of job you have doing the same: Lower-income workers are more likely to have front-line jobs at restaurants and grocery stores, where you can catch the virus. The privileged, better-off often work from home.

Whether it is drinking water you don’t know is unsafe or working at a job that requires much human contact during a pandemic, the determinants of health stretch far beyond exercising, eating right, and choosing not to smoke. To reflect on this fact is to understand a moral duty. If social conditions affect the health of individuals and families, it is urgent to change social conditions — to build a decent society, one without poverty and the many horrors that flow from it.

In this moment, one important way to help move toward this goal is to urge the U.S. House to pass the reconciliation budget that just passed the Senate, to extend the direct child tax credit payments to families, boldly expand education and health care, and more. Onward, a better world awaits.

This article first appeared in The Kansas City Star.

Proof God is a Liberal Atheist

Sometimes natural disasters are presented as proof of God’s judgement, as when George Floyd’s mural is struck by lightning or hurricanes arrive because of the gays. God exists, and he’s an angry conservative. Naturally, this line of thinking is dreadful, as the weather also provides clear signs God is a Leftist and a nonbeliever.

What else could one make of God sending lightning to burn down statues of Jesus, such as the King of Kings statue in Monroe, Ohio? Or to chip off Jesus’ thumb? Or to strike Jesus-actor Jim Caviezel while he was filming the Sermon on the Mount scene in The Passion of the Christ? What of the bible camps destroyed by wildfires? The solitary crosses in the middle of nowhere erased by flame, or those on church steeples eradicated by lightning? These incredible signs can be interpreted any way you like — that’s the fun of making stuff up. God prefers statues of Christ smaller than 62 feet, he doesn’t like Caviezel’s acting, the camp kids didn’t pray long enough, these were all just innocent weather events with no supernatural power or mind behind them, like lightning or fire scorching an empty field or a tree in the woods, and so forth. Perhaps God doesn’t want you to be a Christian, he wants you to be a traditional omnist, recognizing the truth of all religions, not taking a side with one faction. Perhaps he wants you to be an atheist because he’s a big joker and only skeptics get into heaven. Perhaps the Judeo-Christian god does not exist, and Allah or Zeus is displaying his wrath against a false faith. That’s the problem with taking natural disasters and assigning meaning and interpretation as proof of something — other people can do it too, and their interpretation, their “proof,” is just as solid (read: worthless) as your own. No critical thinker would engage in this sort of argumentation.

Not only do such remarkable miracles prove God is anti-Christian, others clearly reveal he’s a liberal, and with a delightful sense of humor to boot. How else to explain the pastor who declared natural disasters to be God’s punishment for homosexuality seeing his house destroyed by flood? Was the pastor secretly gay? Or just collateral damage, an innocent bystander, in God’s wrathful fit against LGBTQ people? No, most obviously, God was telling him to cut it out: God has no problem with homosexuality. This is like the pastor who thought COVID was brought about by sex outside marriage and then died from the virus: it wasn’t that the preacher was right, falling victim to a plague caused by others, it’s that God has no issues with premarital intercourse and thus did not send a calamity as retribution. Even more amazingly, religious conservatives like Anita Bryant once blamed a California drought on gays, but the dry spell ended, it began to rain, the day after Harvey Milk, a gay icon, was elected to San Francisco office. What a sign! Same for when an Alabama cop was struck by lightning a week after the Alabama house passed a restrictive bill against Black Lives Matter protests and while the Alabama senate was considering doing the same. And wasn’t the U.S. hit by COVID, double-hurricanes, and murder hornets soon after Trump was acquitted by the GOP-led Senate in early 2020? That can’t be a coincidence. Hurricanes, by the way, tend to hit southern conservative states of high religiosity — perhaps that doesn’t have anything to do with U.S. history and proximity to the gulf, but rather it’s punishment for rightwing policies, not queerness and abortion. Finally, recall when a Focus on the Family director asked everyone to pray for rain during the Democratic National Convention in 2008, only for God to send a hurricane that disrupted the Republican National Convention? Finding signs and proof that God is a liberal isn’t difficult, given how weather functions.

Admittedly, the stories proving God is a leftwing, anti-religious fellow are not as common, given that it’s mostly religious conservatives who turn off their thinking caps, see providence behind every tornado, and write stories about it. When the Left or skeptics do this, it’s usually tongue-in-cheek, as it is here.

Now, it’s true that some events and their interpretations align better with what’s in holy books. The gods of the bible and Qur’an want you to be a believer, not an atheist. Other things rely on human interpretation and choosing which parts of the book to take seriously: is gay marriage intolerable because being gay is an abomination, or just fine because we are to love one another and do unto others? Yet degree of alignment doesn’t actually make a claim that X disaster is proof of God or Allah and his rightwing judgement more convincing. The holy books could easily be fictional, as bad as the weather at proving a deity exists and revealing what its values are. Thus, one is free to imagine any supernatural being one wishes, and ascribe any values to him or her based on natural disasters. Any idea is just as valid as the next.

The point is made. Not only can a weather event be interpreted in countless ways (was the George Floyd mural struck because God is racist, because he heartlessly approves of Floyd’s murder, because he dislikes the Black Lives Matter movement in general, because he finds street art tacky, and so on), but it’s also obvious that various weather events will give contradictory messages about what the higher power believes and favors. The faithful can see and believe any sign they like, but bad arguments garner few converts.

For more from the author, subscribe and follow or read his books.

Is Time the Only Cure for COVID Foolishness?

As August 2021 began, half the U.S. population, over 165 million people, was fully vaccinated against COVID-19. There have been 615,000 confirmed deaths; the actual number, given the national excess mortality rate since the start of 2020, is likely double the official figure. Over the 12 months since last August, 2.5 million people were hospitalized, many leaving with lasting medical problems. All the while, protests and foaming at the mouth over mask and vaccine mandates continue; half the population has refused or delayed the vaccine, a group disproportionately Republican (by some 20 percentage points).

Attempting to convince the conspiracy theorists, bullheaded conservatives, and those concerned over how (historically) fast the vaccine breakthrough occurred is of course still the moral and pressing thing to do. This piece isn’t an exercise in fatalism, despite its headline. However, great frustration exists: if the hesitant haven’t been convinced by now, what will move the needle? With over a year and a half to absorb the dangers of COVID, deadly and otherwise, and eight months to observe a vaccine rollout that has given 1.2 billion people globally highly effective protection, with only an infinitesimally small percentage seeing any side effects (similar to everyday meds), what could possibly be said to convince someone to finally listen to the world’s medical and scientific consensus, to listen to reason? People have been given a chance to compare the disease to the shots (the unvaccinated are 25 times more likely to be hospitalized from COVID and 24 times more likely to die, with nearly all [97, 98, 99%] of COVID deaths now among the unprotected population), but that requires a trust in the expert consensus and data and trials and peer-reviewed research and all those things that make American stomachs churn. Giving people accurate information and sources can even make them less likely to see the light! There is, for some bizarre reason, more comfort and trust in the rogue doctor peddling unfounded nonsense on YouTube.

It may be of some comfort then to recognize that the insanity will surely decrease as time goes on. It’s already occurring. The most powerful answer to “what will move the needle?” is “personal impact” — as time passes, more people will know someone hospitalized or wiped from existence by the disease, and also know someone who has been vaccinated and is completely fine. There will be more family members who get the vaccine behind your back and more friends and acquaintances you’ll see online or in the media expressing deep regret from their ICU hospital beds. You may even be hospitalized yourself. Such things will make a difference. States currently hit hardest by the Delta variant and seeing overall cases skyrocket — the less vaccinated states — are also witnessing increases in vaccination rates. Even conservative media outlets and voices are breaking under the weight of reason, finally beginning to promote the vaccine and changing viewers’ minds, while naturally remaining in Absurdsville by pretending their anti-inoculation hysteria never occurred and blaming Democrats for vaccine hesitancy. Eventually, falsities and mad beliefs yield to science and reason, as we’ve seen throughout history. True, many will never change their minds, and will go to their deaths (likely untimely) believing COVID to be a hoax, or exaggerated, or less risky than a vaccine. But others will yield, shaken to the core by loved ones lost to the virus (one-fourth to one-third of citizens at least know someone who died already) or vaccinated without becoming a zombie, or even by growing ill themselves.

To say more time is needed to end the foolishness is, admittedly, in part to say more illness and death are needed. As stated, the more people a hesitant person knows who have grown ill or died, the more likely the hesitant person is to get his or her shots. A terrible thing to say, yet true. That is why we cannot rest, letting time work on its own. We must continue trying to convince people, through example, empathy (it’s often not logic that changes minds, but love), hand-holding, and other methods offered by psychologists. Lives can be saved. And to convince someone to get vaccinated is not only to protect them and others against COVID; it suddenly creates a person in someone else’s inner circle who has received the shots, perhaps helping the behavior spread. Both we and Father Time can make sure hesitant folk know more people who have been vaccinated, the more pleasant piece of time’s function.

Hopefully, our experience with coronavirus will prepare us for deadlier pandemics in the future, in terms of our behavior, healthcare systems, epidemiology, and more. As bad as COVID-19 is, as bad as Delta is, humanity was exceptionally lucky. The disease could have been far deadlier, far more contagious; the vaccine could have taken much longer, and been less effective. We’ve seen four million deaths worldwide, but even with this virus evolving and worsening, we’ll likely see nothing like the 50 million dead from the 1918 pandemic. Some see the rebellion against masks, lockdowns, and vaccines as a frightening sign: such insanity will spell absolute catastrophe when a deadlier virus comes around. This writer has always suspected (perhaps only hoped) that view to be a bit backward. A deadlier virus would likely mean less rebellion (as would a virus you could see on other people, something more visually horrifying like leprosy). It’s the relative tameness of COVID that allows for the high degree of madness. Admittedly, there was anti-mask resistance during the 1918 crisis, but there could nonetheless be an inverse correlation between the seriousness of an epidemic and the willingness to engage in suicidal foolishness. That aligns with the idea that the more people you lose in your inner circle, the more likely you are to give in and visit your local health clinic. Let’s hope science and reason reduce the opportunities to test this correlation hypothesis.


Famous Bands That Sang About Kansas City

One’s city pride quickly swells upon perusing Spotify for songs about Kansas City. There’s much to hear, from the gems of local talent (“Get Out – The KC Streetcar Song,” Kemet the Phantom) to the fantastic artists from afar (“Train From Kansas City,” Neko Case) to the biggest names in music history:

The Beatles sang of Kansas City beginning in 1961 with “Kansas City / Hey-Hey-Hey-Hey,” which they took from Little Richard’s work of the late 1950s, itself a version of the 1952 classic “Kansas City” by Leiber and Stoller (“I’m going to Kansas City / Kansas City here I come…”). Other famous musicians to record Leiber and Stoller’s song include Willie Nelson, James Brown, and Sammy Davis Jr.

Frank Zappa performed the “Kansas City Shuffle.” Van Morrison had “The Eternal Kansas City”: “Dig your Charlie Parker / Basie and Young.” Yusuf (Cat Stevens) sang “18th Avenue (Kansas City Nightmare).” Clearly, and sadly, he did not have a pleasant stay.

Jefferson Airplane was “gonna move to Kansas City”; for Rodgers and Hammerstein, in their 1943 musical Oklahoma!, everything was “up to date in Kansas City.” More recently, The New Basement Tapes, The Mowgli’s, and of course Tech N9ne have joined in.

I have created a public playlist on Spotify of four hours of songs about KC. It has a bit of everything, from the jazz and blues of yesteryear to the folk and Americana and hip hop of today. It includes famous artists and the obscure, and everyone in between, with some repeats so one can hear different artists tackle the same song. “Kansas City Hornpipe” by Fred Morrison and “Kansas City, Missouri” by Humbird are particularly enjoyable. Some songs, naturally, are better than others, but the most subpar or campy of Spotify’s selection have been excluded (many local artists go nowhere for a reason). Finally, and unfortunately, one of the best hip hop songs about the city, Center of Attention’s “Straight Outta Kauffman,” is not available on Spotify, so it must be listened to elsewhere.

Find some of that “Kansas City wine” (Leiber and Stoller) and enjoy.


If Your Explanation Implies There’s Something Wrong With Black People, It’s Racist

Conservative whites who consider themselves respectable typically do not voice the explicitly racist causal explanations for higher rates of black poverty, violent crime, academic struggle, and so on. Ideas of blacks being naturally lazier, more aggressive or deviant, and less intelligent than white people are largely unspeakable today. Instead, these things are simply implied, wrapped in more palatable or comfortable language so one can go about the day guilt-free. This isn’t always conscious. It’s startling to observe that such whites, probably in most cases from what this writer has seen, do not realize their beliefs imply racist things. This is simply cognitive dissonance; it’s people believing with every fiber of their being that they are not racist, and therefore any explanation they believe cannot be racist, no matter how obvious it actually is to observers.

A few examples:

The problem is black culture. You don’t want to say there’s something wrong with black people. Instead, say there’s something wrong with black culture! This black culture is one of violence and revenge, of getting hooked on welfare instead of looking for work, of fathers abandoning mothers and children to create broken, single-parent homes, and so on. But obviously, to say there’s something wrong with black culture is to say there is something wrong with black people. Where, after all, did this “culture” come from? To respectable conservative whites, who should always be asked that very question immediately, it comes from black people themselves. Such whites won’t include an educated explanation of how history, environment/social conditions, and public policies produce “culture” — how recent American history birthed disproportionate poverty, how poverty breeds violence and necessitates welfare use, how a government’s racist War on Drugs and the crimes and violent deaths bred by that very poverty might mean more families without fathers. They surely won’t point out, as a nice comparison, that the white American culture of yesteryear that placed the age of sexual consent for girls at 10 years old, or a white European culture of executing those who questioned the Christian faith, obviously did not stem from whiteness itself, having nothing to do with Caucasian ethnicity — so what does “black culture” have to do with blackness? Are these not human beings responding in predictable ways to the poverty of the place or the theology of the time? People who think in such rational ways wouldn’t use the “problem is black culture” line in the first place. Nay, it is black folk themselves who create this culture, meaning something is terribly wrong with the race, with blacks as people, something linked to biology and genetics — as uncomfortable as that will be for some whites to hear, it is the corner they have readily backed themselves into. After all, white people do not have this “culture.” Why? Are whites superior?

It’s all about personal choice. Another popular one. The problem is black people are making the wrong choices. They have free will, why don’t they choose peace over violence, choose to look harder for a job or a higher-paying gig, study harder in school, just go to college? The response is again painfully obvious. If racial discrepancies all just boil down to personal choices, this is simply to say that blacks make worse personal choices than white people. This is so self-evident that the temptation to throw this article right in the garbage is overwhelming. To whites, blacks are making choices they wouldn’t personally make. There is no consideration of how environment can affect you. Take whether or not you flunk out of college. You hardly choose where or the family into which you are born, and growing up in a poor home affects your mental and physical development, typically resulting in worse academic performance than if you’d been born into a wealthy family; likewise, children don’t choose where they are educated: wealthy families can afford the best private schools and SAT tutoring, black public schools are more poorly funded than white public schools, and so on. Such things affect your ability to graduate college, or even gain admission. Nor is it considered how environment impacts your decisions themselves. For instance, witnessing violence as a child makes you more likely to engage in it, to choose to engage in it. Nor is there a thought to how social settings affect the choices you’ll even face in your life — if you live in a wealthy area without much crime, for instance, you are less likely to experience peer pressure from a friend to commit an illegal act (just as you’re less likely to see violence and thus engage in it later). One can be more successful in life with fewer opportunities to make bad choices in the first place! But none of that can be envisioned. For respectable conservative whites, there is something wrong with black people, something defective about their decision-making or moral character. White people, in contrast, make better choices, the right choices, and are thus wealthier, safer, better educated, families intact. Again, the implication of inferiority is front and center.

Good parenting is really the key. It all comes down to parenting. If black parents stuck together, emphasized to their kids the importance of education, a hard work ethic, the family unit, and turning the other cheek, all these racial disparities could come to an end. The disgusting implications are no doubt clear to the reader already, meaning we need not tarry here. To pin social problems on poor parenting, without any consideration of outside factors, is to simply say black humans are inherently worse parents than white humans. Whatever the problem with black moms and dads, white ones are happily immune.

These implications must be exposed whenever one hears them, and the conversation turned away from race and biology and toward history and socio-economics. Toward the truth.

The racial wealth gap in the United States was birthed by the horrors done to blacks: slavery meant black people, apart from some freemen, started with nothing in 1865, whereas whites had begun wealth accumulation centuries before, a colossal head start; Jim Crow oppression meant another century of being paid lower wages, denied even menial employment and certainly high-paying jobs, hired last and fired first, kept out of universities, denied home loans or offered worse terms, taught in poorly funded schools, kept out of high-value neighborhoods through violence and racial covenants, and more; studies show that even today racism still affects wealth accumulation in significant ways. By studying history in a serious manner, we begin to understand why the racial wealth gap exists and why it has not yet closed — not because there’s something defective about black people, but because, beyond today’s challenges with racism, there simply has not been enough time for it to close. People who lived through the Jim Crow era, some mere grandchildren of slaves, are still alive today. This is hardly ancient history; it’s two or three generations.

The poverty that persists does to blacks what it does to human beings of all races. It exacerbates crime (not only theft or the drug trade as ways of earning more income, but from the stress it puts on the brain, equivalent to sleep deprivation, causing people to act in ways they simply would not have had they been in more affluent settings), it hurts the performance of students, and it leads to more men confined to the cell or the coffin and thus not at home, among other challenges. Bad public policies, from city underinvestment in the black parts of town to the War on Drugs, make things worse. It is right to be a good parent, to make wise choices, and to value a positive culture — but for whites to imagine that some abandonment of these things by our black neighbors is the root cause of racial disparities, with no discussion of history and social conditions and how they persist and affect human beings, is racist and rotten to the core. Respectable conservative whites (and some black conservatives who focus exclusively on parenting, choices, and culture) may not notice or be conscious of such implications, but this can be made temporary.

If we consider ourselves to be moral creatures, it is our responsibility to give these rosier modern framings of old racist ideas no quarter.


Were Hitler and the Nazis Socialists? Only Kind Of

How socialist were the National Socialists?

We know there will be times when an organization or national name doesn’t tell the whole story. As Jacobin writes, how democratic is the Democratic People’s Republic of (North) Korea? It’s hardly a republic either. (Hitler once asked, “Is there a truer form of Democracy” than the Reich — dictators, apparently, misuse terms.) Or look to the Likud, the National Liberals, one of Israel’s major conservative parties. And if the Christian Knights of the Ku Klux Klan were Christians, do they represent Christianity at large? So let us examine the Nazis and see if they fall into this category.

The first task, as always, is to define socialism. Like today, “socialism” and “communism” were used by some in the early 20th century to mean the same thing (communism) and by others to mean different things. As a poet from the 1880s put it, there are indeed “two socialisms”: the one where the workers own their workplaces and the one where the government owns the workplaces. We must remember these different term uses, but to make it easy we will simply be open to both: “Were the Nazis socialists?” can therefore mean either. There is more to it than that, of course, such as direct democracy and large government programs. But these additions are not sufficient qualifiers. There will be whining that the Nazi regime had large government programs and thus it was socialist, but if that’s the criteria then so were all the nations fighting the Nazis, including the U.S. (remember our huge public jobs programs and Social Security Act of the era?). Advanced societies tend to have sizable State services — and you can have these things without being truly socialist. If one has even a minimal understanding of socialist thought and history, then the conclusion that no country can earnestly be called socialist without worker or State ownership of business is hardly controversial. To speak of socialism was to speak of the elimination of private ownership of the means of production (called “private property,” businesses), with transfer of ownership away from capitalists to one of the two aforementioned bodies.

The German Workers’ Party, founded in 1919 in Munich by Anton Drexler and renamed the National Socialist German Workers’ Party in 1920, included actual socialists. Gregor and Otto Strasser, for instance, supported nationalization of industry — it’s simply not accurate to say the rhetoric of ending capitalism, of building socialism, of revolution, workers, class, exploitation, and so on was solely propaganda. It was a mix of honest belief and empty propagandistic promises to attract voters in a time of extreme poverty and economic crisis, all depending on which Nazi was using it, as we will see. Socialists can be anti-Semites, racists, patriots, and authoritarians, just like non-socialists and people of other belief systems. (I’ve written more elsewhere about the separability of ideologies and horrific things, if interested, typically using socialism and Christianity as examples. The response to “Nazis were socialists, so socialism is pure evil” is of course “Nazis were also Christians — Germany was an extremely religious nation — so is Christianity also pure evil? If the Nazis distorted Christianity, changing what it fundamentally was with their ‘Positive Christianity,’ advocated for in the Nazi platform, is true Christianity to be abandoned alongside true socialism if that has been distorted as well?”)

The meaning of socialism was distorted by Hitler and other party members. To Hitler, socialism meant the common weal, the common good for a community. While rhetorically familiar, this was divorced from ideas of worker or State ownership of the means of production. In a 1923 interview with The Guardian‘s George Sylvester Viereck, Hitler made this clear. After vowing to end Bolshevism (communism), Hitler got the key question:

“Why,” I asked Hitler, “do you call yourself a National Socialist, since your party programme is the very antithesis of that commonly accredited to socialism?”

“Socialism,” he retorted, putting down his cup of tea, pugnaciously, “is the science of dealing with the common weal. Communism is not Socialism. Marxism is not Socialism. The Marxians have stolen the term and confused its meaning. I shall take Socialism away from the Socialists.

“Socialism is an ancient Aryan, Germanic institution. Our German ancestors held certain lands in common. They cultivated the idea of the common weal. Marxism has no right to disguise itself as socialism. Socialism, unlike Marxism, does not repudiate private property. Unlike Marxism, it involves no negation of personality, and unlike Marxism, it is patriotic.

“We might have called ourselves the Liberal Party. We chose to call ourselves the National Socialists. We are not internationalists. Our socialism is national. We demand the fulfilment of the just claims of the productive classes by the state on the basis of race solidarity. To us state and race are one.”

Hitler’s socialism, then, had to do with the common good of one race, united as a nation around ancestral Aryan land and identity. What socialism meant to Hitler and other Nazis can only be understood through the lens of racial purity and extreme nationalism. They come first, forming the colander, and everything else is filtered through. In the same way, what Christianity meant to Hitler was fully shaped by these obsessions: it was a false religion invented by the Jews (who Jesus fought!), but could at the same time be used to justify their destruction. Bolshevism was likewise labeled a sinister Jewish creation (was not Marx ethnically Jewish?): “The Jewish doctrine of Marxism rejects the aristocratic principle of Nature…” Further, when Hitler criticized capitalists, it was often specific: Germany needed “delivery from the Jewish capitalist shackles,” the Jews being to blame for economic problems. A consumed conspiratorial bigot, and often contradictory and nonsensical, he would attack both sides of any issue if they smacked to him of Judaism. But we see Hitler’s agreement that National Socialism was the “antithesis of that commonly accredited to socialism”: there would still be private property, private ownership of the means of production; the internationalism and the racial diversity and tolerance at times preached by other socialists would be rejected. (So would class conflict: “National Socialism always bears in mind the interests of the people as a whole and not the interests of one class or another.”) Racial supremacy and the worship of country — elements of the new fascism, and the latter a typical element of the Right, not traditional socialism — were in order. (If these things were socialism, then again the nations fighting Germany were socialist: Jim Crow laws in America were used as models by Nazi planners, there existed devotion to American exceptionalism and greatness, and so forth.)

Hitler often repeated his view. On May 21, 1935:

National Socialism is a doctrine that has reference exclusively to the German people. Bolshevism lays stress on international mission. We National Socialists believe a man can, in the long run, be happy only among his own people… We National Socialists see in private property a higher level of human economic development that according to the differences in performance controls the management of what has been accomplished enabling and guaranteeing the advantage of a higher standard of living for everyone. Bolshevism destroys not only private property but also private initiative and the readiness to shoulder responsibility.

In a December 28, 1938 speech he declared:

A Socialist is one who serves the common good without giving up his individuality or personality or the product of his personal efficiency. Our adopted term ‘Socialist’ has nothing to do with Marxian Socialism. Marxism is anti-property; true socialism is not. Marxism places no value on the individual or the individual effort, or efficiency; true Socialism values the individual and encourages him in individual efficiency, at the same time holding that his interests as an individual must be in consonance with those of the community.

He who believed in “Germany, people and land — that man is a Socialist.” Otto Strasser, in his 1940 book Hitler and I, wrote that Hitler told him in 1930 that the revolution would be racial, not economic; that democracy should not be brought into the economic sphere; and that large corporations should be left alone; to which Strasser replied, “If you wish to preserve the capitalist regime, Herr Hitler, you have no right to talk of socialism. For our supporters are socialists, and your programme demands the socialisation of private enterprise.” Hitler responded:

That word ‘socialism’ is the trouble… I have never said that all enterprises should be socialised. On the contrary, I have maintained that we might socialise enterprises prejudicial to the interests of the nation. Unless they were so guilty, I should consider it a crime to destroy essential elements in our economic life… There is only one economic system, and that is responsibility and authority on the part of directors and executives. That is how it has been for thousands of years, and that is how it will always be. Profit-sharing and the workers’ right to be consulted are Marxist principles. I consider that the right to exercise influence on private enterprise should be conceded only to the state, directed by the superior class… The capitalists have worked their way to the top through their capacity, and on the basis of this selection, which again only proves their higher race, they have a right to lead. Now you want an incapable government council or works council, which has no notion of anything, to have a say; no leader in economic life would tolerate it.

Otto Strasser and his brother grew disillusioned that the party wasn’t pursuing actual socialism, and upset that Hitler supported and worked with big business, industrialists, capitalists, German princes. Otto was expelled from the party in 1930. Gregor resigned two years later.

The referenced National Socialist Program, or 25-point Plan, of 1920 demanded the “nationalization of all enterprises (already) converted into corporations (trusts),” “profit-sharing in large enterprises,” “communalization of the large department stores, which are to be leased at low rates to small tradesmen,” and nationalization “of land for public purposes.” Hitler clarified that since “the NSDAP stands on the platform of private ownership,” the nationalization of land for public use “concerns only the creation of legal opportunities to expropriate if necessary, land which has been illegally acquired or is not administered from the view-point of the national welfare. This is directed primarily against the Jewish land-speculation companies.” Large department stores were largely Jewish-run. And above we saw Hitler’s resistance to profit-sharing. Further, nationalization of businesses would be limited, as noted, to trusts. It could be that the disproportionately strong representation of Jews in ownership of big German companies played a role here, too. Now, a “secret” interview with Hitler that some scholars suspect is a forgery contains the quote: “Point No. 13 in that programme demands the nationalisation of all public companies, in other words socialisation, or what is known here as socialism,” yet even this limits the promise to publicly traded companies, and Hitler goes on, tellingly, to speak of “owners” and their “possessions,” “property owners,” “the bourgeoisie,” etc. that, while “controlled” by the State, plainly exist independently of it in his socialist vision. Nevertheless, the program has a socialist flair, making Otto Strasser’s comment in 1930 comprehensible, yet its timidity vis-à-vis economics (compare it to the German communist party’s platform of 1932) and its embrace of nationalism and rejection of internationalism would understandably make some ask the question George Sylvester Viereck did in 1923.

This socialist tinge, apart from attacks on Jewish businesses, was forgotten when the Nazis came to power. Historian Karl Bracher wrote that, to Hitler, such things were “little more than an effective, persuasive propaganda weapon for mobilizing and manipulating the masses. Once it had brought him to power, it became pure decoration: ‘unalterable,’ yet unrealized in its demands for nationalization and expropriation, for land reform…” Indeed, while other Western nations were bringing businesses under State control to combat the Depression, the Nazis in the 1930s ran a program of privatization. Many firms and sectors were handed back to the private sphere. The Nazis valued private ownership for its efficiency. The German economy was State-directed in the sense that the government made purchases, contracting with private firms to produce commodities, such as armaments, and regulated business in many ways, as advanced nations often do, including the U.S. Historian Ian Kershaw wrote: “Hitler was never a socialist. But although he upheld private property, individual entrepreneurship, and economic competition, and disapproved of trade unions and workers’ interference in the freedom of owners and managers to run their concerns, the state, not the market, would determine the shape of economic development. Capitalism was, therefore, left in place. But in operation it was turned into an adjunct of the state.” While the regime incentivized business and regulated it, especially in preparation for war, intervening to keep entities aligned with State goals and ideology, “there occurred hardly any nationalizations of private firms during the Third Reich. In addition, there were few enterprises newly created as state-run firms,” summarized Christoph Buchheim and Jonas Scherner in The Journal of Economic History. Companies retained their independence and autonomy: they still “had ample scope to follow their own production plans… The state normally did not use power to secure the unconditional support of industry,” but rather offered attractive contracts. Socialism cannot simply be regulation of and incentives for private companies to meet national goals — again, this is what non-socialist states do every day (and the U.S. war economy had plenty of centrally planned production goals and quotas, contracts, regulations, rationing, and even government takeovers).

The betrayal of the program was noticed at the time. A 1940 report said that:

Economic planks of the “unalterable program” on the basis of which the National Socialists campaigned before they came to power in 1933 were designed to win the support of as many disgruntled voters as possible rather than to present a coordinated plan for a new economic system. Within the party there has always been, and there still is, serious disagreement about the extent to which the “socialist” part of the party’s title is to be applied… The planks calling for expropriation have been least honored in the fulfillment of this platform; in practice, the economic reorganizations undertaken by the Nazis have followed a very different pattern from the one which was originally projected.

That pattern was tighter regulation, generous contracts, economic recovery programs for ordinary people, and so on, though the occasional State takeover did occur. All this makes sense given what we’ve seen. The Nazis weren’t interested in the socialism of the Marxists, the communists. Hitler, in his words, rejected “the false notion that the economic system could exist and operate entirely freely and entirely outside of any control or supervision on the part of the State,” but business ultimately belonged to the capitalists.

The Bamberg Conference of 1926 was a key moment for the direction of the Nazi Party: would it go in an earnestly socialist direction or simply use this new, diluted version Hitler was fond of? There were ideological divisions that had to be addressed. Hitler, as party leader since 1921 and with the conference officially establishing the Führerprinzip (absolute power of the party leader), was likely to win from the beginning. Gregor Strasser led the push at this convening of Nazi leaders for socialist policies, backed by others from Germany’s northern urban, industrial areas. Leaders from the rural south stood opposed; they wanted to instead lean into nationalism, populism, racialism. One such policy was the seizing of the estates of rich nobles, the landed princes — did the National Socialist Program not say land could be expropriated for the common good? “The law must remain the law for aristocrats as well,” Hitler said. “No questioning of private property!” To Hitler, this was communism, that old Jewish plot. He made sure the idea, being pursued at the time by the social democratic and communist parties, died in its cradle. “For us there are today no princes, only Germans,” he said. “We stand on the basis of the law, and will not give a Jewish system of exploitation a legal pretext for the complete plundering of our people.” Again, the rejection of the class war and overthrow of the rich inherent to socialism and instead a simple focus on the Jews — Hitler was “replacing class with race,” as one historian put it, swapping out “the usual terms of socialist ideology.” Hitler was “a reactionary,” Joseph Goebbels realized. After this, Strasser backed off, and the socialist push in the party was quelled.

Similar to State ownership, while the German Workers Party in 1919 spoke of worker cooperatives — worker ownership — the Nazis had no actual interest in this, in fact making cooperative entities targets to be destroyed in Germany and conquered nations because they smacked of Marxism. A dictatorship isn’t going to give ordinary people power.

Outside observers continued to mock Hitler’s socialism — this isn’t simply a tactic of an embarrassed American Left today. As we’ve seen, people of the era noticed the meaning was changed and watched how the Nazis acted when in power. For Leon Trotsky, an actual communist-style socialist writing in 1934, Nazi “socialism” was always in derisive quotation marks. “The Nazis required the programme in order to assume the power; but power serves Hitler not at all for the purpose of fulfilling the programme,” with “the social system untouched,” the “class nature” and competition of capitalism alive and well. Stalin said in 1936, “The foundation of [Soviet] society is public property: state, i.e., national, and also co-operative, collective farm property. Neither Italian fascism nor German National-‘Socialism’ has anything in common with such a society. Primarily, this is because the private ownership of the factories and works, of the land, the banks, transport, etc., has remained intact, and, therefore, capitalism remains in full force in Germany and in Italy.”

When one considers how actual socialists were treated under the Reich, the point is driven home.

Communist and social democratic politicians were purged from the legislature and imprisoned. Dachau, the first concentration camp, first held political enemies such as socialists. In an article in The Guardian from March 21, 1933, the president of the Munich police said, “Communists, ‘Marxists’ and Reichsbanner [social democratic] leaders” would be imprisoned there. The next year reports of the horrid conditions inside emerged, such as that in The New Republic, likewise noting the “Social Democrats, Socialist Workers’ party members,” and others held within. Part of the impetus for the Night of the Long Knives in 1934, in which Hitler had Nazi Party members killed, was too much talk of workers, actual socialism, anti-capitalist ideas. Gregor Strasser was murdered that night. His brother Otto fled for his life.

There is a famous saying that, unlike many, is in fact authentic. Lutheran pastor Martin Niemöller of Germany often said various versions of the following after the war:

First they came for the Communists
And I did not speak out
Because I was not a Communist

Then they came for the Socialists
And I did not speak out
Because I was not a Socialist

Then they came for the trade unionists
And I did not speak out
Because I was not a trade unionist

Then they came for the Jews
And I did not speak out
Because I was not a Jew

Then they came for me
And there was no one left
To speak out for me

One might wonder why the socialists would be coming for the socialists. But if this new socialism simply had to do with race and land, opposing State or worker ownership, it begins to make sense. You have to take care of ideological opponents, whether through a conference or a concentration camp. In response, communists and socialists took part in the valiant resistance to Nazism in Germany and throughout Europe.

The recent articles offering a Yes or No answer to the question “Were Hitler and the Nazis Socialists?” are far too simplistic. Honest history can’t always be captured in a word. Here is an attempt to do so in a paragraph:

Foundationally, socialists wanted either worker ownership of workplaces or government ownership of workplaces, the removal of capitalists. The Nazi Party had actual socialists. But over time they grew frustrated that the party wasn’t pursuing socialism; some left. Other members, including party leader Adolf Hitler, opposed actual socialism, and changed the definition of socialism to simply mean unity of the Aryan race and its collective flourishing. True to this, when he seized power, Hitler did not implement socialism, leaving capitalists in place, and instead crushed those speaking of actual socialism.

For more from the author, subscribe and follow or read his books.

Faith and Intelligence

Atheists and agnostics are sometimes accused of seeing themselves as more intelligent than people of faith. This raises the question: as a former believer, do I consider myself to be smarter now that I am a freethinker? In a sense yes, in that I’ve gained knowledge I did not possess before and have developed critical thinking skills that I likewise used to lack. It feels like learning an instrument, which is in fact a good analogy. People who learn the violin are from one perspective smarter than they were before, with new knowledge and abilities, a brain rewired, and indeed smarter than me, and others, in that respect. But this is a rather informal meaning of intelligence. Virtually anyone can learn the violin, and virtually anyone can find the knowledge and skills I did. Now we’re talking about capacity. We’ve entered the more formal definition of intelligence, under which the answer is obviously no, I’m not smarter than my old self or believers. So the answer is yes and no, as is often the case with variable meanings.

Consider this in detail. There are many definitions of “intelligence” (“smart” can simply be used as a synonym). The formal definition of intelligence generally has to do with the ability or capacity to gain knowledge and skills. You wouldn’t grow in intelligence by gaining knowledge and skills, but rather by somehow expanding the capacity to do so in the first place. (Granted, it could well be that doing the former does impact the latter, a virtuous cycle.) The human and the ape have different capacities, a sizable intelligence gap. Humans have differences too, in terms of genetic predispositions granted by the birth lottery and environmental factors. An ape won’t get far on the violin, and some humans will struggle more, or less, than others to learn it. Human beings have greater or weaker baseline capacities in various areas, different intelligence levels, but most can learn the basics (the idea that enough practice can make anyone advanced or expert has been thoroughly blown up). So under the formal framework, the believer and the skeptic have roughly the same intelligence on average, with the same ability to discover certain knowledge and develop certain skills — whether that ever happens is a separate question entirely, coming down to luck, life experiences, environment, and so on. While studies have often found that religiosity correlates with lower IQ, the difference is very small, with possible causes ranging from autistic persons helping tip the scales for the non-religious to people of faith relying too much on intuition rather than logic or reason when problem-solving, a problem of “behavioral biases rather than impaired general intelligence” — and behavior can be changed, very different than capacity. If this latter hypothesis is true, it would be like giving a violin proficiency test to both violin students and non-students and marveling that the non-students underperformed. 
Had my logic and reasoning been tested before my transition from pious to dubious, I suspect it would have been lower than today, as I learned many critical thinking skills during and after, but this is not about capacity; it’s just learning anyone can do. Under the more serious definition of intelligence, I don’t believe I’m smarter than my former self or the faithful.

But now we can work under the informal, colloquial meaning, where growing intelligence simply has something to do with a growing base of knowledge and new skill sets. Do we not often say “He’s really smart” of someone who knows copious facts about astronomy or history? Don’t we consider a woman highly intelligent who speaks multiple languages, or is a blazingly fast coder? When we suspect that if we devoted the same time and energy to those things, we could probably hold our own? (Rightly or wrongly, as noted. Either way, we often don’t think as much about capacity as simple acquisition.) This writer, at least, sometimes uses these flattering terms to describe possession of much information or foreign abilities.

In that sense, I certainly believe I’m smarter than I used to be. I realize how insulting that sounds, given that the natural extension is that I consider myself smarter than religious persons. But I don’t know how unique that is. When the weak Christian becomes a strong Christian through reading and thinking and conversing, she may consider herself smarter than before — perhaps even more knowledgeable and a more sensible thinker than an atheist! In other words, more intelligent than a nonbeliever (wouldn’t you have to be a fool to think existence, the universe, is possible without a creator being?). When a man learns vast amounts about aerophysics, he sees himself as smarter than before and by extension others on this topic; when he masters the skill of building planes that fly, the same. If intelligence simply means more knowledgeable about or skilled at something, everyone thinks they’re smarter than their past selves and by extension other people, with, obviously, many clashing and contradictory opinions between individuals (the Christian and the atheist both thinking they are more knowledgeable, for instance).

Some examples are in order from my personal growth, just to illuminate my perspective better. I’ll offer two. I used to believe that, among other reasons, the gospels could be trusted as being entirely factual because they were written 30-40 years after the alleged miraculous events they describe (at least, Mark was; the others were later). “Too soon after to be fictional.” But then I learned something. Other religions, which I disbelieved, had much shorter timespans between supposed events and written accounts! Made-up nonsense about what happened on Day X to Person A was being written about and earnestly believed just a year or two later, in some cases just a day or two later — birthing new religions and stories still believed today! That was just the way humans operated; it’s never too soon for fictions, things can be invented and spread immediately, never to be tamped down. So, I’d gained knowledge. I felt more intelligent because of this — even embarrassed at my old ways of thinking. Not right away, but eventually. How could anyone learn this and not change their way of thinking accordingly, realizing that this argument for the gospels’ trustworthiness is simply dreadful and should be retired?

Since the first example was in the knowledge category, the second can concern critical thinking skills, and is neatly paired with the first. I used to suppose that it was sensible to believe in the gospels (and of course God) because they could not be disproved. After all, why not? If you can’t disprove them, they could be true. So why not continue to believe the gospels to be full of truths rather than fictions, as you’ve been raised or long held? Eventually I started thinking more critically, more clearly. This was the argument from ignorance fallacy: if something hasn’t been disproved that’s reason to suppose there’s truth to it. It’s rather irrational — there are a million stories from all human religions that cannot be disproved…therefore it’s reasonable to think they are true? You can’t disprove that the Greek gods formed Mount Olympus, that Allah or Thor exists, that the god Krishna spoke with Arjuna as described in the Bhagavad Gita, or that we’re living in a simulation. The ocean of unprovable things is infinite and of course highly contradictory, with many sets of things that cannot both or all be true. There are too many fictions in this ocean — you may believe in one of them. To only apply the argument from ignorance to your own faith, to believe that the gospels are true because they cannot be disproved but not all these other things for the precise same reason, is simple bias. Mightn’t it be more sensible to believe that which can be proven, rather than what cannot be disproven? That would be, in stark contrast, a solid justification. Now on the other side of the gulf, I can barely understand how I ever thought in such fallacious ways. But better, more logical ways of thinking I simply developed over time, and as with the development of any skill I can’t help but feel more intelligent because of it.

One does regret how derogatory this may seem to many readers. Yet it is impossible to avoid. I consider myself more intelligent than I used to be because I have knowledge I did not possess before and ways of thinking I consider better than prior ones. By extension, it seems I have to consider myself more intelligent, in this area, than those who, like my past self, do not possess that knowledge or those habits of critical thinking. However (and apologies for growing repetitive, it stems from a desire not to offend too much), this is no different than any person who uses the informal meaning of intelligence in any context. If you use that definition, and believe yourself to be more knowledgeable about the contents of the Bible or biology, or more skilled at mathematics or reading people, than before or compared to others, you consider yourself smarter than other people, in those areas but not necessarily in others. If you instead use the formal definition of intelligence, regarding the mere capacity to gain knowledge and develop skills, then you’d say you’re not actually smarter than others (as they could simply do as you have done) or at least not necessarily or only possibly smarter (again, there are differences in capacities between human beings; some will be naturally better at mathematics no matter how hard others practice). In this latter sense, I’m again compelled in my answer: I essentially have to say I’m not smarter than my former self or current believers who think as I once did.

For more from the author, subscribe and follow or read his books.

Review: ‘A History of the American People’

At times I read books from the other side of the political spectrum, and conservative Paul Johnson’s A History of the American People (1998) was the latest.

This was mostly a decent book, and Johnson deserves credit for various inclusions: a look at how British democracy influenced American colonial democracy, the full influence of religion on early American society, Jefferson’s racism, U.S. persecution of socialists and Wobblies during World War I, how the Democratic Party was made up of southern conservatives and northern progressives for a long time, and more.

However, in addition to (and in alignment with) being a top-down, “Great Men,” traditionalist history, the work dodges the darkness of our national story in significant ways. That’s the only way, after all, you can say things like Americans are “sometimes wrong-headed but always generous” (a blatant contradiction — go ask the Japanese in the camps about generosity) or “The creation of the United States of America is the greatest of all human adventures” (what a wonderful adventure black people had in this country). It’s the pitfall of conservative, patriotic histories — if you want the U.S. to be the greatest country ever, our horrors must necessarily be downplayed.

Thus, black Americans don’t get much coverage until the Civil War, whereas Native Americans aren’t really worth discussing before or after the Trail of Tears era. Shockingly, in this history the internment of the Japanese never occurred. It’s simply not mentioned! Johnson offers a rosy view of what the U.S. did in Vietnam, believing that we should have inflicted more vigorous violence on both Vietnam and Cuba. Poverty doesn’t get much attention. The Founding Fathers’ expressions of protecting their own wealth, class interests, and aristocratic power when designing our democracy naturally go unmentioned. Likewise, American attacks on other countries are always from a place of benevolence and good intentions, rather than, as they often were in actuality, for economic or business interests, to maintain global power, or to seize land and resources. To Johnson, the U.S. had “one” imperialist adventure, its war with Spain — this incredible statement was made not long after his outline of the U.S. invasion of Mexico to expand its borders to the Pacific.

Other events and people given short shrift include LGBTQ Americans, non-European immigrants, and the abolitionist movement — until the end of the book when the modern pro-life movement is compared to it in approving fashion. The labor and feminist movements aren’t worth mentioning for their crucial successes, or intersectional solidarity in some places, only for their racism in others. Johnson is rather sympathetic to Richard Nixon, and somehow describes his downfall with no mention of Nixon’s attempts, recorded on White House tapes, to obstruct the Watergate investigation — the discovery of which led to his resignation. If anything, the book is a valuable study on how bias, in serious history and journalism, usually manifests itself in the sin of omission, conscious or no, rather than outright falsities, conscious or no (not that conservatives are the only ones who do this, of course; the Left, which can take the opposite approach and downplay positive happenings in American history, shouldn’t shy away from acknowledging, for instance, that the U.S. Constitution was a strong step forward for representative democracy, secular government, and personal rights, despite the obvious exclusivity, compared to Europe’s systems).

Things really start to go off the rails with this book in the 1960s and later, when America loses its way and becomes not-great (something slavery and women as second-class citizens could somehow never cause), with much whining about welfare, academia, political correctness, and the media (he truly should have read Manufacturing Consent before propagating the myth that the liberal media turned everyone against the war in Vietnam). Affirmative action receives special attention and passion, far more than slavery or Jim Crow, and Johnson proves particularly thick-skulled on other matters of race (Malcolm X is a “black racist,” slang and rap are super dangerous, no socio-economic and historical causes are mentioned that could illuminate highlighted racial discrepancies, and so on). He cringingly blames the 1960-1990 crime wave on a less religious society; one wonders what Johnson would make of the dramatic decrease in crime from the 1990s to today, occurring as the percentage of religious Americans continues to plunge — a good lesson on false causation.

All this may not sound at all like a “mostly decent” book, but I did enjoy reading most of it, and — despite the serious flaws outlined here, some unforgivable — most of the information in the space of 1,000 pages was accurate and interesting. It served as a good refresher on many of the major people and events in U.S. history, a look at the perspective of the other side, a prompt for thinking about bias (omission vs. inaccuracy, subconscious vs. conscious), and a reminder of who and what are left out of history — and why.

For more from the author, subscribe and follow or read his books.

Woke Cancel Culture Through the Lens of Reason

What follows are a few thoughts on how to view wokeism and cancel culture with nuance:

Two Basic Principles (or, Too Much of a Good Thing)

There are two principles that first spring to mind when considering cancel culture. First, reason and ethics, to this writer, suggest that social consequences are a good thing. There are certain words and actions that one in a free society would certainly not wish to result in fines, community service, imprisonment, or execution by government, but are deserving of proportional and reasonable punishments by private actors, ordinary people. It is right that someone who uses a racial slur loses their job or show or social media account. A decent person and decent society wants there to be social consequences for immoral actions, because it discourages such actions and helps build a better world. One can believe in this while also supporting free speech rights and the First Amendment, which obviously have to do with how the government responds to what you say and do, not private persons and entities.

The second principle acknowledges that there will be many cases where social consequences are not proportional or reasonable, where things go too far and people, Right and Left, are crushed for rather minor offenses. It’s difficult to think of many social trends or ideological movements that did not go overboard in some fashion, after all. There are simply some circumstances where there was an overreaction to words and deeds, where mercy should have been the course rather than retribution. (Especially worthy of consideration: was the perpetrator young at the time of the crime, with an underdeveloped brain? Was the offense in the past, giving someone time to change and grow, to regret it?) Readers will disagree over which specific cases fall into this category, but surely most will agree with the general principle, simply that overreaction in fact occurs. I can’t be the only Leftist who both nods approvingly in some cases and in others thinks, “She didn’t deserve that” or “My, what a disproportionate response.” Stupid acts might deserve a different response than racist ones, dumb ideas a different tack than dangerous ones, and so on. It might be added that overreactions not only punish others improperly, but also encourage forced, insincere apologies — somewhat reminiscent of the adage that you shouldn’t make faith a requirement of holding office, as you’ll only end up with performative religiosity.

Acknowledging and pondering both these principles is important.

“Free Speech” Only Concerns Government-Citizen Interaction

Again, in most cases, the phrase “free speech” is basically irrelevant to the cancel culture conversation. It’s worth emphasizing. Businesses and individuals — social media companies, workplaces, show venues, a virtual friend who blocks you or deletes your comment — have every right to de-platform, cancel, censor, and fire. The whining about someone’s “free speech” being violated when they’re cancelled is sophomoric and ignorant — the First Amendment and free speech rights are about whether the government will punish you, not non-government actors.

Which makes sense, for an employer or individual could just as easily be said to have the “free speech right” to fire or cancel you — why is your “free speech right” mightier than theirs?

Public universities and government workplaces, a bit different, are discussed below.

Why is the Left at Each Other’s Throats?

At times the national conversation is about the left-wing mob coming for conservatives, but we know it comes for its own with just as much enthusiasm. Maybe more, out of some special drive to purge bad ideas and practices from our own house. Few involved in left-wing advocacy of some kind haven’t found themselves in the circular firing squad, whether firing or getting blasted — most of us have probably experienced both. It’s a race to be the most woke, and can lead to a lot of nastiness.

What produces this? Largely pure motives, for if there’s a path that’s more tolerant, more just, that will build a better future, we want others to see and take it. It’s a deep desire to do what’s right and get others to do the same. (That the pursuit of certain kinds of tolerance [racial, gender, etc.] would lead to ideological intolerance has been called ironic or hypocritical, but seems, while it can go too far at times, more natural and inevitable — there’s no ending separate drinking fountains without crushing the segregationist’s ideology.)

But perhaps the inner turmoil also comes from troublesome ideas of group monolithic thinking, plus a desperate desire for there to be one right answer when there isn’t one. Because we sometimes look at impacted groups as composed of members all thinking the same way, or enough thinking the same way, we conclude there is one right answer and that anyone who questions it should be trampled on. For example, you could use “person with autism” (person-first language) rather than “autistic person” (identity-first language) and fall under attack for not being woke enough. Identity-first language is more popular among the impacted group members, and the common practice with language among non-impacted persons is to defer to majority opinions. But majority opinions aren’t strictly “right” — to say this is of course to say the minority of the impacted group members are simply wrong. Who would have the arrogance and audacity to say this? It’s simply different opinions, diversity of thought. (Language and semantics are minefields on the Left, but also varying policy ideas.) There’s nothing wrong with deferring to majority opinion, but if we were not so focused on there being one right answer, if we didn’t view groups as single-minded or single-minded enough, we would be much more tolerant of people’s “mistakes” and less likely to stoop to nastiness. We’d respect and explore and perhaps even celebrate different views within our side of the political spectrum. It’s worth adding that we go just as crazy when the majority impacted group opinion is against an idea. It may be more woke, for example, to support police abolition or smaller police presences in black neighborhoods, but 81% of black Americans don’t want the police going anywhere, so the majority argument won’t always help a case.
Instead of condemning someone who isn’t on board with such policies as not caring enough about racial justice, not being woke enough, being dead wrong, we should again remember there is great diversity of thought out there and many ideas, many possible right answers beyond our own, to consider and discuss with civility. One suspects that few individuals, if intellectually honest, would always support the most radical or woke policy posited (more likely, you’ll disagree with something), so more tolerance and humility are appropriate.

The same should be shown toward many in the middle and on the Right as well. Some deserve a thrashing. Others don’t.

The University Onus

One hardly envies the position college administrators find themselves in, pulled between the idea that a true place of learning should include diverse and dissenting opinions, the desire to punish and prevent hate speech or awful behaviors, the interest in responding to student demands, and the knowledge that the loudest, best organized demands are at times themselves minority opinions, not representative.

Private universities are like private businesses, in that there’s no real argument against them cancelling as they please.

But public universities, owned by the states, have a special responsibility to protect a wide range of opinion, from faculty, students, guest speakers, and more, as I’ve written elsewhere. As much as this writer loves seeing the power of student organizing and protest, and the capitulation to that power by decision-makers at the top, public colleges should take a harder line in many cases to defend views or actions that are deemed offensive, in order to keep these spaces open to ideological diversity and not drive away students who could very much benefit from being in an environment with people of different classes, ethnicities, genders, sexual orientations, religions, and politics. Similar to the above, that is a sensible general principle. There will of course be circumstances where words and deeds should be crushed, cancellation swift and terrible. Where that line is, again, is a matter of disagreement. But the principle is simply that public colleges should save firings, censorship, cancellation, suspension, and expulsion for more extreme cases than is current practice. The same for other public entities and public workplaces. Such spaces are linked to the government, which actually does bring the First Amendment and other free speech rights into the conversation, and therefore there exists a special onus to allow broader ranges of views.

Cancel Culture Isn’t New — It’s Just the Left’s Turn

If you look at the surveys that have been conducted, two things become clear: 1) support for cancel culture is higher on the Left, but 2) it’s also a problem on the Right.

50% of staunch progressives “would support firing a business executive who personally donated to Donald Trump’s campaign,” vs. 36% of staunch conservatives who “would support firing Biden donors.” Republicans are much more worried about their beliefs costing them their jobs (though a quarter of Democrats worry, too), conservatives are drastically more afraid to share opinions (nearly 80%, vs. just over 40% for strong liberals), and only in the “strong liberal” camp does a majority (58%) feel free to speak its mind without offending others (liberals 48%, conservatives 23%). While almost 100% of the most conservative Americans see political correctness as a problem, 30% of the most progressive Americans agree, not an insignificant figure (overall, 80% of citizens agree). There’s some common ground here.

While the Left is clearly leading modern cancel culture, it’s important to note that conservatives often play by the same rules, despite rhetoric about how they are the true defenders of “free speech.” If Kaepernick kneels for the anthem, he should be fired. If a company (Nike, Gillette, Target, NASCAR, Keurig, MLB, Delta, etc.) gets political on the wrong side of the spectrum, boycott it and destroy your possessions, while Republican officials legislate punishment. If Republican Liz Cheney denounces Trump’s lies, remove her from her leadership post. Rage over and demand cancellation of Ellen, Beyoncé, Jane Fonda, Samantha Bee, Kathy Griffin, Michelle Wolf, and Bill Maher for using their free speech. Obviously, no one called for more firings for views he didn’t like than Trump. If the Dixie Chicks criticize the invasion of Iraq, wipe them from the airwaves, destroy their CDs. Thomas Hitchner recently put together an important piece on conservative censorship and cancellation during the post-9/11 orgy of patriotism, for those interested. And don’t forget what happened to Sinéad O’Connor after she tore up a photograph of the Pope (over the Catholic Church sexual abuse scandal) on SNL in 1992: her records were crushed under a steamroller in Times Square and her career was cancelled.

More importantly, when we place this phenomenon in the context of history, we come to suspect that rather than being something special to the Left (or naturally more powerful on the Left, because liberals hate free speech and so on), cancel culture is, predictably, led by the strongest cultural and political ideology of the moment. When the U.S. was more conservative, it was the Right leading the charge to ensure people with dissenting views were fired, censored, and so on. The hammer, rather than being wielded by the far Left, came down on it.

You could look to the socialists and radicals, like Eugene Debs, who were literally imprisoned for speaking out against World War I, but consider more recently the McCarthy era after World War II, when government workers, literary figures, media anchors, and Hollywood writers, actors, and filmmakers accused of socialist or communist sympathies were hunted down and fired, blacklisted, slandered, imprisoned for refusing to answer questions at the witch trials, and so forth, as discussed in A History of the American People by the conservative Paul Johnson. The Red Scare was in many ways far worse than modern cancel culture — it wasn’t simply the mob that came for you, it was the mob and the government. However, lest anyone think this was just Republican Big Government run amok rather than a cultural craze working in concert with it, recall that it was the movie studios doing the actual firing and blacklisting, the universities letting faculty go, LOOK and other magazines reprinting Army “How to Spot a Communist” propaganda, and ordinary people pushing and marching and rallying against communism.

All this overlapped, as leftwing economic philosophies usually do, with the fight for racial justice. Kali Holloway writes for The Nation:

There was also [black socialist] Paul Robeson, who had his passport revoked by the US State Department for his political beliefs and was forced to spend more than a decade living abroad. Racism and red-scare hysteria also canceled the acting career of Canada Lee, who was blacklisted from movies and died broke in 1952 at the age of 45. The [anti-segregationist] song “Mississippi Goddam” got Nina Simone banned from the radio and much of the American South, and the Federal Bureau of Narcotics essentially hounded Billie Holiday to death for the sin of stubbornly refusing to stop performing the anti-lynching song “Strange Fruit.”

Relatedly, there was the Lavender Scare, a purge of gays and suspected gays from government and private workplaces. Between 5,000 and 10,000 people lost their jobs:

“It’s important to remember that the Cold War was perceived as a kind of moral crusade,” says [historian David K.] Johnson, whose 2004 book The Lavender Scare popularized the phrase and is widely regarded as the first major historical examination of the policy and its impact. The political and moral fears about alleged subversives became intertwined with a backlash against homosexuality, as gay and lesbian culture had grown in visibility in the post-war years. The Lavender Scare tied these notions together, conflating gay people with communists and alleging they could not be trusted with government secrets and labelling them as security risks, even though there was no evidence to prove this.

The 1950s were a difficult era for the Left and its civil rights advocates, class warriors, and gay liberators, with persecution and censorship the norm. More conservative times, a stronger conservative cancel culture. This did not end with that decade, of course (one of my own heroes, Howard Zinn, was fired from Spelman College in 1963 for his civil rights activism), but soon a long transition began. Paul Johnson mused:

The significant fact about McCarthyism, seen in retrospect, was that it was the last occasion, in the 20th century, when the hysterical pressure on the American people to conform came from the right of the political spectrum, and when the witchhunt was organized by conservative elements. Thereafter the hunters became the hunted.

While, as we saw, the Right still often plays the hunter as well (hence much of today’s hypocrisy), there is some truth to this statement, as from the 1960s and ’70s the nation began slowly liberalizing. Individuals increasingly embraced liberalism, as did some institutions, like academia, the media, and Hollywood (others, such as the church, the military, and law enforcement, remain quite conservative). The U.S. is still growing more liberal, increasingly favoring New Deal-style policies, for example, even though more Americans still identify as conservative:

Since 1992, the percentage of Americans identifying as liberal has risen from 17% then to 26% today. This has been mostly offset by a shrinking percentage of moderates, from 43% to 35%. Meanwhile, from 1993 to 2016 the percentage conservative was consistently between 36% and 40%, before dipping to 35% in 2017 and holding at that level in 2018.

On top of this, the invention and growth of social media since the mid-2000s has dramatically changed the way public anger coalesces and is heard — and greatly increased its power.

So the Left has grown in strength at the same time as technology that can amplify and expand cancel culture, a convergence that is both fortunate and unfortunate — respectively, for those who deserve harsh social consequences and for those who do not.

For more from the author, subscribe and follow or read his books.

The Great Debate Over Robert Owen’s Five Fundamental Facts

In the early 1830s, British social reformer Robert Owen, called the “Founder of Socialism”[1] by contemporaries, brought forth his “Five Fundamental Facts” on human nature and ignited in London and elsewhere a dramatic debate — in the literal sense of fiery public discussions, as well as in books, pamphlets, and other works. While the five facts are cited in the extant literature on Owen and his utopian movement, a full exploration of the controversy is lacking, which is unfortunate for a moment that left such an impression on witnesses and participants. Famous secularist and editor George Jacob Holyoake, at the end of his life in 1906, wrote, “Human nature in England was never so tried as it was during the first five years” after Owen’s writings, when these five facts “were discussed in every town in the kingdom. When a future generation has courage to look into this unprecedented code as one of the curiosities of propagandism, it will find many sensible and wholesome propositions, which nobody now disputes, and sentiments of toleration and practical objects of wise import.”[2]

The discourse continued into the 1840s, but its intensity lessened, and thus we will focus our attention on its decade of origin. This work will add to scholarship on a little-explored subject, and argue that the great debate transcended common ideological divisions, not simply pitting socialist against anti-socialist and freethinker against believer, but freethinker against freethinker and socialist against socialist as well. The debate was nuanced and complex, and makes for a fascinating study of intellectual history in Victorian Britain: an overlooked piece of the Western discourse on free will going back to the ancient Greek philosophers, and of the nature-nurture question stirred up by John Locke and René Descartes in the 17th century.

The limited historiography of the “Five Fundamental Facts” recognizes their significance. J.F.C. Harrison of the University of Sussex wrote that Owen, in his “confidence in the discoverability of laws governing human action,” laws he thought as immutable as physical ones, in fact “provided the beginnings of behavioural science.”[3] Indeed, “in an unsophisticated form, and without the conceptual tools of later social psychology, Owen had hit upon the crucial role of character structure in the social process.”[4] Further, Nanette Whitbread wrote that the school Owen founded to put his five facts into action and change human nature, the New Lanark Infant School, could “be justly described as the first in the developmental tradition of primary education.”[5] However, the facts are normally mentioned only in passing — works on Owen and his movement that make no mention of them at all are not unusual — and for anything close to an exploration of the debate surrounding them one must turn to brief outlines in works like Robert Owen: A Biography by Frank Podmore, who was not an historian at all but rather a parapsychologist and a founder of the Fabian Society.[6]

Robert Owen, to quote The Morning Post in 1836, was “alternately venerated as an apostle, ridiculed as a quack, looked up to and followed as the founder of a new philosophy, contemned as a visionary enthusiast, denounced as a revolutionary adventurer.”[7] He was born in Wales in 1771, and as a young man came to manage a large textile mill in Manchester and then buy one in New Lanark, Scotland. Influenced by the conditions of the working poor and the ideas of the Enlightenment, and as a prosperous man, he engaged in writing, advocacy, and philanthropy for better working conditions and early childhood education in Britain after the turn of the century. Adopting a philosophy of cooperative, communal economics, Owen purchased an American town, New Harmony in Indiana, in 1825 and ran a utopian experiment, inspiring many more across the U.S. and elsewhere, that was ultimately unsuccessful. He returned home in 1828, living in London and continuing to write and lecture for broad social change.

Soon Owen brought forth his Outline of the Rational System of Society, in circulation as early as 1832 — and by 1836 “too well known to make it requisite now to repeat,” as a Mr. Alger put it in the Owenite weekly New Moral World.[8] The Home Colonisation Society in London, an organization promoting the formation of utopian communities with “good, practical education” and “permanent beneficial employment” for all, without the “present competitive arrangements of society,” was just one of the work’s many publishers.[9] Owen, not one for modesty, declared it developed “the First Principles of the Science of Human Nature” and constituted “the only effectual Remedy for the Evils experienced by the Population of the world,” addressing human society’s “moral and physical Evils, by removing the Causes which produce them.”[10]

The text from the Home Colonisation Society began with Owen’s “Five Fundamental Facts,” the key to his rational system and therefore the prime target of later criticism.[11] They assert:

1st. That man is a compound being, whose character is formed of his constitution or organization at birth, and of the effects of external circumstances upon it from birth to death; such original organization and external influences continually acting and re-acting each upon the other.

2d. That man is compelled by his original constitution to receive his feelings and his convictions independently of his will.

3d. That his feelings, or his convictions, or both of them united, create the motive to action called the will, which stimulates him to act, and decides his actions.  

4th. That the organization of no two human beings is ever precisely similar at birth; nor can art subsequently form any two individuals, from infancy to maturity, to be precisely similar.

5th. That, nevertheless, the constitution of every infant, except in the case of organic disease, is capable of being formed into a very inferior, or a very superior, being, according to the qualities of the external circumstances allowed to influence that constitution from birth.[12]

As crucial as Owen’s five facts were to the subsequent arguments, he offered no defense of them in the short Society pamphlet, stating them, perhaps expectedly, as fact and immediately proceeding to build upon them, offering twenty points comprising “The Fundamental Laws of Human Nature.” Here again he explained that the character of an individual was malleable according to the environment and society in which he or she developed and existed — and how by building a superior society humanity could allow its members to flourish and maximize well-being. This was the materialism of the early socialists. That section was followed by “The Conditions Requisite for Human Happiness,” “The Principles and Practice of the Rational Religion,” “The Elements of the Science of Society,” and finally a constitution for a new civilization.

This paper will not explore Owen’s specific utopian designs in detail, but at a glance the rational society offered a government focused on human happiness, with free speech, equality for persons of all religions, education for all, gender equality, communal property, a mix of direct and representative democracy, the replacement of the family unit with the larger community structure, an end to punishments, and more. Overall, the needs of all would be provided for collectively, and work would be done collectively — the termination of “ignorance, poverty, individual competition…and national wars” was in reach.[13] Happier people were thought better people — by creating a socialist society, addressing human needs and happiness, “remodelling the character of man” was possible.[14] The five facts aimed to demonstrate this. While this pamphlet and others were brief, in The Book of the New Moral World, Owen devoted a chapter to justifying and explaining each of the five facts, and wrote of them in other publications as well. In that work he clarified, for instance, that it was an “erroneous supposition that the will is free,” an implication of the second and third facts.[15]

The reaction? As Holyoake wrote, in a front-page piece in The Oracle of Reason, “Political economists have run wild, immaculate bishops raved, and parsons have been convulsed at [Owen’s] communities and five facts.”[16] The facts, to many of the pious, smacked of the determinism rejected by their Christian sects. An anonymous letter on the front page of a later edition of the same publication laid out a view held by both Christians and freethinkers: “‘Man’s character is formed for him and not by him’ — therefore, all the religions of the world are false, is the sum and substance of the moral philosophy of R. Owen.”[17] With biological inheritances and environmental influences birthing one’s “feelings and convictions,” one’s “character,” free will was put into question. What moral culpability did human beings then have for their actions, and how could an individual truly be said to make a “choice” to believe or follow religious doctrine? Any religion that rested on free will would be contradictory to reality, and thus untrue. But, the anonymous writer noted, Calvinists and other determinists were safer — they believed in “supernatural” causes that formed one’s character, thus it would be disingenuous to say “all the religions of the world” were fiction, solely on the grounds that individuals did not have mastery over who they were.

The writer then offered further nuance and assistance to ideological opponents (he or she was clearly a freethinker, not only given the journal read and written to but also revealed by lines such as: “But what care religionists for justice in this world or the next? If they cared anything about ‘justice,’ and knew what the word meant, they would have long ere this abandoned the doctrine of an eternal hell”).[18] It was pointed out that “original sin” was found in non-deterministic and deterministic Christian sects alike — a formation of character before birth. “How then can the ‘five facts’ refute all religions…?”[19] If human beings were, from the universal or at least near-universal Christian point of view, shaped by supernatural forces beyond their control after Adam and Eve’s storied betrayal, it was a non sequitur, in the anonymous author’s mind, to say the molding of character invalidated common religions. Here we see an introduction to the complex ways the British of the Victorian era approached the debate.

Yet others were not always so gracious. In 1836, The Monthly Review wrote that “No one doubts the sincerity of Mr. Owen” and his desire to “create a world of happiness,” but “no man who takes for his guides common observation, and common sense — much more, that no person who has studied and who confides in the doctrines of the Bible, can ever become a convert to his views.”[20] The five facts were “intangible” and “obscure,” the arguments “bold, unauthorised, unsupported, ridiculous,” the vision for society as a whole “fanciful, impractical, and irreligious.”[21] How was it, the periodical asked, that these views could be “demonstrably true” yet had “never found acceptance with the mass of sober intelligent thinkers,” only the “paltry, insignificant, uninfluential, and ridiculed class of people” that were the Owenites, and Owen himself, who was “incompetent”?[22] The writer (or writers) further resented how Owen centered himself as something of a savior figure. Ridding the world of evil could be “accomplished by one whose soul like a mirror was to receive and reflect the whole truth and light which concerned the happiness of the world — and I, Robert Owen, am that mirror” — and did not the New Testament already serve the purpose of outlining the path to a more moral and happier world?[23] Overall, it was a scathing attack, an example of the hardline Christian view.

The January 1838 volume of The Christian Teacher, published to “uphold the religion of the New Testament, in contradistinction to the religion of creeds and parties,” included a writing by H. Clarke of Chorley.[24] To him the facts were “inconsistent and fallacious”: facts one, two, and four contradicted the fifth.[25] The first, second, and fourth facts established that a “man’s self” at birth “has at least something to do with forming his character,” but then the fifth established that “by the influence of external circumstances alone, any being” could be transformed into a “superior being.”[26] To Clarke, the facts at first emphasized that one’s biological constitution played a sizable, seemingly co-equal, role in forming one’s character — then the fifth fact threw all that out the window. If anyone could be made into a superior being, just via environment, what sense did it make to say that biology had any effect whatsoever on an individual’s nature?

Owen did seem to view circumstances as the predominant power. Though he firmly believed there existed, as he wrote, a “decided and palpable difference between [infants] at birth” due to biology, he indeed believed in bold, universal results: “selfishness…will cease to exist” alongside “all motives to individual pride and vanity,” and as “all shall be trained, from infancy, to be rational,” a humanity of “superior beings physically, intellectually, and morally” could arise.[27] Clarke was not alone in this critique. J.R. Beard wrote something similar in The Religion of Jesus Christ Defended from the Assaults of Owenism, which further held the common blank slate view of human nature (“at birth there is no mental or moral development”), meaning environment was all that was left: “What is this but to make ‘external circumstances’ the sole creator of the lot of man?”[28]

Clarke further took issue with what he viewed as the contradictory or hypocritical language of the Owenites. “So I learn from the votaries of Owenism…man’s feelings and convictions are forced upon him irrespective of his will, it is [therefore] the extreme of folly to ask a man to believe this or that.”[29] The Christian believed in belief, but “Owenism denies that man can believe as he pleases…yet strange to tell, almost the first question asked by an Owenite is, ‘Do you believe Mr. Owen’s five fundamental facts?’”[30] Belief in the five facts, Clarke pointed out, was required to be a member of Owen’s association, which an “Appendix to the Laws and Regulations” of the association printed in The New Moral World in 1836 made clear.[31] If one’s convictions were formed against one’s will, what sense did it make to ask after or require beliefs? Clarke’s own beliefs, one should note, while against Owen’s views of human nature, were not necessarily hostile to socialism. He preferred “Christ to Mr. Owen, Christian Socialism to the five-fact-socialism.”[32]

There were some who saw a distinction between the value of Owen’s theories on human nature and that of his planned civilization. In 1836, The Morning Post found Owen, in his Book of the New Moral World, to be “radical” and “destructive,” wanting to dissolve civilization and remake it; the idea that humanity had for millennia been living in systems contrary to its own nature and happiness was “almost incredible.”[33] But the Post came from a more philosophical position and background than theological (“the Millenium [is] about as probable a consummation as the ‘Rational System’”).[34] Owen had therefore “displayed considerable acuteness and ability” regarding “metaphysical discussions,” making the book worth a read for ontologists and those who enjoyed a “‘keen encounter of the wit.’”[35]

As we saw with the anonymous writer in The Oracle of Reason, the five facts divided not only freethinkers and Christians, but also freethinkers as a group. There was too much intellectual diversity for consensus. For example, Charles Southwell, who was “rapidly becoming one of the most popular freethought lecturers in London,” debated Owen’s facts with well-known atheist Richard Carlile in Lambeth, a borough of south London.[36] The room “was crowded to suffocation, and hundreds retired unable to attain admittance. The discussion lasted two nights, and was conducted with talent and good feeling by both parties.”[37] Southwell defended the facts, while Carlile went on the offensive against them. 

The agnostic Lloyd Jones, journalist and friend of Owen, had much to say of Richard Carlile’s lectures on this topic.[38] In A Reply to Mr. R. Carlile’s Objections to the Five Fundamental Facts as Laid Down by Mr. Owen, Jones remarked that Carlile had called Owen’s Book of the New Moral World a “book of blunders” during his talk on November 27, 1837, but the audience “certainly could not avoid observing the multitudinous blunders committed by yourself, in endeavouring to prove it such.”[39] Carlile, according to Jones, insisted that individuals had much more power to steel themselves against circumstances and environments than Owen was letting on, throwing facts one and two into doubt. This is all rather one-sided, as Jones did not even bother to quote Carlile directly, but instead wrote, “You tell us we have a power to adopt or reject [convictions and feelings]: you have not given us your reasons for so saying; in fact, you did not condescend to reason upon any of the subjects broached during the evening’s discussion.”[40] Carlile should “try the question… Can you, by a voluntary action of your mind, believe that to be true which you now consider to be false; — or believe that to be false which you now consider true?… Certainly not.”[41] Jones also defended the idea that conviction and will were distinct, rather than one and the same as Carlile insisted.[42]

For the socialists, many of them of course Owenites anyway, there was much acceptance of the five facts. James Pate, for the Socialists of Padiham, wrote that an Owenite named Mr. Fleming came to their organization and, to a full house of about 300 people, “proved, in a plain yet forcible manner, the truth of the five fundamental facts; and…showed how little difficulty there would be in the practical application of Mr. Owen’s views to all classes of society.”[43] The audience was “so fully convinced” that few “dared venture to question any remarks.”[44] But here divergent thoughts existed too, as we saw with H. Clarke. The branches of religious socialism and secular socialism made for varying thoughts on human nature among the radicals, or simply those sympathetic to or not offended by the idea of socialism. Frederick Lees, for instance, secretary of the British Association for the Suppression of Intemperance, castigated the “infidelity” of Owenism and his five facts but had little to say of socialism, save that it was a front for the former: “In the fair name of Socialism, and in the mask of friendship, Judas like, she [untruth, especially as related to infidelity] seeks to ensnare and betray.”[45] Owen’s followers, while they professed to desire the “establishment of a ‘SOCIAL COMMUNITY,’ their chief and greatest object is the ascendancy of an ‘INFIDEL CREED.’”[46] Lees, striking a sympathetic note once more, added that Owenites should “dissolve the forced and arbitrary union between their absurd and infidel metaphysics, and the practical or working part of Socialism, which association of the two excites the rightful opposition of all lovers of christian truth…”[47]

For a forceful defense of religious socialism, take T.H. Hudson’s lengthy work Christian Socialism, Explained and Enforced, and Compared with Infidel Fellowship: Especially, as Propounded by Robert Owen, Esq., and His Disciples. It was up to “the Christian Religion to secure true socialism,” whereas Owen’s views were “more likely to serve the purposes of the Prince of darkness.”[48] Hudson spent one chapter, about forty pages, attacking the five facts, followed by three chapters, over 120 pages, advocating for Christian Socialism. The five facts were “based on the false assumptions, that man is good by nature” and were “decidedly irreligious.”[49] Hudson lambasted the “disguised atheism” of the first fact: it did not mention God as man’s creator, nor his spirit or soul, and left him helpless before nature, without free will.[50] The “infidel Socialist,” in believing facts two and three, deepened trust in fatalism and the irresponsibility of individuals, but also fell for a “gross contradiction.”[51] Hudson pointed out that the second fact established feelings and convictions were received independently of one’s will, yet the third fact stated the will was made up of, created by, one’s feelings and convictions.[52] Initially presented as distinct phenomena, subsequently as a unified phenomenon. J.R. Beard echoed this: it would have been better to say feelings and convictions were received “anteriorly ‘to his will’; for it is obviously his notion that man’s will is not independent, but the result, the creation of his feelings and convictions.”[53]

Like the atheist Carlile, Hudson thought one could put up “resistance” to external influences, could decide whether to “receive” or reject feelings and convictions — an exercise in willpower, which was thus independent of and prior to feelings and convictions; a person was not a “slave to circumstances.”[54] This was a refrain of Owen’s critics, at times with the added element that personal change would be impossible under Owen’s theory (indeed, that it would be impossible for people to change their circumstances at all). For instance, Minister John Eustace Giles, in Socialism, as a Religious Theory, Irrational and Absurd (1839), based on his lectures in Leeds, wondered how Owen could believe that “‘man is the creature of circumstances’” yet “professes to have become wise” — did that not show Owen had “resisted” circumstances?[55] Did not this, plus Owen’s desire to “change the condition of the world…thus shew that while man is the creature of circumstances, circumstances are the creatures of man”?[56] After focusing on semantics and perceived ambiguities in the fourth fact (though not closed to the possibility that it was a simple truism), Hudson accepted the fifth fact’s claim that individuals could be improved, but was insulted that Christianity, which addressed humanity’s “depraved nature” and its state of “being alienated from God,” was not thought necessary to this improvement alongside changing environments.[57] Indeed, most egregious was the Owenite belief that people were fundamentally good.[58]

Whether due to varying personal beliefs or simply varying degrees of caution about driving away potential converts in a pious age, the actual presentation of the fundamental facts as irreligious was not consistent. Lloyd Jones, in an 1839 debate with Mr. Troup, editor of The Montrose Review, over whether socialism was atheistic, asked some variant of “Where is the Atheism here?” after reading each of the five facts.[59] Owen himself, also an unbeliever, in an 1837 debate with Rev. J.H. Roebuck of Manchester, called religions “geographical insanities” that could be wiped away by the five facts.[60] “Mr. Roebuck stated…that the two systems for which we contend are opposed to each other, and that both, therefore, cannot be true. Herein we perfectly agree.”[61] The national discourse so intertwined the facts and the question of God that a person on either side of the debate could not help but assume that one would accompany the other. When a debate on “the mystery of God” was proposed to Owenite J. Smith in January 1837, “the challenge was [mis]understood by myself and all our friends, to be the discussion of the five fundamental facts.”[62]

Overall, perhaps Robert Owen’s facts flustered the religious and irreligious, and socialists and anti-socialists alike, because they were simply so counterintuitive — not to mention theoretical, without contemporary science to back them up. Owen wrote, in The Book of the New Moral World, for instance: “Man is not, therefore, to be made a being of a superior order by teaching him that he is responsible for his will and his actions.”[63] Such blunt statements turned on its head what many, across ideologies, judged common sense. Owen’s ideas were “contrary to common sense” for Hudson, Christian socialist, in the same way they were “opposed to the common sense of mankind” for Giles, anti-socialist.[64] Would not teaching individual moral responsibility enable personal change and create a better society? Not so for Owen. The will was formed by circumstances — thus true personal change came about by purposefully changing environments. Create a better society first, and the positive personal change would follow. These were, according to Owen, “the laws of nature respecting man, individually, and the science of society,” and few posited laws of nature, proven or otherwise, do not provoke intense philosophical debate.[65]

[1] J. Eustace Giles, Socialism, as a Religious Theory, Irrational and Absurd: the First of Three Lectures on Socialism (as Propounded by Robert Owen and Others) Delivered in the Baptist Chapel South-Parade, Leeds, September 23, 1838 (London: Simpkin, Marshall, & Co., Ward & Co., G. Wightman, 1838), 4.

[2] George Jacob Holyoake, The History of Co-operation (New York: E.P. Dutton & Company, 1906), 1:147.

[3] J.F.C. Harrison, Robert Owen and the Owenites in Britain and America (Abingdon: Routledge, 2009), 66.

[4] Ibid.

[5] Nanette Whitbread, The Evolution of the Nursery-infant School: A History of Infant and Nursery Education in Britain, 1800-1970 (Abingdon: Routledge, 2007), 39:9-10.

[6] Frank Podmore, Robert Owen: A Biography (London: Hutchinson & CO, 1906), 481-482, 499-502.

[7] The Morning Post, September 14, 1836, cited in “The Book of the New Moral World,” The New Moral World (Manchester: Abel Heywood, 1836-7), 3:6.

[8] The Westminster Review (London: Robert Heward, 1832), 26:317; The New Moral World (London: Thomas Stagg, 1836), 2:62.

[9] Robert Owen, Outline of the Rational System of Society (London: Home Colonization Society, 1841), 2.

[10] Ibid, 1.

[11] This was explicitly stated by critics. Dismantle the five facts and the rest of the system goes down with it. See T.H. Hudson, Christian Socialism, Explained and Enforced, and Compared with Infidel Fellowship, Especially, As Propounded by Robert Owen, Esq., and His Disciples (London: Hamilton, Adams, and Co., 1839), 52.

[12] Owen, Outline, 3.

[13] Ibid, 14.

[14] Ibid.

[15] Robert Owen, The Book of the New Moral World (London: Richard Taylor, 1836), 17.

[16] The Oracle of Reason (London: Thomas Paterson, 1842), 1:113.

[17] Ibid, 161.

[18] Ibid.

[19] Ibid.

[20] The Monthly Review (London: G. Henderson, 1836), 3:62.

[21] Ibid, 62, 67-68.

[22] Ibid, 63.

[23] Ibid, 62-63.

[24] The Christian Teacher and Chronicle of Beneficence (London: Charles Fox, 1838), 4:219.

[25] Ibid.

[26] Ibid, 220.

[27] Owen, Book, 22-24.

[28] J.R. Beard, The Religion of Jesus Christ Defended from the Assaults of Owenism (London: Simpkin, Marshall and Company, 1839), 233.

[29] Christian Teacher, 220.

[30] Ibid.

[31] Ibid, 220; New Moral World, 2:261.

[32] Christian Teacher, 220.

[33] New Moral World, 3:6.

[34] Ibid.

[35] Ibid.

[36] Edward Royle, Victorian Infidels: The Origins of the British Secularist Movement, 1791-1866 (Manchester: Manchester University Press, 1974), 69.

[37] The New Moral World (Leeds: Joshua Hobson, 1839), 6:957.

[38] Regarding Jones’ agnosticism, see: Report of the Discussion betwixt Mr Troup, Editor of the Montrose Review, on the part of the Philalethean Society, and Mr Lloyd Jones, of Glasgow, on the part of the Socialists, in the Watt Institution Hall, Dundee on the propositions, I That Socialism is Atheistical; and II That Atheism is Incredible and Absurd (Dundee: James Chalmers & Alexander Reid, 1839).

[39] Lloyd Jones, A Reply to Mr. Carlile’s Objections to the Five Fundamental Facts as Laid Down by Mr. Owen (Manchester: A. Heywood, 1837), 4.

[40] Ibid, 9.

[41] Ibid.

[42] Ibid, 10-11.

[43] New Moral World, 3:380.

[44] Ibid.

[45] Frederick R. Lees, Owenism Dissected: A Calm Examination of the Fundamental Principles of Robert Owen’s Misnamed “Rational System” (Leeds: W.H. Walker, 1838), 7.

[46] Ibid, 16.

[47] Ibid.

[48] Hudson, Christian Socialism, 4, 13.

[49] Ibid, 50-51.

[50] Ibid, 53-63.

[51] Ibid, 63-64, 66.

[52] Ibid, 66.

[53] Beard, Religion, 234.

[54] Hudson, Christian Socialism, 65-66.

[55] Giles, Socialism, 7.

[56] Ibid.

[57] Hudson, Christian Socialism, 72-81, 87-88.

[58] Ibid, 89.

[59] Report of the Discussion, 12.

[60] Public Discussion, between Robert Owen, Late of New Lanark, and the Rev. J.H. Roebuck, of Manchester (Manchester: A. Heywood, 1837), 106-107.

[61] Ibid, 107.

[62] New Moral World, 3:122.

[63] Owen, Book, 20.

[64] Hudson, Christian Socialism, 65; Giles, Socialism, 36.

[65] Owen, Book, 20.

On the Spring-Stone Debate

While finding a decisive victor in debates over semantics and historical interpretation often proves difficult, in the lively clash between historians David Spring and Lawrence Stone on social mobility into Britain’s landed elite, the former presented the stronger case. The discourse of the mid-1980s centered on how to define “open” when asking how open the upper echelon was to newcomers from 1540 to 1880 and, most importantly, to newcomers from the business world. On both counts, Spring offered the more compelling perspective on how one should regard the historical evidence and data Stone collected in his work An Open Elite?: namely, that it was reasonable to call the landed elite open to members of lower strata, including business leaders.

The debate quickly blurred the lines between the two questions. In his review of An Open Elite?, Spring noted that Stone showed a growth in elite families from 1540-1879, beginning with forty and seeing 480 join them, though not all permanently. Further, “Stone shows that regularly one-fifth of elite families were newcomers.”[1] In his reply, Stone declined to explore the “openness” of a twenty percent entry rate because it was, allegedly, irrelevant to his purpose: he was only interested in the entry of businessmen like merchants, speculators, financiers, and manufacturers, who did not come from the gentry, the relatively well-off stratum knocking at the gate of the landed elite. Spring “failed to distinguish between openness to new men, almost all from genteel families, who made a fortune in the law, the army, the administration or politics…and openness to access by successful men of business, mostly of low social origins.”[2]

True, Stone made clear who and what he was looking at in An Open Elite?: the “self-made men,” the “upward mobility by successful men of business,” and so on, but he leaned into, rather than brushed aside or contradicted, the idea of general social immobility.[3] For instance, observe the positioning of: “When analysed with care…the actual volume of social mobility has turned out to be far less than might have been expected. Moreover, those who did move up were rarely successful men of business.”[4] The notion of the landed elite being closed off in general was presented first, followed by the specific concern about businessmen. Stone went beyond business many times (for instance: “the degree of mere gentry penetration up into the elite was far smaller than the earlier calculations would indicate”[5]), positing that the landed elite was closed not only to businessmen but to newcomers generally, making his protestations against Spring rather disingenuous. Stone insisted to Spring that an open elite specifically meant, to historians and economists, a ruling class open to businessmen, not to all; but Stone himself opened the door to the question of whether the landed elite was accessible to everyone by answering nay in his book. The question was therefore fair game in the debate, and Spring was there to provide the more convincing answer. A group in which twenty percent were newcomers from below could, to most reasonable persons, be described as relatively open. Even more so with the sons of newcomers added in: the landed elite was typically one-third newcomers and sons of newcomers, as Spring pointed out. It should be noted that both scholars highlighted the challenge of using quantitative data to answer such historical questions. The collection and publication of such numbers is highly important, but it hardly ends the discussion — the question of openness persists, and any answer is inherently subjective.

However, it was the second point of contention where Spring proved most perceptive. He pointed out that while the gentry constituted 181 entrants into the landed elite during the observed centuries, those involved in business were not far behind, with 157, according to Stone’s data. This dwarfed the seventy-two from politics and seventy from the law. As Spring wrote, Stone’s quantitative tables conflicted with his text. Stone wrote in An Open Elite? that “most of the newcomers were rising parish gentry or office-holders or lawyers, men from backgrounds not too dissimilar to those of the existing county elite. Only a small handful of very rich merchants succeeded in buying their way into the elite…”[6] Clearly, even with different backgrounds, businessmen were in fact more successful at entering the landed elite than politicians and lawyers in the three counties Stone studied. What followed a few lines down in the book from Stone’s selected words made far more sense when considering the data: businessmen comprised “only a third of all purchasers…”[7] The use of “only” was perhaps rather biased, but, more significantly, one-third aligned not with the idea of a “small handful” but with 157 new entrants — a third business entrants, a bit more than a third gentry, and a bit less than a third lawyers, politicians, and so on. Spring could have stressed the absurdity, in this context, of the phrase “only a third,” but he was sure to highlight the statistic in his rejoinder, where he drove home the basic facts of Stone’s findings and reiterated that the landed elite was about as open to businessmen as to others. Here is where quantitative data truly shines in history, for numbers can be compared against each other. The question of whether a single number or percentage is big or small is messy and subjective, but whether one number is larger than another is not, and that provides clarity on issues like whether businessmen faced some special difficulty accessing Britain’s landed elite.
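Spring’s comparison can be verified with quick arithmetic. A minimal sketch, using only the four entry counts cited above from Stone’s tables:

```python
# Entrants into the landed elite by background, per Stone's data
# as cited in the discussion above.
entrants = {"gentry": 181, "business": 157, "politics": 72, "law": 70}

total = sum(entrants.values())  # 480 entrants in all
business_share = entrants["business"] / total  # about a third

print(total)                      # 480
print(round(business_share, 2))   # 0.33
```

The 157 businessmen amount to roughly a third of all 480 entrants, which is exactly the tension Spring identified: "only a third" sits awkwardly beside "a small handful."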

Stone failed to respond directly to this point, a key moment that weakened his case; instead he sidetracked into issues concerning the permanence of newcomers and by-county versus global perspectives on the data, areas he had explored earlier in his response, now awkwardly grafted onto Spring’s latest argument. The reader is largely left to pick up on what is being implied, based on Stone’s earlier comments on those issues. He noted that only twenty-five of the 157 businessmen came from the two counties distant from London, seemingly implying that Hertfordshire, the London-area county, had tipped the scales: merchants and others were not as likely to rise into the landed elite in more rural areas. What relevance that had is an open question — it seemed more a truism than an argument against Spring’s point, as London was a center for business, and that result was therefore to be expected. Regardless, he did not elaborate. The adjacent implication was that Spring was again seeing “everything from a global point of view which has no meaning in reality, and nothing from the point of view of the individual counties.”[8] In the debate, Stone often cautioned that counties should be examined individually, as they could be radically distinct — one should not simply look at the aggregated data. But Stone’s inherent problem, in his attempted rebuttal, was that he had used the global figures to make his overall case. He took three counties and lifted them up to represent a relatively closed elite in Britain as a whole. It would not do to now brush aside one county or focus heavily on another to bolster an argument. Spring, in a footnote, wrote something similar, urging Stone to avoid “making generalizations on the basis of one county. [Your] three counties were chosen as together a sample of the nation.”[9] To imply, as Stone did, that London could be ignored as some kind of anomaly contradicted his entire project.

Stone’s dodge into the permanence of entrants was likewise not a serious response to Spring’s observation that business-oriented newcomers nearly rivaled those from the gentry and far outpaced lawyers and politicians. He wrote that “of the 132 business purchasers in Hertfordshire, only 68 settled in for more than a generation…”[10] The transient nature of newcomers arose elsewhere in the debate as well. Here Stone moved the goalposts slightly: instead of counting mere entrants into the landed elite, look at who managed to remain. Only “4% out of 2246 owners” in the three counties over these 340 years were permanent newcomers from the business world.[11] The implication was that these numbers were both insignificant and peculiar to businessmen. Yet footnote five, the one attached to the statistic, undercut Stone’s point. There he admitted that Spring had correctly observed that politicians and office-holders were forced to sell their county seats, their magnificent mansions, and abandon the landed elite, as defined by Stone, at nearly the same rate as businessmen, at least in Hertfordshire. Indeed, it was odd that Stone crafted this response, given Spring’s earlier dismantling of the issue. The significance of Stone’s rebuttal was therefore unclear. If only sixty-eight businessmen lasted more than a generation, how did that compare to lawyers, office-holders, and the gentry? Likewise, if four percent of businessmen established permanent generational residence among the landed elite, what percentages did the other groups earn? Again, Stone did not elaborate. But from his admission and from what Spring calculated, it seems unlikely Stone’s numbers, put in context, would have helped his case. Even more than the aggregate-versus-county comment, this was a non-answer.

The debate would conclude with a non-answer as well. There was of course more to the discussion — it should be noted Stone put up an impressive defense of the selection of his counties and the inability to include more, in response to Spring questioning how representative they truly were — but Spring clearly showed, using Stone’s own evidence, that the landed elite was what a reasonable person could call open to outsiders in general and businessmen in particular, contradicting Stone’s positions on both in An Open Elite? Stone may have recognized this, given the paucity of counterpoints in his “Non-Rebuttal.” Spring would, in Stone’s view, “fail altogether to deal in specific details with the arguments used in my Reply,” and therefore “there is nothing to rebut.”[12] While it is true that Spring, in his rejoinder, did not address all of Stone’s points, he did focus tightly on the main ideas discussed in the debate and this paper. So, as further evidence that Spring constructed the better case, Stone declined to return to Spring’s specific and central arguments about his own data. He pointed instead to other research that more generally supported the idea of a closed elite. Stone may have issued a “non-rebuttal” not because Spring had ignored various points, but rather because he had stuck to the main ones, and there was little to be said in response.


[1] Eileen Spring and David Spring, “The English Landed Elite, 1540-1879: A Review,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 2 (Summer 1985): 152.

[2] Lawrence Stone, “Spring Back,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 2 (Summer 1985): 168.

[3] Lawrence Stone, An Open Elite? England 1540-1880, abridged edition (Oxford: Oxford University Press, 1986), 3-4.

[4] Ibid, 283.

[5] Ibid, 130.

[6] Ibid, 283.

[7] Ibid.

[8] Stone, “Spring Back,” 169.

[9] Spring, “A Review,” 154.

[10] Stone, “Spring Back,” 171.

[11] Ibid.

[12] Lawrence Stone, “A Non-Rebuttal,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 3 (Autumn 1985): 396. For Spring’s rejoinder, see Eileen Spring and David Spring, “The English Landed Elite, 1540-1879: A Rejoinder,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 3 (Autumn 1985): 393-396.

Did Evolution Make it Difficult for Humans to Understand Evolution?

It’s well known that people are dreadful at comprehending and visualizing large numbers such as a million or a billion. This is understandable in terms of our development as a species: grasping the small numbers of, say, your clan compared to a rival one you’re about to be in conflict with, or understanding amounts of resources like food and game in particular places, would aid survival (pace George Dvorsky). But there was little evolutionary reason to adeptly process a million of something, intuitively knowing the difference between a million and a billion as easily as we know four versus six. A two-second difference, for instance, we get — but few intuitively sense that a million seconds is about 11 days and a billion seconds about 31 years (making for widespread shock on social media).
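The seconds comparison is easy to verify. A quick sketch, assuming a 365.25-day year:

```python
# How long are a million and a billion seconds, in human terms?
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million_in_days = 1_000_000 / SECONDS_PER_DAY        # about 11.6 days
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR  # about 31.7 years

print(f"A million seconds is {million_in_days:.1f} days")
print(f"A billion seconds is {billion_in_years:.1f} years")
```

A thousandfold jump in the number turns days into decades, which is precisely the kind of leap our intuitions fail to register.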

As anthropologist Caleb Everett, who pointed out that a word for “million” did not even appear until the 14th century, put it: “It makes sense that we as a species would evolve capacities that are naturally good at discriminating small quantities and naturally poor at discriminating large quantities.”

Evolution, therefore, made it difficult to understand evolution, which deals with slight changes to species over vast periods of time, resulting in dramatic differences (see Yes, Evolution Has Been Proven). It took 16 million years for Canthumeryx, similar in look and size to a deer, to evolve into, among other new species, the 18-foot-tall giraffe. It took 250 million years for the first land creatures to finally have descendants that could fly. It stands to reason that such statements seem incredible to many people not only because they contradict old religious tales that faith supports and evidence does not, but also because it’s hard to grasp how much time that actually is. Perhaps it would be easier to comprehend and visualize how small genetic changes between parent creatures and offspring could add up, eventually producing descendants that look nothing like their ancient ancestors, if we could better comprehend and visualize the timeframes, the big numbers, in which evolution operates. 16 million years is a long time — long enough.

This is hardly the first time it’s been suggested that its massive timescales make evolution tough to envision and accept, but it’s interesting to think about how this fact connects to our own evolutionary history and survival needs.

Just one of those wonderful oddities of life.


Suicide is (Often?) Immoral

Suicide as an immoral act is typically a viewpoint of the religious — it’s a sin against God, “thou shalt not kill,” and so on. For those free of religion, and of course some who aren’t, ethics are commonly based on what does harm to others, not yourself or deities — under this framework, the conclusion that suicide is immoral in many circumstances is difficult to avoid.

A sensible ethical philosophy considers physical harm and psychological harm. These harms can be actual (known consequences) or potential (possible or unknown consequences). The actual harm of, say, shooting a stranger in the heart is that person’s suffering and death. The potential harm on top of that is wide-ranging: if the stranger had kids it could be their emotional agony, for instance. The shooter simply would not know. Most suicides will entail these sorts of things.

First, most suicides will bring massive psychological harm, lasting many years, to family and friends. Were I to commit suicide, this would be a known consequence, known to me beforehand. Given my personal ethics, aligning with those described above, the act would then necessarily be unethical, would it not? This seems to hold true, in my view, even given my lifelong depression (I am no stranger to visualizations of self-termination and its aftermath, though fortunately with more morbid curiosity than seriousness to date; medication is highly useful and recommended). One can suffer and, by finding relief in nonexistence, cause suffering. As a saying goes, “Suicide doesn’t end the pain, it simply passes it to someone else.” Perhaps the more intense my mental suffering, the less unethical the act (more on this in a moment), but given that the act will cause serious pain to others whether my suffering be mild or extreme, it appears from the outset to be immoral to some degree.

Second, there are the potential harms, always trickier. There are many unknowns that could result from taking my own life. The potential harms could be more extreme psychological harms, a family member driven to severe depression or madness or alcoholism. (In reality, psychological harms are physical harms — consciousness is a byproduct of brain matter — and vice versa, so stress on one affects the other.) But they could be physical as well. Suicide, we know, is contagious. Taking my own life could inspire others to do the same. Not only could I be responsible for contributing, even indirectly, to the death of another person, I would also have a hand in all the actual and potential harms that result from his or her death! It’s a growing moral burden.

Of course, all ethics are situational. This is accepted by just about everyone — it’s why killing in self-defense seems less wrong than killing in cold blood, or why completely accidental killings seem less unethical than purposeful ones. These things can even seem ethically neutral. So there will always be circumstances that change the moral calculus. One questions whether old age alone is enough (one of your parents or grandparents taking their own lives would surely be about as traumatic as anyone else’s doing so), but intense suffering from age or disease could make the act less unethical, in the same way deeper and deeper levels of depression may do the same. Again, less unethical is used here. Can the act reach an ethically neutral place? The key may simply be the perceptions and emotions of others. Perhaps with worsening disease, decay, or depression, a person’s suicide would be less painful to friends and family. It would be hard to lose someone in that way, but, as we often hear when someone passes away of natural but terrible causes, “She’s not suffering anymore.” Perhaps at some point the scale is tipped, with too much agony for the individual weighing down one side and too much understanding from friends and family lifting up the other. One is certainly able to visualize this — no one wants their loved ones to suffer, and the end of their suffering can be a relief as well as a sorrow, constituting a reduction in actual harm — and this is no doubt reality in various cases. This writing simply posits that not all suicides will fall into that category (many are unexpected), and, while a distinguishing line may be frequently impossible to see or determine, the suicides outside it are morally questionable due to the ensuing harm.

If all this is nonsense, and such sympathetic understanding of intense suffering brings no lesser amount of harm to loved ones, then we’re in trouble, for how else can the act break free from that immoral place, for those operating under the moral framework that causing harm is wrong?

It should also be noted that the rare individuals without any real friends or family seem to have less moral culpability here. And perhaps admitted plans and assisted suicide diminish the immorality of the act, regardless of the extent of your suffering — if you tell your loved ones in advance you are leaving, if they are there by your side in the hospital to say goodbye, isn’t that less traumatizing and painful than a sudden, unexpected event, with your body found cold in your apartment? In these cases, however, the potential harms, while some may be diminished in likelihood alongside the actual, still abound. A news report on your case could still inspire someone else to commit suicide. One simply cannot predict the future, all the effects of your cause.

As a final thought, it’s difficult not to see some contradiction in believing in suicide prevention, encouraging those you know or those you don’t not to end their lives, and believing suicide to be ethically neutral or permissible. If it’s ethically neutral, why bother? If you don’t want someone to commit suicide, it’s because you believe they have value, whether inherent or simply to others (whether one can have inherent value without a deity is for another day). And destroying that value, bringing all that pain to others or eliminating all of the individual’s potential positive experiences and interactions, is considered wrong, undesirable. Immorality and prevention go hand-in-hand. But with folks who are suffering we let go of prevention, even advocating for assisted suicide, because only in those cases do we begin to consider suicide ethically neutral or permissible.

In sum, one finds oneself believing that if causing harm to others is wrong, and suicide causes harm to others, suicide must in some general sense be wrong — but acknowledging that there must be specific cases and circumstances where suicide is less wrong, approaching ethical neutrality, or even breaking into it.


Expanding the Supreme Court is a Terrible Idea

Expanding the Supreme Court would be disastrous. We hardly want an arms race in which the party that controls Congress and the White House expands the Court to achieve a majority. It may feel good when the Democrats do it, but it won’t when it’s the Republicans’ turn. 

The problem with the Court is that the system of unwritten rules, of the “gentlemen’s agreement,” is completely breaking down. There have been expansions and nomination fights or shenanigans before in U.S. history, but generally when a justice died or retired a Senate controlled by Party A would grudgingly approve a new justice nominated by a president of Party B — because eventually the situation would be reversed, and you wanted and expected the other party to show you the same courtesy. It was reciprocal altruism. It all seemed fair enough, because apart from a strategic retirement, it was random luck — who knew when a justice would die? 

The age of unwritten rules is over. The political climate is far too polarized and hostile for such a system to function. When Antonin Scalia died, Obama should have been able to install Merrick Garland on the Court — Mitch McConnell and the GOP Senate infamously wouldn’t even hold a vote, much less vote Garland down, for nearly 300 days. They simply delayed until a new Republican president could install Neil Gorsuch. Democrats attempted to block this appointment, as well as Kavanaugh (replacing the retiring Kennedy) and Barrett (replacing the late Ginsburg). The Democrats criticized the Barrett nomination for occurring too close to an election, mere weeks away, the same line the GOP had used with Garland, and conservatives no doubt saw the investigation into Kavanaugh as an obstructionist hit job akin to the Garland case. But it was entirely fair for Trump to replace Kennedy and Ginsburg, as it was fair for Obama to replace Scalia. That’s how it’s supposed to work. But that’s history — and now, with Democrats moving forward on expansion, things are deteriorating further.

This has been a change building over a couple of decades. Gorsuch, Kavanaugh, and Barrett received just four Democratic votes. The justices Obama was able to install, Kagan and Sotomayor, received 14 Republican votes. George W. Bush’s Alito and Roberts received 26 Democratic votes. Clinton’s Breyer and Ginsburg received 74 Republican votes. George H.W. Bush’s nominees, Souter and Thomas, won over 57 Democrats. When Ronald Reagan nominated Kennedy, more Democrats voted yes than Republicans, 51-46! Reagan’s nominees (Kennedy, Scalia, Rehnquist, O’Connor) won 159 Democratic votes, versus 199 Republican. Times have certainly changed. Partisanship has poisoned the well, and obstruction and expansion are the result.

Some people defend the new normal, correctly noting the Constitution simply allows the president to nominate and the Senate to confirm or deny. Those are the written rules, so that’s all that matters. And that’s the problem, the systemic flaw. It’s why you can obstruct and expand and break everything, make it all inoperable. And with reciprocal altruism, fairness, and bipartisanship out the window, it’s not hard to imagine things getting worse. If a party could deny a vote on a nominee for the better part of a year (shrinking the Court to eight, one notices, which can be advantageous), could it do so longer? Delaying for years, perhaps four or eight? Why not, there are no rules against it. Years of obstruction would become years of 4-4 votes on the Court, a completely neutered branch of government, checks and balances be damned. Or, if each party packs the Court when it’s in power, we’ll have an ever-growing Court, a major problem. The judiciary automatically aligning with the party that also controls Congress and the White House is again the serious weakening of a check and balance. Democrats may want a stable, liberal Court around some day to strike down rightwing initiatives coming out of Congress and the Oval Office. True, an expanding Court will hurt and help parties equally, and parties won’t always be able to expand, but for any person who sees value in real checks on legislative and executive power, this is a poor idea. All the same can be said for obstruction.

Here is a better idea. The Constitution should be amended to reflect the new realities of American politics. This is to preserve functionality and meaningful checks and balances, though admittedly the only way to save the latter may be to undercut it in a smaller way elsewhere. The Court should permanently be set at nine justices, doing away with expansions. Election year appointments should be codified as obviously fine. The selection of a new justice must pass to one decision-making body: the president, the Senate, the House, or a popular vote by the citizenry. True, doing away with a nomination by one body and confirmation by another itself abolishes a check on power, but this may be the only way to avoid the obstruction, the tied Court, the total gridlock until a new party wins the presidency. It may be a fair tradeoff, sacrificing a smaller check for a more significant one. However, this change could be accompanied by much-discussed term limits, say 16, 20, or 24 years, for justices. So while only one body could appoint, the appointment would not last extraordinary lengths of time.
