Were Hitler and the Nazis Socialists? Only Kind Of

How socialist were the National Socialists?

We know there will be times when an organization or national name doesn’t tell the whole story. As Jacobin writes, how democratic is the Democratic People’s Republic of (North) Korea? It’s hardly a republic either. (Hitler once asked, “Is there a truer form of Democracy” than the Reich — dictators, apparently, misuse terms.) Or look to Likud (formally the National Liberal Movement), one of Israel’s major conservative parties. And if the Christian Knights of the Ku Klux Klan were Christians, do they represent Christianity at large? So let us examine the Nazis and see if they fall into this category.

The first task, as always, is to define socialism. Like today, “socialism” and “communism” were used by some in the early 20th century to mean the same thing (communism) and by others to mean different things. As a poet from the 1880s put it, there are indeed “two socialisms”: the one where the workers own their workplaces and the one where the government owns the workplaces. We must remember these differing uses of the terms, but to make it easy we will simply be open to both: “Were the Nazis socialists?” can therefore mean either. There is more to it than that, of course, such as direct democracy and large government programs. But these additions are not sufficient qualifiers. There will be whining that the Nazi regime had large government programs and thus it was socialist, but if that’s the criterion, then so were all the nations fighting the Nazis, including the U.S. (remember our huge public jobs programs and Social Security Act of the era?). Advanced societies tend to have sizable State services — and you can have these things without being truly socialist. If one has even a minimal understanding of socialist thought and history, then the conclusion that no country can earnestly be called socialist without worker or State ownership of business is hardly controversial. To speak of socialism was to speak of the elimination of private ownership of the means of production (businesses, “private property” in socialist parlance), with ownership transferred away from capitalists to one of the two aforementioned bodies.

The German Workers’ Party, founded in 1919 in Munich by Anton Drexler and renamed the National Socialist German Workers’ Party in 1920, included actual socialists. Gregor and Otto Strasser, for instance, supported nationalization of industry — it’s simply not accurate to say the rhetoric of ending capitalism, building socialism, of revolution, workers, class, exploitation, and so on was solely propaganda. It was a mix of honest belief and empty propagandistic promises to attract voters in a time of extreme poverty and economic crisis, all depending on which Nazi was using it, as we will see. Socialists can be anti-Semites, racists, patriots, and authoritarians, just like non-socialists and people of other belief systems. (I’ve written more elsewhere about the separability of ideologies and horrific things, if interested, typically using socialism and Christianity as examples. The response to “Nazis were socialists, so socialism is pure evil” is of course “Nazis were also Christians — Germany was an extremely religious nation — so is Christianity also pure evil? If the Nazis distorted Christianity, changing what it fundamentally was with their ‘Positive Christianity,’ advocated for in the Nazi platform, is true Christianity to be abandoned alongside true socialism if that has been distorted as well?”)

The meaning of socialism was distorted by Hitler and other party members. To Hitler, socialism meant the common weal, the common good for a community. While rhetorically familiar, this was divorced from ideas of worker or State ownership of the means of production. In a 1923 interview with the American journalist George Sylvester Viereck (later republished by The Guardian), Hitler made this clear. After vowing to end Bolshevism (communism), Hitler got the key question:

“Why,” I asked Hitler, “do you call yourself a National Socialist, since your party programme is the very antithesis of that commonly accredited to socialism?”

“Socialism,” he retorted, putting down his cup of tea, pugnaciously, “is the science of dealing with the common weal. Communism is not Socialism. Marxism is not Socialism. The Marxians have stolen the term and confused its meaning. I shall take Socialism away from the Socialists.

“Socialism is an ancient Aryan, Germanic institution. Our German ancestors held certain lands in common. They cultivated the idea of the common weal. Marxism has no right to disguise itself as socialism. Socialism, unlike Marxism, does not repudiate private property. Unlike Marxism, it involves no negation of personality, and unlike Marxism, it is patriotic.

“We might have called ourselves the Liberal Party. We chose to call ourselves the National Socialists. We are not internationalists. Our socialism is national. We demand the fulfilment of the just claims of the productive classes by the state on the basis of race solidarity. To us state and race are one.”

Hitler’s socialism, then, had to do with the common good of one race, united as a nation around ancestral Aryan land and identity. What socialism meant to Hitler and other Nazis can only be understood through the lens of racial purity and extreme nationalism. They come first, forming the colander, and everything else is filtered through. In the same way, what Christianity meant to Hitler was fully shaped by these obsessions: it was a false religion invented by the Jews (whom Jesus fought!), but could at the same time be used to justify their destruction. Bolshevism was likewise labeled a sinister Jewish creation (was not Marx ethnically Jewish?): “The Jewish doctrine of Marxism rejects the aristocratic principle of Nature…” Further, when Hitler criticized capitalists, it was often specific: Germany needed “delivery from the Jewish capitalist shackles,” the Jews being to blame for economic problems. Consumed by conspiratorial bigotry, and often contradictory and nonsensical, he would attack both sides of any issue if they smacked to him of Judaism. But we see Hitler’s agreement that National Socialism was the “antithesis of that commonly accredited to socialism”: there would still be private property, private ownership of the means of production; the internationalism and the racial diversity and tolerance at times preached by other socialists would be rejected. (So would class conflict: “National Socialism always bears in mind the interests of the people as a whole and not the interests of one class or another.”) Racial supremacy and the worship of country — elements of the new fascism, and the latter a typical element of the Right, not traditional socialism — were in order. (If these things were socialism, then again the nations fighting Germany were socialist: Jim Crow laws in America were used as models by Nazi planners, there existed devotion to American exceptionalism and greatness, and so forth.)

Hitler often repeated his view. On May 21, 1935:

National Socialism is a doctrine that has reference exclusively to the German people. Bolshevism lays stress on international mission. We National Socialists believe a man can, in the long run, be happy only among his own people… We National Socialists see in private property a higher level of human economic development that according to the differences in performance controls the management of what has been accomplished enabling and guaranteeing the advantage of a higher standard of living for everyone. Bolshevism destroys not only private property but also private initiative and the readiness to shoulder responsibility.

In a December 28, 1938 speech he declared:

A Socialist is one who serves the common good without giving up his individuality or personality or the product of his personal efficiency. Our adopted term ‘Socialist’ has nothing to do with Marxian Socialism. Marxism is anti-property; true socialism is not. Marxism places no value on the individual or the individual effort, or efficiency; true Socialism values the individual and encourages him in individual efficiency, at the same time holding that his interests as an individual must be in consonance with those of the community.

In Hitler’s words, he who believed in “Germany, people and land — that man is a Socialist.” Otto Strasser, in his 1940 book Hitler and I, wrote that Hitler told him in 1930 that the revolution would be racial, not economic; that democracy should not be brought into the economic sphere; and that large corporations should be left alone. Strasser replied, “If you wish to preserve the capitalist regime, Herr Hitler, you have no right to talk of socialism. For our supporters are socialists, and your programme demands the socialisation of private enterprise.” Hitler responded:

That word ‘socialism’ is the trouble… I have never said that all enterprises should be socialised. On the contrary, I have maintained that we might socialise enterprises prejudicial to the interests of the nation. Unless they were so guilty, I should consider it a crime to destroy essential elements in our economic life… There is only one economic system, and that is responsibility and authority on the part of directors and executives. That is how it has been for thousands of years, and that is how it will always be. Profit-sharing and the workers’ right to be consulted are Marxist principles. I consider that the right to exercise influence on private enterprise should be conceded only to the state, directed by the superior class… The capitalists have worked their way to the top through their capacity, and on the basis of this selection, which again only proves their higher race, they have a right to lead. Now you want an incapable government council or works council, which has no notion of anything, to have a say; no leader in economic life would tolerate it.

Otto Strasser and his brother grew disillusioned that the party wasn’t pursuing actual socialism, and upset that Hitler supported and worked with big business, industrialists, capitalists, German princes. Otto was expelled from the party in 1930. Gregor resigned two years later.

The referenced National Socialist Program, or 25-point Plan, of 1920 demanded the “nationalization of all enterprises (already) converted into corporations (trusts),” “profit-sharing in large enterprises,” “communalization of the large department stores, which are to be leased at low rates to small tradesmen,” and nationalization “of land for public purposes.” Hitler clarified that since “the NSDAP stands on the platform of private ownership,” the nationalization of land for public use “concerns only the creation of legal opportunities to expropriate if necessary, land which has been illegally acquired or is not administered from the view-point of the national welfare. This is directed primarily against the Jewish land-speculation companies.” Large department stores were largely Jewish-run. And above we saw Hitler’s resistance to profit-sharing. Further, nationalization of businesses would be limited, as noted, to trusts. It could be that the disproportionately strong representation of Jews in ownership of big German companies played a role here, too. Now, a “secret” interview with Hitler that some scholars suspect is a forgery contains the quote: “Point No. 13 in that programme demands the nationalisation of all public companies, in other words socialisation, or what is known here as socialism,” yet even this limits the promise to publicly traded companies, and Hitler goes on, tellingly, to speak of “owners” and their “possessions,” “property owners,” “the bourgeoisie,” etc. that, while “controlled” by the State, plainly exist independently of it in his socialist vision. Nevertheless, the program has a socialist flair, making Otto Strasser’s comment in 1930 comprehensible, yet its timidity vis-à-vis economics (compare it to the German communist party’s platform of 1932) and its embrace of nationalism and rejection of internationalism would understandably make some ask the question George Sylvester Viereck did in 1923.

This socialist tinge, apart from attacks on Jewish businesses, was forgotten when the Nazis came to power. Historian Karl Bracher said such things were, to Hitler, “little more than an effective, persuasive propaganda weapon for mobilizing and manipulating the masses. Once it had brought him to power, it became pure decoration: ‘unalterable,’ yet unrealized in its demands for nationalization and expropriation, for land reform…” Indeed, while other Western nations were bringing businesses under State control to combat the Depression, the Nazis in the 1930s ran a program of privatization. Many firms and sectors were handed back to the private sphere. The Nazis valued private ownership for its efficiency. The German economy was State-directed in the sense that the government made purchases, contracting with private firms to produce commodities, such as armaments, and regulated business in many ways, as advanced nations often do, including the U.S. Historian Ian Kershaw wrote: “Hitler was never a socialist. But although he upheld private property, individual entrepreneurship, and economic competition, and disapproved of trade unions and workers’ interference in the freedom of owners and managers to run their concerns, the state, not the market, would determine the shape of economic development. Capitalism was, therefore, left in place. But in operation it was turned into an adjunct of the state.” While the regime incentivized business and regulated it, especially in preparation for war, intervening to keep entities aligned with State goals and ideology, “there occurred hardly any nationalizations of private firms during the Third Reich. In addition, there were few enterprises newly created as state-run firms,” summarized Christoph Buchheim and Jonas Scherner in The Journal of Economic History. Companies retained their independence and autonomy: they still “had ample scope to follow their own production plans… The state normally did not use power to secure the unconditional support of industry,” but rather offered attractive contracts. Socialism cannot simply be regulation of and incentives for private companies, to meet national goals — again, this is what non-socialist states do every day (and the U.S. war economy had plenty of centrally planned production goals and quotas, contracts, regulations, rationing, and even government takeovers).

The betrayal of the program was noticed at the time. A 1940 report said that:

Economic planks of the “unalterable program” on the basis of which the National Socialists campaigned before they came to power in 1933 were designed to win the support of as many disgruntled voters as possible rather than to present a coordinated plan for a new economic system. Within the party there has always been, and there still is, serious disagreement about the extent to which the “socialist” part of the party’s title is to be applied… The planks calling for expropriation have been least honored in the fulfillment of this platform; in practice, the economic reorganizations undertaken by the Nazis have followed a very different pattern from the one which was originally projected.

That pattern was tighter regulation, generous contracts, economic recovery programs for ordinary people, and so on, though the occasional State takeover did occur. All this makes sense given what we’ve seen. The Nazis weren’t interested in the socialism of the Marxists, the communists. Hitler, in his words, rejected “the false notion that the economic system could exist and operate entirely freely and entirely outside of any control or supervision on the part of the State,” but business ultimately belonged to the capitalists.

The Bamberg Conference of 1926 was a key moment for the direction of the Nazi Party: would it go in an earnestly socialist direction or simply use this new, diluted version Hitler was fond of? There were ideological divisions that had to be addressed. Hitler, as party leader since 1921 and with the conference officially establishing the Führerprinzip (absolute power of the party leader), was likely to win from the beginning. Gregor Strasser led the push at this convening of Nazi leaders for socialist policies, backed by others from Germany’s northern urban, industrial areas. Leaders from the rural south stood opposed; they wanted to instead lean into nationalism, populism, racialism. One such policy was the seizing of the estates of rich nobles, the landed princes — did the National Socialist Program not say land could be expropriated for the common good? “The law must remain the law for aristocrats as well,” Hitler said. “No questioning of private property!” This was communism, that old Jewish plot. Hitler made sure the idea, being pursued at the time by the social democratic and communist parties, died in its cradle. “For us there are today no princes, only Germans,” he said. “We stand on the basis of the law, and will not give a Jewish system of exploitation a legal pretext for the complete plundering of our people.” Again, the rejection of the class war and overthrow of the rich inherent to socialism and instead a simple focus on the Jews — Hitler was “replacing class with race,” as one historian put it, swapping out “the usual terms of socialist ideology.” Hitler was “a reactionary,” Joseph Goebbels realized. After this, Strasser backed off, and the socialist push in the party was quelled.

Similar to State ownership, while the German Workers Party in 1919 spoke of worker cooperatives — worker ownership — the Nazis had no actual interest in this, in fact making cooperative entities targets to be destroyed in Germany and conquered nations because they smacked of Marxism. A dictatorship isn’t going to give ordinary people power.

Outside observers continued to mock Hitler’s socialism — this isn’t simply a tactic of an embarrassed American Left today. As we’ve seen, people of the era noticed the meaning was changed and watched how the Nazis acted when in power. For Leon Trotsky, an actual communist-style socialist writing in 1934, Nazi “socialism” was always in derisive quotation marks. “The Nazis required the programme in order to assume the power; but power serves Hitler not at all for the purpose of fulfilling the programme,” with “the social system untouched,” the “class nature” and competition of capitalism alive and well. Stalin said in 1936, “The foundation of [Soviet] society is public property: state, i.e., national, and also co-operative, collective farm property. Neither Italian fascism nor German National-‘Socialism’ has anything in common with such a society. Primarily, this is because the private ownership of the factories and works, of the land, the banks, transport, etc., has remained intact, and, therefore, capitalism remains in full force in Germany and in Italy.”

When one considers how actual socialists were treated under the Reich, the point is driven home.

Communist and social democratic politicians were purged from the legislature and imprisoned. Dachau, the first concentration camp, first held political enemies such as socialists. In an article in The Guardian from March 21, 1933, the president of the Munich police said, “Communists, ‘Marxists’ and Reichsbanner [social democratic] leaders” would be imprisoned there. The next year reports of the horrid conditions inside emerged, such as that in The New Republic, likewise noting the “Social Democrats, Socialist Workers’ party members,” and others held within. Part of the impetus for the Night of the Long Knives in 1934, in which Hitler had Nazi Party members killed, was too much talk of workers, actual socialism, anti-capitalist ideas. Gregor Strasser was murdered that night. Otto fled for his life.

There is a famous saying that is in fact authentic. Lutheran pastor Martin Niemöller of Germany often said various versions of the following after the war:

First they came for the Communists
And I did not speak out
Because I was not a Communist

Then they came for the Socialists
And I did not speak out
Because I was not a Socialist

Then they came for the trade unionists
And I did not speak out
Because I was not a trade unionist

Then they came for the Jews
And I did not speak out
Because I was not a Jew

Then they came for me
And there was no one left
To speak out for me

One might wonder why the socialists would be coming for the socialists. But if this new socialism simply had to do with race and land, opposing State or worker ownership, it begins to make sense. You have to take care of ideological opponents, whether through a conference or a concentration camp. In response, communists and socialists took part in the valiant resistance to Nazism in Germany and throughout Europe.

The recent articles offering a Yes or No answer to the question “Were Hitler and the Nazis Socialists?” are far too simplistic. Honest history can’t always be captured in a word. Here is an attempt to do so in a paragraph:

Foundationally, socialists wanted either worker ownership of workplaces or government ownership of workplaces, the removal of capitalists. The Nazi Party had actual socialists. But over time they grew frustrated that the party wasn’t pursuing socialism; some left. Other members, including party leader Adolf Hitler, opposed actual socialism, and changed the definition of socialism to simply mean unity of the Aryan race and its collective flourishing. True to this, when he seized power, Hitler did not implement socialism, leaving capitalists in place, and instead crushed those speaking of actual socialism.


Faith and Intelligence

Atheists and agnostics are sometimes accused of seeing themselves as more intelligent than people of faith. Which raises the question: as a former believer, do I consider myself to be smarter now that I am a freethinker? In a sense yes, in that I’ve gained knowledge I did not possess before and have developed critical thinking skills that I likewise used to lack. It feels like learning an instrument, which is in fact a good analogy. People who learn the violin are from one perspective smarter than they were before, with new knowledge and abilities, a brain rewired, and indeed smarter than me, and others, in that respect. But this is a rather informal meaning of intelligence. Virtually anyone can learn the violin, and virtually anyone can find the knowledge and skills I did. Now we’re talking about capacity. We’ve entered the more formal definition of intelligence, under which the answer is obviously no, I’m not smarter than my old self or believers. So the answer is yes and no, as is often the case with variable meanings.

Consider this in detail. There are many definitions of “intelligence” (“smart” can simply be used as a synonym). The formal definition of intelligence generally has to do with the ability or capacity to gain knowledge and skills. You wouldn’t grow in intelligence by gaining knowledge and skills, but rather by somehow expanding the capacity to do so in the first place. (Granted, it could well be that doing the former does impact the latter, a virtuous cycle.) The human and the ape have different capacities, a sizable intelligence gap. Humans have differences too, in terms of genetic predispositions granted by the birth lottery and environmental factors. An ape won’t get far on the violin, and some humans will struggle more, or less, than others to learn it. Human beings have greater or weaker baseline capacities in various areas, different intelligence levels, but most can learn the basics (the idea that enough practice can make anyone advanced or expert has been thoroughly blown up). So under the formal framework, the believer and the skeptic have roughly the same intelligence on average, with the same ability to discover certain knowledge and develop certain skills — whether that ever happens is a separate question entirely, coming down to luck, life experiences, environment, and so on. While studies have often found that religiosity correlates with lower IQ, the difference is very small, with possible causes ranging from autistic persons helping tip the scales for the non-religious to people of faith relying too much on intuition rather than logic or reason when problem-solving, a problem of “behavioral biases rather than impaired general intelligence” — and behavior can be changed, very different from capacity. If this latter hypothesis is true, it would be like giving a violin proficiency test to both violin students and non-students and marveling that the non-students underperformed. Had my logic and reasoning been tested before my transition from pious to dubious, I suspect the scores would have been lower than they are today, as I learned many critical thinking skills during and after, but this is not about capacity; it’s just learning anyone can do. Under the more serious definition of intelligence, I don’t believe I’m smarter than my former self or the faithful.

But now we can work under the informal, colloquial meaning, where growing intelligence simply has something to do with a growing base of knowledge and new skill sets. Do we not often say “He’s really smart” of someone who knows copious facts about astronomy or history? Don’t we consider a woman highly intelligent who speaks multiple languages, or is a blazingly fast coder? When we suspect that if we devoted the same time and energy to those things, we could probably hold our own? (Rightly or wrongly, as noted. Either way, we often don’t think as much about capacity as simple acquisition.) This writer, at least, sometimes uses these flattering terms to describe the possession of much information or of abilities we ourselves lack.

In that sense, I certainly believe I’m smarter than I used to be. I realize how insulting that sounds, given that the natural extension is that I consider myself smarter than religious persons. But I don’t know how unique that is. When the weak Christian becomes a strong Christian through reading and thinking and conversing, she may consider herself smarter than before — perhaps even more knowledgeable and a more sensible thinker than an atheist! In other words, more intelligent than a nonbeliever (wouldn’t you have to be a fool to think existence, the universe, is possible without a creator being?). When a man learns vast amounts about aerophysics, he sees himself as smarter than before and by extension others on this topic; when he masters the skill of building planes that fly, the same. If intelligence simply means more knowledgeable about or skilled at something, everyone thinks they’re smarter than their past selves and by extension other people, with, obviously, many clashing and contradictory opinions between individuals (the Christian and the atheist both thinking they are more knowledgeable, for instance).

Some examples are in order from my personal growth, just to illuminate my perspective better. I’ll offer two. I used to believe that, among other reasons, the gospels could be trusted as being entirely factual because they were written 30-40 years after the alleged miraculous events they describe (at least, Mark was; the others were later). “Too soon after to be fictional.” But then I learned something. Other religions, which I disbelieved, had much shorter timespans between supposed events and written accounts! Made-up nonsense about what happened on Day X to Person A was being written about and earnestly believed just a year or two later, in some cases just a day or two later — birthing new religions and stories still believed today! That was just the way humans operated; it’s never too soon for fictions, things can be invented and spread immediately, never to be tamped down. So, I’d gained knowledge. I felt more intelligent because of this — even embarrassed at my old ways of thinking. Not right away, but eventually. How could anyone learn this and not change their way of thinking accordingly, realizing that this argument for the gospels’ trustworthiness is simply dreadful and should be retired?

Since the first example was in the knowledge category, the second can concern critical thinking skills, and is neatly paired with the first. I used to suppose that it was sensible to believe in the gospels (and of course God) because they could not be disproved. After all, why not? If you can’t disprove them, they could be true. So why not continue to believe the gospels to be full of truths rather than fictions, as you’ve been raised or long held? Eventually I started thinking more critically, more clearly. This was the argument from ignorance fallacy: the notion that if something hasn’t been disproved, that’s reason to suppose there’s truth to it. It’s rather irrational — there are a million stories from all human religions that cannot be disproved…therefore it’s reasonable to think they are true? You can’t disprove that the Greek gods formed Mount Olympus, that Allah or Thor exists, that the god Krishna spoke with Arjuna as described in the Bhagavad Gita, or that we’re living in a simulation. The ocean of unprovable things is infinite and of course highly contradictory, with many sets of things that cannot both or all be true. There are too many fictions in this ocean — you may believe in one of them. To only apply the argument from ignorance to your own faith, to believe that the gospels are true because they cannot be disproved but not all these other things for the precise same reason, is simple bias. Mightn’t it be more sensible to believe that which can be proven, rather than what cannot be disproven? That would be, in stark contrast, a solid justification. Now on the other side of the gulf, I can barely understand how I ever thought in such fallacious ways. But I simply developed better, more logical ways of thinking over time, and as with the development of any skill, I can’t help but feel more intelligent because of it.

One does regret how derogatory this may seem to many readers. Yet it is impossible to avoid. I consider myself more intelligent than I used to be because I have knowledge I did not possess before and ways of thinking I consider better than prior ones. By extension, it seems I have to consider myself more intelligent, in this area, than those who, like my past self, do not possess that knowledge or those habits of critical thinking. However (and apologies for growing repetitive, it stems from a desire not to offend too much), this is no different from any person who uses the informal meaning of intelligence in any context. If you use that definition, and believe yourself to be more knowledgeable about the contents of the Bible or biology, or more skilled at mathematics or reading people, than before or compared to others, you consider yourself smarter than other people, in those areas but not necessarily in others. If you instead use the formal definition of intelligence, regarding the mere capacity to gain knowledge and develop skills, then you’d say you’re not actually smarter than others (as they could simply do as you have done) or at least not necessarily or only possibly smarter (again, there are differences in capacities between human beings; some will be naturally better at mathematics no matter how hard others practice). In this latter sense, I’m again compelled in my answer: I essentially have to say I’m not smarter than my former self or current believers who think as I once did.


Review: ‘A History of the American People’

At times I read books from the other side of the political spectrum, and conservative Paul Johnson’s A History of the American People (1998) was the latest.

This was mostly a decent book, and Johnson deserves credit for various inclusions: a look at how British democracy influenced American colonial democracy, the full influence of religion on early American society, Jefferson’s racism, U.S. persecution of socialists and Wobblies during World War I, how the Democratic Party was made up of southern conservatives and northern progressives for a long time, and more.

However, in addition to (and in alignment with) being a top-down, “Great Men,” traditionalist history, the work dodges the darkness of our national story in significant ways. That’s the only way, after all, you can say things like Americans are “sometimes wrong-headed but always generous” (a blatant contradiction — go ask the Japanese Americans held in the internment camps about that generosity) or “The creation of the United States of America is the greatest of all human adventures” (what a wonderful adventure black people had in this country). It’s the pitfall of conservative, patriotic histories — if you want the U.S. to be the greatest country ever, our horrors must necessarily be downplayed.

Thus, black Americans don’t get much coverage until the Civil War, whereas Native Americans aren’t really worth discussing before or after the Trail of Tears era. Shockingly, in this history the internment of the Japanese never occurred. It’s simply not mentioned! Johnson offers a rosy view of what the U.S. did in Vietnam, believing that we should have inflicted more vigorous violence on both Vietnam and Cuba. Poverty doesn’t get much attention. The Founding Fathers’ expressions of protecting their own wealth, class interests, and aristocratic power when designing our democracy naturally go unmentioned. Likewise, American attacks on other countries are always from a place of benevolence and good intentions, rather than, as they often were in actuality, for economic or business interests, to maintain global power, or to seize land and resources. To Johnson, the U.S. had “one” imperialist adventure, its war with Spain — this incredible statement was made not long after his outline of the U.S. invasion of Mexico to expand its borders to the Pacific.

Other events and people given short shrift include LGBTQ Americans, non-European immigrants, and the abolitionist movement — until the end of the book when the modern pro-life movement is compared to it in approving fashion. The labor and feminist movements aren’t worth mentioning for their crucial successes, or intersectional solidarity in some places, only for their racism in others. Johnson is rather sympathetic to Richard Nixon, and somehow describes his downfall with no mention of Nixon’s attempts, recorded on White House tapes, to obstruct the Watergate investigation — the discovery of which led to his resignation. If anything, the book is a valuable study on how bias, in serious history and journalism, usually manifests itself in the sin of omission rather than outright falsities, conscious or not (not that conservatives are the only ones who do this, of course; the Left, which can take the opposite approach and downplay positive happenings in American history, shouldn’t shy away from acknowledging, for instance, that the U.S. Constitution was a strong step forward for representative democracy, secular government, and personal rights, despite the obvious exclusivity, compared to Europe’s systems).

Things really start to go off the rails with this book in the 1960s and later, when America loses its way and becomes not-great (something slavery and women as second-class citizens could somehow never cause), with much whining about welfare, academia, political correctness, and the media (he truly should have read Manufacturing Consent before propagating the myth that the liberal media turned everyone against the war in Vietnam). Affirmative action receives special attention and passion, far more than slavery or Jim Crow, and Johnson proves particularly thick-skulled on other matters of race (Malcolm X is a “black racist,” slang and rap are super dangerous, no socio-economic and historical causes are mentioned that could illuminate highlighted racial discrepancies, and so on). He cringingly blames the 1960-1990 crime wave on a less religious society; one wonders what Johnson would make of the dramatic decrease in crime from the 1990s to today, occurring as the percentage of religious Americans continues to plunge — a good lesson on false causation.

All this may not sound at all like a “mostly decent” book, but I did enjoy reading most of it, and — despite the serious flaws outlined here, some unforgivable — most of the information in the space of 1,000 pages was accurate and interesting. It served as a good refresher on many of the major people and events in U.S. history, a look at the perspective of the other side, a prompt for thinking about bias (omission vs. inaccuracy, subconscious vs. conscious), and a reminder of who and what are left out of history — and why.


Woke Cancel Culture Through the Lens of Reason

What follows are a few thoughts on how to view wokeism and cancel culture with nuance:

Two Basic Principles (or, Too Much of a Good Thing)

There are two principles that first spring to mind when considering cancel culture. First, reason and ethics, to this writer, suggest that social consequences are a good thing. There are certain words and actions that one in a free society would certainly not wish to result in fines, community service, imprisonment, or execution by government, but are deserving of proportional and reasonable punishments by private actors, ordinary people. It is right that someone who uses a racial slur loses their job or show or social media account. A decent person and decent society wants there to be social consequences for immoral actions, because it discourages such actions and helps build a better world. One can believe in this while also supporting free speech rights and the First Amendment, which obviously have to do with how the government responds to what you say and do, not private persons and entities.

The second principle acknowledges that there will be many cases where social consequences are not proportional or reasonable, where things go too far and people, Right and Left, are crushed for rather minor offenses. It’s difficult to think of many social trends or ideological movements that did not go overboard in some fashion, after all. There are simply some circumstances where there was an overreaction to words and deeds, where mercy should have been the course rather than retribution. (Especially worthy of consideration: was the perpetrator young at the time of the crime, with an underdeveloped brain? Was the offense in the past, giving someone time to change and grow, to regret it?) Readers will disagree over which specific cases fall into this category, but surely most will agree with the general principle, simply that overreaction in fact occurs. I can’t be the only Leftist who both nods approvingly in some cases and in others thinks, “She didn’t deserve that” or “My, what a disproportionate response.” Stupid acts might deserve a different response than racist ones, dumb ideas a different tack than dangerous ones, and so on. It might be added that overreactions not only punish others improperly, but also encourage forced, insincere apologies — somewhat reminiscent of the adage that you shouldn’t make faith a requirement of holding office, as you’ll only end up with performative religiosity.

Acknowledging and pondering both these principles is important.

“Free Speech” Only Concerns Government-Citizen Interaction

Again, in most cases, the phrase “free speech” is basically irrelevant to the cancel culture conversation. It’s worth emphasizing. Businesses and individuals — social media companies, workplaces, show venues, a virtual friend who blocks you or deletes your comment — have every right to de-platform, cancel, censor, and fire. The whining about someone’s “free speech” being violated when they’re cancelled is sophomoric and ignorant — the First Amendment and free speech rights are about whether the government will punish you, not non-government actors.

Which makes sense, for an employer or individual could just as easily be said to have the “free speech right” to fire or cancel you — why is your “free speech right” mightier than theirs?

Public universities and government workplaces, a bit different, are discussed below.

Why Is the Left at Each Other’s Throats?

At times the national conversation is about the left-wing mob coming for conservatives, but we know it comes for its own with just as much enthusiasm. Maybe more, some special drive to purge bad ideas and practices from our own house. Few involved in left-wing advocacy of some kind haven’t found themselves in the circular firing squad, whether firing or getting blasted — most of us have probably experienced both. It’s a race to be the most woke, and can lead to a lot of nastiness.

What produces this? Largely pure motives, for if there’s a path that’s more tolerant, more just, that will build a better future, we want others to see and take it. It’s a deep desire to do what’s right and get others to do the same. (That the pursuit of certain kinds of tolerance [racial, gender, etc.] would lead to ideological intolerance has been called ironic or hypocritical, but seems, while it can go too far at times, more natural and inevitable — there’s no ending separate drinking fountains without crushing the segregationist’s ideology.)

But perhaps the inner turmoil also comes from troublesome assumptions of monolithic group thinking, plus a desperate desire for there to be one right answer when there isn’t one. Because we sometimes look at impacted groups as composed of members all thinking the same way, or enough thinking the same way, there is therefore one right answer and anyone who questions it should be trampled on. For example, you could use “person with autism” (person-first language) rather than “autistic person” (identity-first language) and fall under attack for not being woke enough. Identity-first language is more popular among the impacted group members, and the common practice with language among non-impacted persons is to defer to majority opinions. But majority opinions aren’t strictly “right” — to say this is of course to say the minority of the impacted group members are simply wrong. Who would have the arrogance and audacity to say this? It’s simply different opinions, diversity of thought. (Language and semantics are minefields on the Left, but also varying policy ideas.) There’s nothing wrong with deferring to majority opinion, but if we were not so focused on there being one right answer, if we didn’t view groups as single-minded or single-minded enough, we would be much more tolerant of people’s “mistakes” and less likely to stoop to nastiness. We’d respect and explore and perhaps even celebrate different views within our side of the political spectrum. It’s worth adding that we go just as crazy when the majority impacted group opinion is against an idea. It may be more woke, for example, to support police abolition or smaller police presences in black neighborhoods, but 81% of black Americans don’t want the police going anywhere, so the majority argument won’t always help a case. Instead of condemning someone who isn’t on board with such policies as not caring enough about racial justice, not being woke enough, being dead wrong, we should again remember there is great diversity of thought out there and many ideas, many possible right answers beyond our own, to consider and discuss with civility. One suspects that few individuals, if intellectually honest, would always support the most radical or woke policy posited (more likely, you’ll disagree with something), so more tolerance and humility are appropriate.

The same should be shown toward many in the middle and on the Right as well. Some deserve a thrashing. Others don’t.

The University Onus

One hardly envies the position college administrators find themselves in, pulled between the idea that a true place of learning should include diverse and dissenting opinions, the desire to punish and prevent hate speech or awful behaviors, the interest in responding to student demands, and the knowledge that the loudest, best organized demands are at times themselves minority opinions, not representative.

Private universities are like private businesses, in that there’s no real argument against them cancelling as they please.

But public universities, owned by the states, have a special responsibility to protect a wide range of opinion, from faculty, students, guest speakers, and more, as I’ve written elsewhere. As much as this writer loves seeing the power of student organizing and protest, and the capitulation to that power by decision-makers at the top, public colleges should in many cases stand firmer in defense of views or actions that are deemed offensive, in order to keep these spaces open to ideological diversity and not drive away students who could very much benefit from being in an environment with people of different classes, ethnicities, genders, sexual orientations, religions, and politics. Similar to the above, that is a sensible general principle. There will of course be circumstances where words and deeds should be crushed, cancellation swift and terrible. Where that line is, again, is a matter of disagreement. But the principle is simply that public colleges should save firings, censorship, cancellation, suspension, and expulsion for more extreme cases than is current practice. The same goes for other public entities and public workplaces. Such spaces are linked to the government, which actually does bring the First Amendment and other free speech rights into the conversation, and therefore there exists a special onus to allow broader ranges of views.

Cancel Culture Isn’t New — It’s Just the Left’s Turn

If you look at the surveys that have been conducted, two things become clear: 1) support for cancel culture is higher on the Left, but 2) it’s also a problem on the Right.

50% of staunch progressives “would support firing a business executive who personally donated to Donald Trump’s campaign,” vs. 36% of staunch conservatives who “would support firing Biden donors.” Republicans are much more worried about their beliefs costing them their jobs (though a quarter of Democrats worry, too), conservatives are drastically more afraid to share opinions (nearly 80%, vs. just over 40% for strong liberals), and only in the “strong liberal” camp does a majority (58%) feel free to speak its mind without offending others (liberals 48%, conservatives 23%). While almost 100% of the most conservative Americans see political correctness as a problem, 30% of the most progressive Americans agree, not an insignificant figure (overall, 80% of citizens agree). There’s some common ground here.

While the Left is clearly leading modern cancel culture, it’s important to note that conservatives often play by the same rules, despite rhetoric about how they are the true defenders of “free speech.” If Kaepernick kneels for the anthem, he should be fired. If a company (Nike, Gillette, Target, NASCAR, Keurig, MLB, Delta, etc.) gets political on the wrong side of the spectrum, boycott it and destroy your possessions, while Republican officials legislate punishment. If Republican Liz Cheney denounces Trump’s lies, remove her from her leadership post. Rage over and demand cancellation of Ellen, Beyoncé, Jane Fonda, Samantha Bee, Kathy Griffin, Michelle Wolf, and Bill Maher for using their free speech. Obviously, no one called for more firings for views he didn’t like than Trump. If the Dixie Chicks criticize the invasion of Iraq, wipe them from the airwaves, destroy their CDs. Thomas Hitchner recently put together an important piece on conservative censorship and cancellation during the post-9/11 orgy of patriotism, for those interested.

More importantly, when we place this phenomenon in the context of history, we come to suspect that rather than being something special to the Left (or naturally more powerful on the Left, because liberals hate free speech and so on), cancel culture seems to be, predictably, led by the strongest cultural and political ideology of the moment. When the U.S. was more conservative, it was the Right that was leading the charge to ensure people with dissenting views were fired, censored, and so on. The hammer, rather than wielded by the far Left, came down on it.

You could look to the socialists and radicals, like Eugene Debs, who were literally imprisoned for speaking out against World War I, but more recently the McCarthy era after World War II, when government workers, literary figures, media anchors, and Hollywood writers, actors, and filmmakers accused of socialist or communist sympathies were hunted down and fired, blacklisted, slandered, imprisoned for refusing to answer questions at the witch trials, and so forth, as discussed in A History of the American People by conservative Paul Johnson. The Red Scare was in many ways far worse than modern cancel culture — it wasn’t simply the mob that came for you, it was the mob and the government. However, lest anyone think this was just Republican Big Government run amok rather than a cultural craze working in concert, recall that it was the movie studios doing the actual firing and blacklisting, the universities letting faculty go, LOOK and other magazines reprinting Army “How to Spot a Communist” propaganda, ordinary people pushing and marching and rallying against communism, etc.

All this overlapped, as left-wing economic philosophies usually do, with the fight for racial justice. Kali Holloway writes for The Nation:

There was also [black socialist] Paul Robeson, who had his passport revoked by the US State Department for his political beliefs and was forced to spend more than a decade living abroad. Racism and red-scare hysteria also canceled the acting career of Canada Lee, who was blacklisted from movies and died broke in 1952 at the age of 45. The [anti-segregationist] song “Mississippi Goddam” got Nina Simone banned from the radio and much of the American South, and the Federal Bureau of Narcotics essentially hounded Billie Holiday to death for the sin of stubbornly refusing to stop performing the anti-lynching song “Strange Fruit.”

Connectedly, there was the Lavender Scare, a purge of gays and suspected gays from government and private workplaces. Some 5,000 to 10,000 people lost their jobs:

“It’s important to remember that the Cold War was perceived as a kind of moral crusade,” says [historian David K.] Johnson, whose 2004 book The Lavender Scare popularized the phrase and is widely regarded as the first major historical examination of the policy and its impact. The political and moral fears about alleged subversives became intertwined with a backlash against homosexuality, as gay and lesbian culture had grown in visibility in the post-war years. The Lavender Scare tied these notions together, conflating gay people with communists and alleging they could not be trusted with government secrets and labelling them as security risks, even though there was no evidence to prove this.

The 1950s was a difficult era for the Left and its civil rights advocates, class warriors, and gay liberators, with persecution and censorship the norm. More conservative times, a stronger conservative cancel culture. This did not end with that decade, of course (one of my own heroes, Howard Zinn, was fired from Spelman College in 1963 for his civil rights activism), but soon a long transition began. Paul Johnson mused:

The significant fact about McCarthyism, seen in retrospect, was that it was the last occasion, in the 20th century, when the hysterical pressure on the American people to conform came from the right of the political spectrum, and when the witchhunt was organized by conservative elements. Thereafter the hunters became the hunted.

While, as we saw, the Right are still often hunters as well, and therefore we see much hypocrisy today, there is some truth to this statement, as from the 1960s and ’70s the nation began slowly liberalizing. Individuals increasingly embraced liberalism, as did some institutions, like academia, the media, and Hollywood (others, such as the church, military, and law enforcement remain quite conservative). The U.S. is still growing increasingly liberal, more favoring New Deal policies, for example, even though more Americans still identify as conservative:

Since 1992, the percentage of Americans identifying as liberal has risen from 17% then to 26% today. This has been mostly offset by a shrinking percentage of moderates, from 43% to 35%. Meanwhile, from 1993 to 2016 the percentage conservative was consistently between 36% and 40%, before dipping to 35% in 2017 and holding at that level in 2018.

On top of this, the invention and growth of social media since the mid-2000s has dramatically changed the way public anger coalesces and is heard — and greatly increased its power.

So the Left has grown in strength at the same time as technology that can amplify and expand cancel culture, a convergence that is both fortunate and unfortunate — respectively, for those who deserve harsh social consequences and for those who do not.


The Great Debate Over Robert Owen’s Five Fundamental Facts

In the early 1830s, British social reformer Robert Owen, called the “Founder of Socialism”[1] by contemporaries, brought forth his “Five Fundamental Facts” on human nature and ignited in London and elsewhere a dramatic debate — in the literal sense of fiery public discussions, as well as in books, pamphlets, and other works. While the five facts are cited in the extant literature on Owen and his utopian movement, a full exploration of the controversy is lacking, which is unfortunate for a moment that left such an impression on witnesses and participants. Famous secularist and editor George Jacob Holyoake, at the end of his life in 1906, wrote, “Human nature in England was never so tried as it was during the first five years” after Owen’s writings, when these five facts “were discussed in every town in the kingdom. When a future generation has courage to look into this unprecedented code as one of the curiosities of propagandism, it will find many sensible and wholesome propositions, which nobody now disputes, and sentiments of toleration and practical objects of wise import.”[2]

The discourse continued into the 1840s, but its intensity lessened, and thus we will focus our attention on its decade of origin. This work will add a little-explored subject to the scholarship, and argue that the great debate transcended common ideological divisions, not simply pitting socialist against anti-socialist and freethinker against believer, but freethinker against freethinker and socialist against socialist as well. The debate was nuanced and complex, and makes for a fascinating study of intellectual history in Victorian Britain, an overlooked piece of the Western discourse on free will going back to the ancient Greek philosophers and of the nature-versus-nurture debate stirred up by John Locke and René Descartes in the 17th century.

The limited historiography of the “Five Fundamental Facts” recognizes their significance. J.F.C. Harrison of the University of Sussex wrote that Owen, in his “confidence in the discoverability of laws governing human action” (laws he thought as immutable as physical ones), in fact “provided the beginnings of behavioural science.”[3] Indeed, “in an unsophisticated form, and without the conceptual tools of later social psychology, Owen had hit upon the crucial role of character structure in the social process.”[4] Further, Nanette Whitbread wrote that the school Owen founded to put his five facts into action and change human nature, the New Lanark Infant School, could “be justly described as the first in the developmental tradition of primary education.”[5] However, the facts are normally mentioned only in passing — works on Owen and his movement that make no mention of them at all are not unusual — and for anything close to an exploration of the debate surrounding them one must turn to brief outlines in works like Robert Owen: A Biography by Frank Podmore, not an historian at all, but rather a parapsychologist and a founder of the Fabian Society.[6]

Robert Owen, to quote The Morning Post in 1836, was “alternately venerated as an apostle, ridiculed as a quack, looked up to and followed as the founder of a new philosophy, contemned as a visionary enthusiast, denounced as a revolutionary adventurer.”[7] He was born in Wales in 1771, and as a young man came to manage a large textile mill in Manchester and then buy one in New Lanark, Scotland. Influenced by the conditions of the working poor and the ideas of the Enlightenment, and as a prosperous man, he engaged in writing, advocacy, and philanthropy for better working conditions and early childhood education in Britain after the turn of the century. Adopting a philosophy of cooperative, communal economics, Owen purchased an American town, New Harmony in Indiana, in 1825 and ran a utopian experiment, inspiring many more across the U.S. and elsewhere, that was ultimately unsuccessful. He returned home in 1828, living in London and continuing to write and lecture for broad social change.

Soon Owen brought forth his Outline of the Rational System of Society, in circulation as early as 1832 — and by 1836 “too well known to make it requisite now to repeat,” as a Mr. Alger put it in the Owenite weekly New Moral World.[8] The Home Colonisation Society in London, an organization promoting the formation of utopian communities with “good, practical education” and “permanent beneficial employment” for all, without the “present competitive arrangements of society,” was just one of the work’s many publishers.[9] Owen, not one for modesty, declared it developed “the First Principles of the Science of Human Nature” and constituted “the only effectual Remedy for the Evils experienced by the Population of the world,” addressing human society’s “moral and physical Evils, by removing the Causes which produce them.”[10]

The text from the Home Colonisation Society began with Owen’s “Five Fundamental Facts,” the key to his rational system and therefore the prime target of later criticism.[11] They assert:

1st. That man is a compound being, whose character is formed of his constitution or organization at birth, and of the effects of external circumstances upon it from birth to death; such original organization and external influences continually acting and re-acting each upon the other.

2d. That man is compelled by his original constitution to receive his feelings and his convictions independently of his will.

3d. That his feelings, or his convictions, or both of them united, create the motive to action called the will, which stimulates him to act, and decides his actions.  

4th. That the organization of no two human beings is ever precisely similar at birth; nor can art subsequently form any two individuals, from infancy to maturity, to be precisely similar.

5th. That, nevertheless, the constitution of every infant, except in the case of organic disease, is capable of being formed into a very inferior, or a very superior, being, according to the qualities of the external circumstances allowed to influence that constitution from birth.[12]

As crucial as Owen’s five facts were to the subsequent arguments, he offered no defense of them in the short Society pamphlet, stating them, perhaps expectedly, as fact and immediately proceeding to build upon them, offering twenty points comprising “The Fundamental Laws of Human Nature.” Here again he explained that the character of an individual was malleable according to the environment and society in which he or she developed and existed — and how by building a superior society humanity could allow its members to flourish and maximize well-being. This was the materialism of the early socialists. That section was followed by “The Conditions Requisite for Human Happiness,” “The Principles and Practice of the Rational Religion,” “The Elements of the Science of Society,” and finally a constitution for a new civilization.

This paper will not explore Owen’s specific utopian designs in detail, but at a glance the rational society offered a government focused on human happiness, with free speech, equality for persons of all religions, education for all, gender equality, communal property, a mix of direct and representative democracy, the replacement of the family unit with the larger community structure, an end to punishments, and more. Overall, the needs of all would be provided for collectively, and work would be done collectively — the termination of “ignorance, poverty, individual competition…and national wars” was in reach.[13] Happier people were thought better people — by creating a socialist society, addressing human needs and happiness, “remodelling the character of man” was possible.[14] The five facts aimed to demonstrate this. While this pamphlet and others were brief, in The Book of the New Moral World, Owen devoted a chapter to justifying and explaining each of the five facts, and wrote of them in other publications as well. In that work he clarified, for instance, that it was an “erroneous supposition that the will is free,” an implication of the second and third facts.[15]

The reaction? As Holyoake wrote, in a front-page piece in The Oracle of Reason, “Political economists have run wild, immaculate bishops raved, and parsons have been convulsed at [Owen’s] communities and five facts.”[16] The facts, to many of the pious, smacked of the determinism rejected by their Christian sects. An anonymous letter on the front page of a later edition of the same publication laid out a view held by both Christians and freethinkers: “‘Man’s character is formed for him and not by him’ — therefore, all the religions of the world are false, is the sum and substance of the moral philosophy of R. Owen.”[17] With biological inheritances and environmental influences birthing one’s “feelings and convictions,” one’s “character,” free will was put into question. What moral culpability did human beings then have for their actions, and how could an individual truly be said to make a “choice” to believe or follow religious doctrine? Any religion that rested on free will would be contradictory to reality, and thus untrue. But, the anonymous writer noted, Calvinists and other determinists were safer — they believed in “supernatural” causes that formed one’s character, thus it would be disingenuous to say “all the religions of the world” were fiction, solely on the grounds that individuals did not have mastery over who they were.

The writer then offered further nuance and assistance to ideological opponents (he or she was clearly a freethinker, as shown not only by the journal being read and written to but also by lines such as: “But what care religionists for justice in this world or the next? If they cared anything about ‘justice,’ and knew what the word meant, they would have long ere this abandoned the doctrine of an eternal hell”).[18] It was pointed out that “original sin” was found in non-deterministic and deterministic Christian sects alike — a formation of character before birth. “How then can the ‘five facts’ refute all religions…?”[19] If human beings were, from the universal or at least near-universal Christian point of view, shaped by supernatural forces beyond their control after Adam and Eve’s storied betrayal, it was a non sequitur, in the anonymous author’s mind, to say the molding of character invalidated common religions. Here we see an introduction to the complex ways the British of the Victorian era approached the debate.

Yet others were not always so gracious. In 1836, The Monthly Review wrote that “No one doubts the sincerity of Mr. Owen” and his desire to “create a world of happiness,” but “no man who takes for his guides common observation, and common sense — much more, that no person who has studied and who confides in the doctrines of the Bible, can ever become a convert to his views.”[20] The five facts were “intangible” and “obscure,” the arguments “bold, unauthorised, unsupported, ridiculous,” the vision for society as a whole “fanciful, impractical, and irreligious.”[21] How was it, the periodical asked, that these views could be “demonstrably true” yet had “never found acceptance with the mass of sober intelligent thinkers,” only the “paltry, insignificant, uninfluential, and ridiculed class of people” that were the Owenites, and Owen himself, who was “incompetent”?[22] The writer (or writers) further resented how Owen centered himself as something of a savior figure. Ridding the world of evil could be “accomplished by one whose soul like a mirror was to receive and reflect the whole truth and light which concerned the happiness of the world — and I, Robert Owen, am that mirror” — and did not the New Testament already serve the purpose of outlining the path to a more moral and happier world?[23] Overall, it was a scathing attack, an example of the hardline Christian view.

The January 1838 volume of The Christian Teacher, published to “uphold the religion of the New Testament, in contradistinction to the religion of creeds and parties,” included a writing by H. Clarke of Chorley.[24] To him the facts were “inconsistent and fallacious”: facts one, two, and four contradicted the fifth.[25] The first, second, and fourth facts established that a “man’s self” at birth “has at least something to do with forming his character,” but then the fifth established that “by the influence of external circumstances alone, any being” could be transformed into a “superior being.”[26] To Clarke, the facts at first emphasized that one’s biological constitution played a sizable, seemingly co-equal, role in forming one’s character — then the fifth fact threw all that out the window. If anyone could be made into a superior being, just via environment, what sense did it make to say that biology had any effect whatsoever on an individual’s nature?

Owen did seem to view circumstances as the predominant power. Though he firmly believed there existed, as he wrote, a “decided and palpable difference between [infants] at birth” due to biology, he indeed believed in bold, universal results: “selfishness…will cease to exist” alongside “all motives to individual pride and vanity,” and as “all shall be trained, from infancy, to be rational,” a humanity of “superior beings physically, intellectually, and morally” could arise.[27] Clarke was not alone in this critique. J.R. Beard wrote something similar in The Religion of Jesus Christ Defended from the Assaults of Owenism, which further held the common blank slate view of human nature (“at birth there is no mental or moral development”), meaning environment was all that was left: “What is this but to make ‘external circumstances’ the sole creator of the lot of man?”[28]

Clarke further took issue with what he viewed as the contradictory or hypocritical language of the Owenites. “So I learn from the votaries of Owenism…man’s feelings and convictions are forced upon him irrespective of his will, it is [therefore] the extreme of folly to ask a man to believe this or that.”[29] The Christian believed in belief, but “Owenism denies that man can believe as he pleases…yet strange to tell, almost the first question asked by an Owenite is, ‘Do you believe Mr. Owen’s five fundamental facts?’”[30] Belief in the five facts, Clarke pointed out, was required to be a member of Owen’s association, which an “Appendix to the Laws and Regulations” of the association printed in The New Moral World in 1836 made clear.[31] If one’s convictions were formed against one’s will, what sense did it make to ask after or require beliefs? Clarke’s own beliefs, one should note, while against Owen’s views of human nature, were not necessarily hostile to socialism. He preferred “Christ to Mr. Owen, Christian Socialism to the five-fact-socialism.”[32]

There were some who saw a distinction between the value of Owen’s theories on human nature and that of his planned civilization. In 1836, The Morning Post found Owen, in his Book of the New Moral World, to be “radical” and “destructive,” wanting to dissolve civilization and remake it; the idea that humanity had for millennia been living in systems contrary to its own nature and happiness was “almost incredible.”[33] But the Post came from a more philosophical position and background than theological (“the Millenium [is] about as probable a consummation as the ‘Rational System’”).[34] Owen had therefore “displayed considerable acuteness and ability” regarding “metaphysical discussions,” making the book worth a read for ontologists and those who enjoyed a “‘keen encounter of the wit.’”[35]

As we saw with the anonymous writer in The Oracle of Reason, the five facts divided not only freethinkers and Christians, but also freethinkers as a group. There was too much intellectual diversity for consensus. For example, Charles Southwell, who was “rapidly becoming one of the most popular freethought lecturers in London,” debated Owen’s facts with well-known atheist Richard Carlile in Lambeth, a borough of south London.[36] The room “was crowded to suffocation, and hundreds retired unable to attain admittance. The discussion lasted two nights, and was conducted with talent and good feeling by both parties.”[37] Southwell defended the facts, while Carlile went on the offensive against them. 

The agnostic Lloyd Jones, journalist and friend of Owen, had much to say of Richard Carlile’s lectures on this topic.[38] In A Reply to Mr. R. Carlile’s Objections to the Five Fundamental Facts as Laid Down by Mr. Owen, Jones remarked that Carlile had called Owen’s Book of the New Moral World a “book of blunders” during his talk on November 27, 1837, but the audience “certainly could not avoid observing the multitudinous blunders committed by yourself, in endeavouring to prove it such.”[39] Carlile, according to Jones, insisted that individuals had much more power to steel themselves against circumstances and environments than Owen was letting on, throwing facts one and two into doubt. This is all rather one-sided, as Jones did not even bother to quote Carlile directly, but instead wrote, “You tell us we have a power to adopt or reject [convictions and feelings]: you have not given us your reasons for so saying; in fact, you did not condescend to reason upon any of the subjects broached during the evening’s discussion.”[40] Carlile should “try the question… Can you, by a voluntary action of your mind, believe that to be true which you now consider to be false; — or believe that to be false which you now consider true?… Certainly not.”[41] Jones also defended the idea that conviction and will were distinct, rather than one and the same as Carlile insisted.[42]

For the socialists, many of them of course Owenites anyway, there was much acceptance of the five facts. James Pate, for the Socialists of Padiham, wrote that an Owenite named Mr. Fleming came to their organization and, to a full house of about 300 people, “proved, in a plain yet forcible manner, the truth of the five fundamental facts; and…showed how little difficulty there would be in the practical application of Mr. Owen’s views to all classes of society.”[43] The audience was “so fully convinced” that few “dared venture to question any remarks.”[44] But here divergent thoughts existed too, as we saw with H. Clarke. The branches of religious socialism and secular socialism made for varying thoughts on human nature among the radicals, or simply those sympathetic to or not offended by the idea of socialism. Frederick Lees, for instance, secretary of the British Association for the Suppression of Intemperance, castigated the “infidelity” of Owenism and Owen’s five facts but had little to say of socialism, save that it was a front for the former: “In the fair name of Socialism, and in the mask of friendship, Judas like, she [untruth, especially as related to infidelity] seeks to ensnare and betray.”[45] Owen’s followers, while they professed to desire the “establishment of a ‘SOCIAL COMMUNITY,’ their chief and greatest object is the ascendancy of an ‘INFIDEL CREED.’”[46] Lees, striking a sympathetic note once more, added that Owenites should “dissolve the forced and arbitrary union between their absurd and infidel metaphysics, and the practical or working part of Socialism, which association of the two excites the rightful opposition of all lovers of christian truth…”[47]

For a forceful defense of religious socialism, take T.H. Hudson’s lengthy work Christian Socialism, Explained and Enforced, and Compared with Infidel Fellowship: Especially, as Propounded by Robert Owen, Esq., and His Disciples. It was up to “the Christian Religion to secure true socialism,” whereas Owen’s views were “more likely to serve the purposes of the Prince of darkness.”[48] Hudson spent one chapter, about forty pages, attacking the five facts, followed by three chapters, over 120 pages, advocating for Christian Socialism. The five facts were “based on the false assumptions, that man is good by nature” and were “decidedly irreligious.”[49] Hudson lambasted the “disguised atheism” of the first fact: it did not mention God as man’s creator, nor his spirit or soul, and left him helpless before nature, without free will.[50] The “infidel Socialist,” in believing facts two and three, deepened trust in fatalism and the irresponsibility of individuals, but also fell for a “gross contradiction.”[51] Hudson pointed out that the second fact established feelings and convictions were received independently of one’s will, yet the third fact stated the will was made up of, created by, one’s feelings and convictions.[52] The will and one’s feelings and convictions were thus presented first as distinct phenomena, then as a single one. J.R. Beard echoed this: it would have been better to say feelings and convictions were received “anteriorly ‘to his will’; for it is obviously his notion that man’s will is not independent, but the result, the creation of his feelings and convictions.”[53]

Like the atheist Carlile, Hudson thought one could put up “resistance” to external influences, could decide whether to “receive” or reject feelings and convictions — an exercise in willpower, which was thus independent of and prior to feelings and convictions; a person was not a “slave to circumstances.”[54] This was a refrain of Owen’s critics, who at times added that under Owen’s theory personal change should be impossible, and that man, as a mere creature of circumstances, should be powerless to change those circumstances in the first place. For instance, Minister John Eustace Giles, in Socialism, as a Religious Theory, Irrational and Absurd (1839), based on his lectures in Leeds, wondered how Owen could believe that “‘man is the creature of circumstances’” yet “professes to have become wise” — did that not show Owen had “resisted” circumstances?[55] Did not this, plus Owen’s desire to “change the condition of the world…thus shew that while man is the creature of circumstances, circumstances are the creatures of man”?[56] After focusing on semantics and perceived ambiguities in the fourth fact, though not closed to the possibility it was a simple truism, Hudson accepted the fifth fact’s claim that individuals could be improved, but was insulted that Christianity, which remedied man’s “being alienated from God” and addressed humanity’s “depraved nature,” was not thought necessary to this improvement alongside changing environments.[57] Indeed, most egregious was the Owenite belief that people were fundamentally good.[58]

Whether due to varying personal beliefs or simply varying cautions about driving away potential converts in a pious age, the actual presentation of the fundamental facts as irreligious was not consistent. Lloyd Jones, in an 1839 debate with Mr. Troup, editor of The Montrose Review, over whether socialism was atheistic, asked some variant of “Where is the Atheism here?” after reading each of the five facts.[59] By contrast, Owen, also an unbeliever, in an 1837 debate with Rev. J.H. Roebuck of Manchester, called religions “geographical insanities” that could be wiped away by the five facts.[60] “Mr. Roebuck stated…that the two systems for which we contend are opposed to each other, and that both, therefore, cannot be true. Herein we perfectly agree.”[61] The national discourse so intertwined the facts and the question of God that a person, on either side of the debate, could not help but assume that one would accompany the other. When a debate on “the mystery of God” was proposed to Owenite J. Smith in January 1837, “the challenge was [mis]understood by myself and all our friends, to be the discussion of the five fundamental facts.”[62]

Overall, perhaps Robert Owen’s facts flustered the religious and irreligious, and socialists and anti-socialists alike, because they were simply so counterintuitive — not to mention theoretical, without contemporary science to back them up. Owen wrote, in The Book of the New Moral World, for instance: “Man is not, therefore, to be made a being of a superior order by teaching him that he is responsible for his will and his actions.”[63] Such blunt statements turned on its head what many, across ideologies, judged common sense. Owen’s ideas were “contrary to common sense” for Hudson, Christian socialist, in the same way they were “opposed to the common sense of mankind” for Giles, anti-socialist.[64] Would not teaching individual moral responsibility enable personal change and create a better society? Not so for Owen. The will was formed by circumstances — thus true personal change came about by purposefully changing environments. Create a better society first, and the positive personal change would follow. These were, according to Owen, “the laws of nature respecting man, individually, and the science of society,” and few posited laws of nature, proven or otherwise, do not provoke intense philosophical debate.[65]

For more from the author, subscribe and follow or read his books.


[1] J. Eustace Giles, Socialism, as a Religious Theory, Irrational and Absurd: the First of Three Lectures on Socialism (as Propounded by Robert Owen and Others) Delivered in the Baptist Chapel South-Parade, Leeds, September 23, 1838 (London: Simpkin, Marshall, & Co., Ward & Co., G. Wightman, 1838), 4, retrieved from https://babel.hathitrust.org/cgi/pt?id=uiuo.ark:/13960/t63560551&view=1up&seq=10&q1=founder.

[2] George Jacob Holyoake, The History of Co-operation (New York: E.P. Dutton & Company, 1906), 1:147.

[3] J.F.C. Harrison, Robert Owen and the Owenites in Britain and America (Abingdon: Routledge, 2009), 66.

[4] Ibid.

[5] Nanette Whitbread, The Evolution of the Nursery-infant School: A History of Infant and Nursery Education in Britain, 1800-1970 (Abingdon: Routledge, 2007), 39:9-10.

[6] Frank Podmore, Robert Owen: A Biography (London: Hutchinson & Co., 1906), 481-482, 499-502.

[7] The Morning Post, September 14, 1836, cited in “The Book of the New Moral World,” The New Moral World (Manchester: Abel Heywood, 1836-7), 3:6, retrieved from https://babel.hathitrust.org/cgi/pt?id=uc1.31970026956075&view=1up&seq=18&size=125&q1=%22five%20fundamental%20facts%22.

[8] The Westminster Review (London: Robert Heward, 1832), 26:317, retrieved from https://babel.hathitrust.org/cgi/pt?id=nyp.33433096159896&view=1up&seq=329&q1=%22five%20fundamental%20facts%22; The New Moral World (London: Thomas Stagg, 1836), 2:62, retrieved from https://babel.hathitrust.org/cgi/pt?id=uc1.31970026956117&view=1up&seq=74&q1=%22five%20fundamental%20facts%22.

[9] Robert Owen, Outline of the Rational System of Society (London: Home Colonization Society, 1841), 2, retrieved from https://babel.hathitrust.org/cgi/pt?id=hvd.hnsp9t&view=1up&seq=6.

[10] Ibid, 1.

[11] This was explicitly stated by critics. Dismantle the five facts and the rest of the system goes down with it. See T.H. Hudson, Christian Socialism, Explained and Enforced, and Compared with Infidel Fellowship, Especially, As Propounded by Robert Owen, Esq., and His Disciples (London: Hamilton, Adams, and Co., 1839), 52, retrieved from https://babel.hathitrust.org/cgi/pt?id=nyp.33433075925721&view=1up&seq=62&q1=%22fundamental%20facts%22.

[12] Owen, Outline, 3.

[13] Ibid, 14.

[14] Ibid.

[15] Robert Owen, The Book of the New Moral World (London: Richard Taylor, 1836), 17, retrieved from https://babel.hathitrust.org/cgi/pt?id=mdp.39015003883991&view=1up&seq=47&q1=%22five%20fundamental%20facts%22.

[16] The Oracle of Reason (London: Thomas Paterson, 1842), 1:113, retrieved from https://archive.org/details/oracleofreasonor01lond/page/112/mode/2up?q=five+facts.

[17] Ibid, 161.

[18] Ibid.

[19] Ibid.

[20] The Monthly Review (London: G. Henderson, 1836), 3:62, retrieved from https://babel.hathitrust.org/cgi/pt?id=umn.319510028065374&view=1up&seq=80&q1=%22five%20fundamental%20facts%22.

[21] Ibid, 62, 67-68.

[22] Ibid, 63.

[23] Ibid, 62-63.

[24] The Christian Teacher and Chronicle of Beneficence (London: Charles Fox, 1838), 4:219, retrieved from https://babel.hathitrust.org/cgi/pt?id=hvd.ah6jrz&view=1up&seq=255&q1=%22five%20facts%22.

[25] Ibid.

[26] Ibid, 220.

[27] Owen, Book, 22-24.

[28] J.R. Beard, The Religion of Jesus Christ Defended from the Assaults of Owenism (London: Simpkin, Marshall and Company, 1839), 233, retrieved from https://babel.hathitrust.org/cgi/pt?id=hvd.hnmy5r&view=1up&seq=243&q1=%22second%20fact%22.

[29] Christian Teacher, 220.

[30] Ibid.

[31] Ibid, 220; New Moral World, 2:261.

[32] Christian Teacher, 220.

[33] New Moral World, 3:6.

[34] Ibid.

[35] Ibid.

[36] Edward Royle, Victorian Infidels: The Origins of the British Secularist Movement, 1791-1866 (Manchester: Manchester University Press, 1974), 69.

[37] The New Moral World (Leeds: Joshua Hobson, 1839), 6:957, retrieved from https://babel.hathitrust.org/cgi/pt?id=uc1.31970026956133&view=1up&seq=361&size=125&q1=%22five%20fundamental%20facts%22.

[38] Regarding Jones’ agnosticism, see: Report of the Discussion betwixt Mr Troup, Editor of the Montrose Review, on the part of the Philalethean Society, and Mr Lloyd Jones, of Glasgow, on the part of the Socialists, in the Watt Institution Hall, Dundee on the propositions, I That Socialism is Atheistical; and II That Atheism is Incredible and Absurd (Dundee: James Chalmers & Alexander Reid, 1839), retrieved from shorturl.at/pvxM1.

[39] Lloyd Jones, A Reply to Mr. Carlile’s Objections to the Five Fundamental Facts as Laid Down by Mr. Owen (Manchester: A. Heywood, 1837), 4, retrieved from https://babel.hathitrust.org/cgi/pt?id=wu.89097121669&view=1up&seq=12&q1=%22five%20fundamental%20facts%22.

[40] Ibid, 9.

[41] Ibid.

[42] Ibid, 10-11.

[43] New Moral World, 3:380.

[44] Ibid.

[45] Frederick R. Lees, Owenism Dissected: A Calm Examination of the Fundamental Principles of Robert Owen’s Misnamed “Rational System” (Leeds: W.H. Walker, 1838), 7, retrieved from https://babel.hathitrust.org/cgi/pt?id=uiug.30112054157646&view=1up&seq=7&q1=%22socialism%22.

[46] Ibid, 16.

[47] Ibid.

[48] Hudson, Christian Socialism, 4, 13.

[49] Ibid, 50-51.

[50] Ibid, 53-63.

[51] Ibid, 63-64, 66.

[52] Ibid, 66.

[53] Beard, Religion, 234.

[54] Hudson, Christian Socialism, 65-66.

[55] Giles, Socialism, 7.

[56] Ibid.

[57] Hudson, Christian Socialism, 72-81, 87-88.

[58] Ibid, 89.

[59] Report of the Discussion, 12.

[60] Public Discussion, between Robert Owen, Late of New Lanark, and the Rev. J.H. Roebuck, of Manchester (Manchester: A. Heywood, 1837), 106-107, retrieved from https://babel.hathitrust.org/cgi/pt?id=uc1.c080961126&view=1up&seq=111&q1=%22fundamental%20facts%22.

[61] Ibid, 107.

[62] New Moral World, 3:122.

[63] Owen, Book, 20.

[64] Hudson, Christian Socialism, 65; Giles, Socialism, 36.

[65] Owen, Book, 20.

On the Spring-Stone Debate

While finding a decisive victor in debates on semantics and historical interpretation often proves difficult, in the lively clash between historians David Spring and Lawrence Stone on social mobility into Britain’s landed elite, the former presented the stronger case. The discourse, of the mid-1980s, centered on the questions of how to define “open” when considering how open the upper echelon was to newcomers from 1540-1880 and, most importantly, how open it was to newcomers who came from the business world. On both counts, Spring offered a more compelling perspective on how one should regard the historical evidence and data Stone collected in his work An Open Elite?, namely that it was reasonable to call the landed elite open to members of lower strata, including business leaders.

The debate quickly blurred the lines between the two questions. In his review of An Open Elite?, Spring noted that Stone showed a growth in elite families from 1540-1879, beginning with forty and seeing 480 join them, though not all permanently. Further, “Stone shows that regularly one-fifth of elite families were newcomers.”[1] In his reply, Stone declined to explore the “openness” of a twenty percent entry rate because it was, allegedly, irrelevant to his purpose: he was only interested in the entry of businessmen like merchants, speculators, financiers, and manufacturers, who did not come from the gentry, the relatively well-off stratum knocking at the gate of the landed elite. Spring “failed to distinguish between openness to new men, almost all from genteel families, who made a fortune in the law, the army, the administration or politics…and openness to access by successful men of business, mostly of low social origins.”[2]

True, Stone made clear who and what he was looking at in An Open Elite?: the “self-made men,” the “upward mobility by successful men of business,” and so on, but leaned into, rather than brushed aside or contradicted, the idea of general social immobility.[3] For instance, observe the positioning of: “When analysed with care…the actual volume of social mobility has turned out to be far less than might have been expected. Moreover, those who did move up were rarely successful men of business.”[4] The notion of the landed elite being closed off in general was presented, followed by the specific concern about businessmen. Stone went beyond business many times (for instance: “the degree of mere gentry penetration up into the elite was far smaller than the earlier calculations would indicate”[5]), positing that not only was the landed elite closed to businessmen but also universally, making his protestations against Spring rather disingenuous. Stone insisted to Spring that an open elite specifically meant, to historians and economists, a ruling class open to businessmen, not to all, but Stone himself opened the door to the question of whether the landed elite was accessible to everyone by answering nay in his book. Therefore, the question was admissible, or fair game, in the debate, and Spring was there to provide a more convincing answer. A group comprised of twenty percent newcomers from below, to most reasonable persons, could be described as relatively open. Even more so with the sons of newcomers added in: the landed elite was typically one-third newcomers and sons of newcomers, as Spring pointed out. Though it should be noted both scholars highlighted the challenge of using quantitative data to answer such historical questions. The collection and publication of such numbers is highly important, but it hardly ends the discussion — the question of openness persists, and any answer is inherently subjective.

However, it was the second point of contention where Spring proved most perceptive. He pointed out that while the gentry constituted 181 entrants into the landed elite during the observed centuries, those involved in business were not far behind, with 157, according to Stone’s data. This dwarfed the seventy-two from politics and seventy from the law. As Spring wrote, Stone’s quantitative tables conflicted with his text. Stone wrote in An Open Elite? that “most of the newcomers were rising parish gentry or office-holders or lawyers, men from backgrounds not too dissimilar to those of the existing county elite. Only a small handful of very rich merchants succeeded in buying their way into the elite…”[6] Clearly, even with different backgrounds, businessmen were in fact more successful at entering the landed elite than politicians and lawyers in the three counties Stone studied. What followed a few lines down in the book from Stone’s selected words made far more sense when considering the data: businessmen comprised “only a third of all purchasers…”[7] The use of “only” was perhaps rather biased, but, more significantly, one-third aligned not with the idea of a “small handful,” but of 157 new entrants — a third business entrants, a bit more than a third gentry, and a bit less than a third lawyers, politicians, and so on. Spring could have stressed the absurdity, in this context, of the phrase “only a third,” but was sure to highlight the statistic in his rejoinder, where he drove home the basic facts of Stone’s findings and reiterated that the landed elite was about as open to businessmen as others. Here is where quantitative data truly shines in history, for you can compare numbers against each other. The question of whether a single given number or percentage is big or small is messy and subjective, but whether one number is larger than another is not, and provides clarity regarding issues like whether businessmen had some special difficulty accessing Britain’s landed elite.
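To see how little work the word “only” does here, the shares are easy to recompute from the four entrant counts just cited, which together equal the 480 newcomers mentioned earlier. What follows is a minimal sketch in Python: the counts are Stone’s figures as quoted above, while the percentage framing is my own arithmetic, not Stone’s or Spring’s.

# Entrants into the landed elite, 1540-1879, as cited above from Stone's data
entrants = {"gentry": 181, "business": 157, "politics": 72, "law": 70}
total = sum(entrants.values())  # 480, matching the total number of newcomers noted earlier

for group, count in entrants.items():
    print(f"{group:>8}: {count:3d} entrants, {count / total:.0%} of the total")
# business: 157 entrants, 33% of the total -- roughly "a third of all purchasers," not a "small handful"

A third of all entrants puts business in the same league as the gentry’s share and well ahead of law or politics alone, which is precisely Spring’s point.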

Stone failed to respond directly to this point, a key moment that weakened his case, but instead sidetracked into issues concerning permanence of newcomers and by-county versus global perspectives on the data, areas he explored earlier in his response, now awkwardly grafted on to Spring’s latest argument. Yet the reader is largely left to pick up on what is being implied, based on Stone’s earlier comments on said issues. He noted that only twenty-five businessmen of the 157 came from the two counties distant from London, seemingly implying that Hertfordshire, the London-area county, had tipped the scales. Merchants and others were not as likely to rise into the landed elite in more rural areas. What relevance that had is an open question — it seemed more a truism than an argument against Spring’s point, as London was a center for business, and thus that result was perhaps expected. Regardless, he did not elaborate. The adjacent implication was that Spring was again seeing “everything from a global point of view which has no meaning in reality, and nothing from the point of view of the individual counties.”[8] In the debate, Stone often cautioned that it made sense to look at counties individually, as they could be radically distinct — one should not simply look at the aggregated data. But Stone’s inherent problem, in his attempt at a rebuttal, was that he was using the global figures to make his overall case. He took three counties and lifted them up to represent a relatively closed elite in Britain as a whole. It would not do to now brush aside one county or focus heavily on another to bolster an argument. Spring, in a footnote, wrote something similar, urging Stone to avoid “making generalizations on the basis of one county. [Your] three counties were chosen as together a sample of the nation.”[9] To imply, as Stone did, that London could be ignored as some kind of anomaly contradicted his entire project.

Stone’s dodge into the permanence of entrants was likewise not a serious response to Spring’s observation that business-oriented newcomers nearly rivaled those from the gentry and far outpaced lawyers and politicians. He wrote that “of the 132 business purchasers in Hertfordshire, only 68 settled in for more than a generation…”[10] The transient nature of newcomers arose elsewhere in the debate as well. Here Stone moved the goalposts slightly: instead of mere entrants into the landed elite, look at who managed to remain. Only “4% out of 2246 owners” in the three counties over these 340 years were permanent newcomers from the business world.[11] It was implied these numbers were both insignificant and peculiar to businesspersons. Yet footnote five, the one associated with the statistic, undercut Stone’s point. Here he admitted Spring correctly observed that politicians and officeholders were forced to sell their county seats, their magnificent mansions, and abandon the landed elite, as defined by Stone, at nearly the same rate as businessmen, at least in Hertfordshire. Indeed, it was odd Stone crafted this response, given Spring’s earlier dismantling of the issue. The significance of Stone’s rebuttal was therefore unclear. If only sixty-eight businessmen lasted more than a generation, how did that compare to lawyers, office-holders, and the gentry? Likewise, if permanent newcomers from the business world made up just four percent of all owners, what percentages did other groups earn? Again, Stone did not elaborate. But from his admission and what Spring calculated, it seems unlikely Stone’s numbers, when put in context, would help his case. Even more than the aggregate versus county comment, this was a non-answer.

The debate would conclude with a non-answer as well. There was of course more to the discussion — it should be noted Stone put up an impressive defense of the selection of his counties and the inability to include more, in response to Spring questioning how representative they truly were — but Spring clearly showed, using Stone’s own evidence, that the landed elite was what a reasonable person could call open to outsiders in general and businessmen in particular, contradicting Stone’s positions on both in An Open Elite? Stone may have recognized this, given the paucity of counterpoints in his “Non-Rebuttal.” Spring would, in Stone’s view, “fail altogether to deal in specific details with the arguments used in my Reply,” and therefore “there is nothing to rebut.”[12] While it is true that Spring, in his rejoinder, did not address all of Stone’s points, he did focus tightly on the main ideas discussed in the debate and this paper. So, as further evidence that Spring constructed the better case, Stone declined to return to Spring’s specific and central arguments about his own data. He pointed instead to other research that more generally supported the idea of a closed elite. Stone may have issued a “non-rebuttal” not because Spring had ignored various points, but rather because he had stuck to the main ones, and there was little to be said in response.

For more from the author, subscribe and follow or read his books.


[1] Eileen Spring and David Spring, “The English Landed Elite, 1540-1879: A Review,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 2 (Summer 1985): 152.

[2] Lawrence Stone, “Spring Back,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 2 (Summer 1985): 168.

[3] Lawrence Stone, An Open Elite? England 1540-1880, abridged edition (Oxford: Oxford University Press, 1986), 3-4.

[4] Ibid, 283.

[5] Ibid, 130.

[6] Ibid, 283.

[7] Ibid.

[8] Stone, “Spring Back,” 169.

[9] Spring, “A Review,” 154.

[10] Stone, “Spring Back,” 171.

[11] Ibid.

[12] Lawrence Stone, “A Non-Rebuttal,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 3 (Autumn 1985): 396. For Spring’s rejoinder, see Eileen Spring and David Spring, “The English Landed Elite, 1540-1879: A Rejoinder,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 3 (Autumn 1985): 393-396.

Did Evolution Make it Difficult for Humans to Understand Evolution?

It’s well known that people are dreadful at comprehending and visualizing large numbers, such as a million or billion. This is understandable in terms of our development as a species, as grasping the tiny numbers of, say, your clan compared to a rival one you’re about to be in conflict with, or understanding amounts of resources like food and game in particular places, would aid survival (pace George Dvorsky). But there was little evolutionary reason to adeptly process a million of something, intuitively knowing the difference between a million and a billion as easily as we do four versus six. A two-second difference, for instance, we get — but few intuitively sense a million seconds is about 11 days and a billion seconds 31 years (making for widespread shock on social media).
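To make the gap concrete, here is a minimal sketch of that arithmetic in Python, assuming nothing beyond the standard length of a day and an average (leap-adjusted) year:

# How long are a million and a billion seconds, really?
SECONDS_PER_DAY = 60 * 60 * 24               # 86,400 seconds
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25  # average year, leap days included

million, billion = 10**6, 10**9
print(f"a million seconds is about {million / SECONDS_PER_DAY:.1f} days")    # ~11.6 days
print(f"a billion seconds is about {billion / SECONDS_PER_YEAR:.1f} years")  # ~31.7 years

The two quantities differ by a factor of a thousand, yet our gut files both under “a very long time.”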

As anthropologist Caleb Everett, who pointed out a word for “million” did not even appear until the 14th century, put it, “It makes sense that we as a species would evolve capacities that are naturally good at discriminating small quantities and naturally poor at discriminating large quantities.”

Evolution, therefore, made it difficult to understand evolution, which deals with slight changes to species over vast periods of time, resulting in dramatic differences (see Yes, Evolution Has Been Proven). It took 16 million years for Canthumeryx, with a look and size similar to a deer, to evolve into, among other new species, the 18-foot-tall giraffe. It took 250 million years for the first land creatures to finally have descendants that could fly. It stands to reason that such statements seem incredible to many people not only because they conflict with old religious tales (which many support though the evidence does not) but also because it’s hard to grasp how much time that actually constitutes. Perhaps it would be easier to comprehend and visualize how small genetic changes between parent creatures and offspring could add up, eventually resulting in descendants that look nothing like ancient ancestors, if we could better comprehend and visualize the timeframes, the big numbers, in which evolution operates. 16 million years is a long time — long enough.

This is hardly the first time it’s been suggested that its massive timescales make evolution tough to envision and accept, but it’s interesting to think about how this fact connects to our own evolutionary history and survival needs.

Just one of those wonderful oddities of life.

For more from the author, subscribe and follow or read his books.

Suicide is (Often?) Immoral

Suicide as an immoral act is typically a viewpoint of the religious — it’s a sin against God, “thou shalt not kill,” and so on. For those free of religion, and of course some who aren’t, ethics are commonly based on what does harm to others, not yourself or deities — under this framework, the conclusion that suicide is immoral in many circumstances is difficult to avoid.

A sensible ethical philosophy considers physical harm and psychological harm. These harms can be actual (known consequences) or potential (possible or unknown consequences). The actual harm of, say, shooting a stranger in the heart is that person’s suffering and death. The potential harm on top of that is wide-ranging: if the stranger had kids it could be their emotional agony, for instance. The shooter simply would not know. Most suicides will entail these sorts of things.

First, most suicides will bring massive psychological harm, lasting many years, to family and friends. Were I to commit suicide, this would be a known consequence, known to me beforehand. Given my personal ethics, aligning with those described above, the act would then necessarily be unethical, would it not? This seems to hold true, in my view, even given my lifelong depression (I am no stranger to visualizations of self-termination and its aftermath, though fortunately with more morbid curiosity than seriousness to date; medication is highly useful and recommended). One can suffer and, by finding relief in nonexistence, cause suffering. As a saying goes, “Suicide doesn’t end the pain, it simply passes it to someone else.” Perhaps the more intense my mental suffering, the less unethical the act (more on this in a moment), but given that the act will cause serious pain to others whether my suffering be mild or extreme, it appears from the outset to be immoral to some degree.

Second, there are the potential harms, always trickier. There are many unknowns that could result from taking my own life. The potential harms could be more extreme psychological harms, a family member driven to severe depression or madness or alcoholism. (In reality, psychological harms are physical harms — consciousness is a byproduct of brain matter — and vice versa, so stress on one affects the other.) But they could be physical as well. Suicide, we know, is contagious. Taking my own life could inspire others to do the same. Not only could I be responsible for contributing, even indirectly, to the death of another person, I would also have a hand in all the actual and potential harms that result from his or her death! It’s a growing moral burden.

Of course, all ethics are situational. This is accepted by just about everyone — it’s why killing in self-defense seems less wrong than killing in cold blood, or why completely accidental killings seem less unethical than purposeful ones. These things can even seem ethically neutral. So there will always be circumstances that change the moral calculus. One questions if old age alone is enough (one of your parents or grandparents taking their own lives would surely be about as traumatic as anyone else’s doing so), but intense suffering from age or disease could make the act less unethical, in the same way deeper and deeper levels of depression may do the same. Again, less unethical is used here. Can the act reach an ethically neutral place? The key may simply be the perceptions and emotions of others. Perhaps with worsening disease, decay, or depression, a person’s suicide would be less painful to friends and family. It would be hard to lose someone in that way, but, as we often hear when someone passes away of natural but terrible causes, “She’s not suffering anymore.” Perhaps at some point the scale is tipped, with too much agony for the individual weighing down one side and too much understanding from friends and family lifting up the other. One is certainly able to visualize this — no one wants their loved ones to suffer, and the end of their suffering can be a relief as well as a sorrow, constituting a reduction in actual harm — and this is no doubt reality in various cases. This writing simply posits that not all suicides will fall into that category (many are unexpected), and, while a distinguishing line may be frequently impossible to see or determine, the suicides outside it are morally questionable due to the ensuing harm.

If all this is nonsense, and such sympathetic understanding of intense suffering brings no lesser amount of harm to loved ones, then we’re in trouble, for how else can the act break free from that immoral place, for those operating under the moral framework that causing harm is wrong?

It should also be noted that the rare individuals without any real friends or family seem to have less moral culpability here. And perhaps admitted plans and assisted suicide diminish the immorality of the act, regardless of the extent of your suffering — if you tell your loved ones in advance you are leaving, if they are there by your side in the hospital to say goodbye, isn’t that less traumatizing and painful than a sudden, unexpected event, with your body found cold in your apartment? In these cases, however, the potential harms, while some may be diminished in likelihood alongside the actual, still abound. A news report on your case could still inspire someone else to commit suicide. One simply cannot predict the future, all the effects of your cause.

As a final thought, it’s difficult not to see some contradiction in believing in suicide prevention, encouraging those you know or those you don’t not to end their lives, and believing suicide to be ethically neutral or permissible. If it’s ethically neutral, why bother? If you don’t want someone to commit suicide, it’s because you believe they have value, whether inherent or simply to others (whether one can have inherent value without a deity is for another day). And destroying that value, bringing all that pain to others or eliminating all of the individual’s potential positive experiences and interactions, is considered wrong, undesirable. Immorality and prevention go hand-in-hand. But with folks who are suffering we let go of prevention, even advocating for assisted suicide, because only in those cases do we begin to consider suicide ethically neutral or permissible.

In sum, one finds oneself believing that if causing harm to others is wrong, and suicide causes harm to others, suicide must in some general sense be wrong — but acknowledging that there must be specific cases and circumstances where suicide is less wrong, approaching ethical neutrality, or even breaking into it.

For more from the author, subscribe and follow or read his books.

Expanding the Supreme Court is a Terrible Idea

Expanding the Supreme Court would be disastrous. We hardly want an arms race in which the party that controls Congress and the White House expands the Court to achieve a majority. It may feel good when the Democrats do it, but it won’t when it’s the Republicans’ turn. 

The problem with the Court is that the system of unwritten rules, of the “gentlemen’s agreement,” is completely breaking down. There have been expansions and nomination fights or shenanigans before in U.S. history, but generally when a justice died or retired a Senate controlled by Party A would grudgingly approve a new justice nominated by a president of Party B — because eventually the situation would be reversed, and you wanted and expected the other party to show you the same courtesy. It was reciprocal altruism. It all seemed fair enough, because apart from a strategic retirement, it was random luck — who knew when a justice would die? 

The age of unwritten rules is over. The political climate is far too polarized and hostile to allow functionality under such a system. When Antonin Scalia died, Obama should have been able to install Merrick Garland on the Court — Mitch McConnell and the GOP Senate infamously wouldn’t even hold a vote, much less vote Garland down, for nearly 300 days. They simply delayed until a new Republican president could install Neil Gorsuch. Democrats attempted to block this appointment, as well as Kavanaugh (replacing the retiring Kennedy) and Barrett (replacing the late Ginsburg). The Democrats criticized the Barrett case for occurring too close to an election, mere weeks away, the same line the GOP had used with Garland, and conservatives no doubt saw the investigation into Kavanaugh as an obstructionist hit job akin to the Garland case. But it was entirely fair for Trump to replace Kennedy and Ginsburg, as it was fair for Obama to replace Scalia. That’s how it’s supposed to work. But that’s history — and now, with Democrats moving forward on expansion, things are deteriorating further.

This has been a change building over a couple decades. Gorsuch, Kavanaugh, and Barrett received just four Democratic votes. The justices Obama was able to install, Kagan and Sotomayor, received 14 Republican votes. George W. Bush’s Alito and Roberts received 26 Democratic votes. Clinton’s Breyer and Ginsburg received 74 Republican votes. George H.W. Bush’s nominees, Souter and Thomas, won over 57 Democrats. When Ronald Reagan nominated Kennedy, more Democrats voted yes than Republicans, 51-46! Reagan’s nominees (Kennedy, Scalia, Rehnquist, O’Connor) won 159 Democratic votes, versus 199 Republican. Times have certainly changed. Partisanship has poisoned the well, and obstruction and expansion are the result.

Some people defend the new normal, correctly noting the Constitution simply allows the president to nominate and the Senate to confirm or deny. Those are the written rules, so that’s all that matters. And that’s the problem, the systemic flaw. It’s why you can obstruct and expand and break everything, make it all inoperable. And with reciprocal altruism, fairness, and bipartisanship out the window, it’s not hard to imagine things getting worse. If a party could deny a vote on a nominee for the better part of a year (shrinking the Court to eight, one notices, which can be advantageous), could it do so longer? Delaying for years, perhaps four or eight? Why not, there are no rules against it. Years of obstruction would become years of 4-4 votes on the Court, a completely neutered branch of government, checks and balances be damned. Or, if each party packs the Court when it’s in power, we’ll have an ever-growing Court, a major problem. The judiciary automatically aligning with the party that also controls Congress and the White House is again the serious weakening of a check and balance. Democrats may want a stable, liberal Court around some day to strike down rightwing initiatives coming out of Congress and the Oval Office. True, an expanding Court will hurt and help parties equally, and parties won’t always be able to expand, but for any person who sees value in real checks on legislative and executive power, this is a poor idea. All the same can be said for obstruction.

Here is a better idea. The Constitution should be amended to reflect the new realities of American politics. This is to preserve functionality and meaningful checks and balances, though admittedly the only way to save the latter may be to undercut it in a smaller way elsewhere. The Court should permanently be set at nine justices, doing away with expansions. Election year appointments should be codified as obviously fine. The selection of a new justice must pass to one decision-making body: the president, the Senate, the House, or a popular vote by the citizenry. True, doing away with a nomination by one body and confirmation by another itself abolishes a check on power, but this may be the only way to avoid the obstruction, the tied Court, the total gridlock until a new party wins the presidency. It may be a fair tradeoff, sacrificing a smaller check for a more significant one. However, this change could be accompanied by much-discussed term limits, say 16, 20, or 24 years, for justices. So while only one body could appoint, the appointment would not last extraordinary lengths of time.

For more from the author, subscribe and follow or read his books.

Review: ‘The Language of God’

I recently read The Language of God. Every once in a while I read something from the other side of the religious or political divide, typically the popular books that arise in conversation. This one interested me because it was written by a serious scientist, geneticist Francis Collins, head of the Human Genome Project. I wanted to see how it would differ from others I read (Lewis, Strobel, Zacharias, McDowell, Little, Haught, and so forth).

You have to give Collins credit for his full embrace of the discoveries of human science. He includes a long, enthusiastic defense of evolution, dismantles the “irreducible complexity” myth, and the science he cites is largely accurate (the glaring exception being his assertion that humans are the only creatures that help each other when there’s no benefit or reward for doing so, an idea ethology has entirely blown up). He also dismisses Paley’s dreadful “Watchmaker” analogy, sternly warns against the equally unwise “God of the Gaps” argument (lack of scientific knowledge = evidence for God), stands against literal interpretations of the Bible, and (properly) discourages skeptics from claiming evolution literally disproves a higher power. Some of this is a departure from the other writers above, and unexpected.

Unfortunately, Collins engages in many of the same practices the other authors do: unproven or even false premises that lead to the argument’s total collapse (there’s zero evidence that deep down inside all humans have the same ideas of right and wrong, if only we would listen to the “whisper” of the Judeo-Christian deity), argument by analogy, and other logical fallacies. Incredibly, he even uses the “God of the Gaps” argument, not even 20 pages before his serious warning against it (we don’t know what came before the Big Bang, what caused it, whether multiple universes exist, whether our one universe bangs and crunches ad infinitum…therefore God is real). The existence of existence is important to think about, and perhaps we do have a higher power to thank, but our lack of scientific knowledge isn’t “evidence for belief,” as the subtitle puts it. It’s “nonevidence” for belief. It’s “God of the Gaps.” The possibility of God being fictional remains, as large as ever. Overall, Collins doesn’t carry over principles very well, seeing the weakness of analogy, “God of the Gaps,” and literal biblical interpretations but using them anyway (it is possible Genesis has untruths, but of course not the gospels). Weird, contradictory stuff.

Overall, the gist of the book is “Here are amazing discoveries of science, but you can still believe in God and that humans are discovering God’s design.” Which is fine. While trust in science forces the abandonment of literal interpretations of ancient texts (first man from dirt, first woman from rib, birds being on earth before land animals, etc.), faith and science living in harmony isn’t that hard. You say “God did it that way” and move on. Evolution was God’s plan, and so forth. That’s really all the chapters build toward (Part 2, the science-y part, has three chapters: the origins of the universe chapter builds toward the “We don’t know, therefore God” argument, while the life on Earth and human genome chapters conclude with no argument at all, just the suggestion that “God did it that way.” I found this unsettling. In any case, “evidence for belief” wasn’t an accurate subtitle, as expected).

Finally, I was disappointed Collins didn’t dive deeper into his conversion to the faith, a subject that always interests me. He cites just one (poor) argument from C.S. Lewis that caused him to change his mind about everything, the right and wrong proposition mentioned above. I would have liked more of his story.

For more from the author, subscribe and follow or read his books.

A More Plausible God

Sometimes I worry I will burn in hell for not following the One True Religion. This lasts about two minutes, however. That’s all the time it takes to recall how unlikely — insane even — the idea seems.

If we assume that a deity or deities exist, it seems more reasonable to assume there is no punishment (of a miserable, torturous nature anyway) for non-belief. It’s simply a question of how likely it is that a higher power would be an immoral monster or a total madman. Whichever the One True Religion is, throughout history countless millions (almost without question billions) have been born, lived, and died without ever hearing about it. Even today, as Daniel Dennett points out in Breaking the Spell, “whichever religion is yours, there are more people in the world who don’t share it than who do.” There may be two billion Christians or Muslims, but the global population is nearly eight billion, and plenty in remote parts of the world won’t hear of either, and still more won’t ever be proselytized to or decide to study them (after all, how many Christians would undertake a serious, thoughtful study of Shenism, Sikhism, Santería, or Zoroastrianism, or grow beyond the most minimal understandings of major faiths like Islam, Hinduism, or Buddhism?). The idea that a god would bring eternal suffering to such people is mind-boggling. It would have to be evil or insane. But that’s the deity described in various religions — going by an honest description of the texts, not the dogmatic one about how this God is all love, justice, and forgiveness. Yet if a supernatural being of superior intellect and power exists, it’s likely a little more reasonable than that. If there’s a wager to be made (better than that faulty Pascal’s Wager), it’s that if a god or gods exist, they’d be too moral and sensible to send people off to be tortured for something they had no control over. Perhaps instead all people reach paradise regardless of belief, or there is no afterlife for some, or no afterlife for any of us, or some go to a place that isn’t paradise but isn’t uncomfortable either, or it’s all determined by one’s deeds, not beliefs. Who knows? There are countless options far more moral!

Careful readers will notice there’s a bit of an assumption there. When I was young and devout, I used to imagine the Judeo-Christian God found a way to make sure every person across the globe heard about him — and, after the resurrection, Jesus. People would read about them, someone would speak of them, or God would appear or make himself known in some fashion, to cover those in secluded and faraway places. If a deity exists, we assume it has the power to do this, so the above assumes it’s refraining — that could be a critical error. All true. Yet that may not ease the being’s moral culpability much. Suppose you go through your life and suddenly hear of Shenism — you saw it mentioned in an article somewhere. You read the article, but didn’t study the religion. You didn’t think to, you have your own religion you’re sure is true, you’re busy and forgot, you prefer learning about other things, and so on. Missing your moment, do you deserve eternal punishment? Have you “made your choice”? Let’s go further and imagine God ensures every human being receives enough knowledge about the One True Religion to make an “informed choice.” Suppose you learn about Islam in school, or have a Muslim co-worker. You hear all about the faith — you even study it on your own, earnestly. But you’re just not convinced, the evidence and reasoning don’t seem strong enough — no, thank you. You’ll stick to Christianity or atheism or Hinduism or whatever, inadvertently rejecting the One True Religion, sealing your fate. If this is how affairs are arranged, billions aren’t persuaded and will burn. Some people will be swayed, maybe everyone who gets a flashy visit from God himself will convert, but the vast majority of humanity is toast. (And surely not all those billions recognized the One True Religion as true but ignored it for sinful, selfish reasons — I can hear that ludicrous line coming from the Christians.) So, do you deserve hell? Because what you heard or read didn’t convince you? Did you “make your choice”? One could phrase it that way, but do you really choose to believe something is true? Or do you simply believe it’s true? In any case, what kind of being would torture good people for eternity because they weren’t convinced of something? Being unpersuaded…that’s your sin! Now burn. It would again probably have to be an immoral monster or a total madman.

So if it seems plausible that a deity is more likely to be a moral and sensible being, who wouldn’t issue everlasting damnation on people who didn’t hear about her or simply weren’t convinced by the evidence and reasoning available and presented, there isn’t too much work remaining. God is clearly a reasonable fellow, and in that light special cases can be considered. What of apostates? Perhaps you belonged to the One True Religion and left it. This is too similar to the above musings to warrant much discussion — if you can be forgiven for not being convinced, mightn’t you also be forgiven for no longer being convinced? But what of atheists and agnostics who don’t follow any faith? Same story, that’s simply not being convinced of something. If the gods are moral and sensible enough to not torture someone unconvinced by the One True Religion, why would they torture someone unconvinced by the One True Religion and all false ones? This is why my worry, as both apostate and atheist, dissipates quickly. If God exists, he’s probably good enough to not do X, and if he’s good enough to not do X he’s probably good enough to not do Y.

This could all be wrong, of course. It could be that a higher power exists and he’s simply a tyrant, completely immoral and irrational in word and deed, shipping people to hell regardless of whether they’ve heard of him, regardless of how bad the “evidence” is. (Or only tormenting atheists and apostates!) We should sincerely hope the Judeo-Christian god, for instance, doesn’t exist or is at least radically different than advertised in holy books (he has a long history of choosing less moral options and even punishing people for things they had no control over, such as the sins of the father). Or it could be the deity is mad and wicked in the opposite way. It may have been former pastor Dan Barker who wrote that a god who only lets atheists and agnostics into paradise, as a reward for thinking critically, while letting believers burn, could easily exist. Humorous, yet entirely possible (the “evidence” for each is of comparable quality). Millions of gods could be and have been theorized. But it makes some sense to suppose a higher power would be moral, because it presumably created us, and we have a moral outrage about all this, at least in modern times: most people, even many believers, are horrified at the thought of billions being tortured forever because they believed differently through no real fault of their own. We would figure out “options far more moral,” like those above, if given the power. Wouldn’t the creator be more moral, more loving and forgiving, than the created? Can mortals really surpass the gods in ethical development, in an interest in fairness and minimizing harm? Regardless, in sum, it’s simply up to us to decide if it’s most plausible that an existent deity would be good and sane — if so, damning the vast majority of humanity to hell for not knowing about, studying, or being convinced of the One True Religion seems highly implausible.

For more from the author, subscribe and follow or read his books.

How to Write and Publish a Book (Odds Included)

My experience with writing books and finding publishers is extremely limited, but a few early insights might make it easier for others interested in doing the same. The following, it should be noted, relates to nonfiction works — the only world I know — but most of it could probably be applied to novels.

First, write your book. Take as much or as little time as needed. I cranked out the first draft of Racism in Kansas City in four months and promptly began sending it out to publishers (April 2014). Why America Needs Socialism I wrote off and on for six years, at the end throwing everything out (300 pages) and starting over (though making much use of old material), finishing a new version in five months. Just make your work the absolute best it can be, in terms of content and proper grammar. But you can reach out to certain publishers before your manuscript is wholly finished. Pay attention to the submission guidelines, but for most publishers it’s not a big deal (many ask you to explain how much of the work is complete and how long it will take you to finish). I feel safest having the manuscript done, and it would likely be risky to reach out if you didn’t have over half the piece written — your proposal to publishers will include sample chapters, and if they like those they will ask for more: the whole manuscript thus far.

You’ll scour the internet for publishers who print books like yours and who accept unsolicited materials, meaning you can contact them instead of a literary agent. If you want the big houses like Simon & Schuster, Penguin Random House, or HarperCollins, you’ll need an agent, and I have no experience with that and thus have no advice. But a million small- and medium-sized publishers exist that will accept unsolicited queries from you, including significant places like Harvard University Press or Oxford University Press.

Following the firm’s guidelines on its website, you’ll generally email a book proposal, which offers an overview, target audience, table of contents, analysis of competing titles, author information, timeline, and sample chapter. If there’s no interest you won’t usually get a reply, but if there is you’ll be asked for the manuscript. It’s an easy process; there’s simply a lot of competition and varying editor interests and needs, so you have to do it in volume. Keep finding houses and sending emails until you’re successful. From March to May 2018, I sent a proposal for Why America Needs Socialism to 91 publishers. Eight (about 9%) requested the full manuscript, and two (about 2%) wanted to publish it. The terms of the first offer were unfavorable (walk away if you have to), but by September, after seven months of searching, a home for the book was secured: Ig Publishing in New York City.

The same technique and persistence are required when seeking blurbs of praise for the back cover and promotional materials. You simply find ways to call or email a dozen or so other authors and prominent people, explain your book and publisher, and then a handful of them (four, in my case) accept your manuscript and agree to write a sentence of praise if they like it (or write a foreword, or peer review it, or whatever you seek). It is very convenient for nonfiction authors that so many of the folks you’d want to review your book are university professors. You simply find Cornel West’s email address on Harvard’s faculty page. Similarly, you shotgun emails to publications when the book comes out and ask them to review it. I sent a query to 58 magazines, papers, journals, and websites I thought would be interested in reviewing Why America Needs Socialism, offering to send a copy. Seven (12%) asked for the book in order to review it; two others invited me to write a piece on the work myself for their publications.
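The “odds” above are nothing more than response rates, and you can track your own the same way. Here is a purely illustrative sketch in Python, using the figures from my campaigns (the helper name is my own invention, nothing standard):

```python
# Illustrative only: compute query-campaign response rates from the figures
# mentioned above (91 proposals, 8 manuscript requests, 2 offers;
# 58 review queries, 7 requests for the book).

def rate(responses, queries):
    """Return the response rate as a whole-number percentage."""
    return round(100 * responses / queries)

print(f"Manuscript requests: {rate(8, 91)}%")   # about 9%
print(f"Publication offers:  {rate(2, 91)}%")   # about 2%
print(f"Review requests:     {rate(7, 58)}%")   # about 12%
```

Swap in your own tallies as you go; the point is simply that single-digit percentages are normal, so volume matters.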

I didn’t keep such careful records of my Racism in Kansas City journey, but after I began submitting proposals it took three months to find a publisher who agreed to publish the work — temporarily. I made the mistake of working for 10 months with a publisher without a contract. At times, publishers will ask you to make revisions before signing a contract, a big gamble (one I wasn’t even really aware of at the time). This publisher backed out due to the national conversation on race sparked by Mike Brown’s death and subsequent events through late 2014 and early 2015 — seemingly counterintuitive for a publisher, but they were more used to tame local histories than what I had produced, a history of extreme violence and subjugation. So the search continued.

Writing a local story, at least a nonfiction work, certainly limits your house options. Some out-of-state presses, like those above, will take interest, but generally your best bet lies with the local presses. And unfortunately, there aren’t many of them where I reside. The University of Missouri Press was shutting down, and the University Press of Kansas (KU) wanted me to make revisions before they would decide — and I wasn’t looking to repeat a mistake. I didn’t approach every Kansas City-area publisher, but rather, feeling the pressure of much wasted time, decided to stop looking for a house and instead to self-publish (with Mission Point Press, run by the former head of Kansas City Star Books).

A traditional publisher pays all the costs associated with the book and you get an advance and a small royalty from each copy sold. (With Ig Publishing, I gave up an advance for a larger royalty — a worthwhile option if the book sells well.) With self-publishing, everything is in reverse: you pay a “nontraditional publisher” to birth the book — editing, cover design, maybe marketing and distribution — and you keep most of the profit from each copy sold (not all, as someone is printing it). There’s also the option of skipping a nontraditional publisher altogether and doing everything yourself, working only with a printer. A traditional house is the big prize for a writer, because it offers that coveted validation — a firm accepted your piece instead of rejecting it, like it rejected all those other authors. It’s about prestige and pride, and not having to say “Well…” after someone calls you a published author. But self-publishing can give you more control over the final product, in some circumstances more money over time, and it works well for a local book (it’s Kansas City readers and bookstores that want a book on Kansas City, so I don’t have to worry about marketing and distribution in other cities).

The whole process is an incredible adventure: the intense learning process of researching and writing, the obsession, the hunt for and exhilaration over a publisher, the dance and give-and-take with editors who see needed changes (“murder your darlings”), the thousands of footnotes you format (kidding, it’s hell), finding key individuals to write a foreword or advanced praise, getting that first proof in the mail, releasing and marketing your work, winning coverage and reviews in the press, giving book talks and interviews, hearing a reader say how much what you created meant to him, learning your book is a classroom text, being cited by other authors or asked to give advanced praise yourself, being recognized by strangers, seeing your work in libraries and bookstores across the city, and the country, and even the world.

For more from the author, subscribe and follow or read his books.

Christianity and Socialism Both Inspired Murderous Governments and Tyrants. Should We Abandon Both?

It is often argued that because the ideas of Marx and socialistic thinkers were the ideologies of ruthless people like Stalin and states like the Soviet Union, such ideas are dangerous and must be abandoned. What’s interesting to consider is that the same could be said of Christianity and other belief systems held dear by many who make such arguments.

After all, Europe (and later the New World) was dominated by Christian states from the time of the late Roman Empire under Constantine and some 1,500 years thereafter, only weakening before secularism beginning in the 18th and 19th centuries. These states were ruled by Christian monarchs, often dictators with absolute power, many quite murderous indeed. Even when kings and queens were reined in by constitutions and power sharing with parliaments, the terrors continued. Nonbelievers, people of other faiths, and Christians that questioned or defied official doctrine, including many scientists, were exiled, imprisoned, tortured, maimed, or executed. It was a nasty business, from being sawed in half, groin to skull, to being burned alive. Wars against nations of other religions or other denominations of Christianity killed millions. This history was explored in When Christianity Was as Violent as Islam, so the reader is referred there for study. As hard as it may be for Christians to hear, these were governments and rulers that used indoctrination, fear, force, and murder against their own citizens to maintain and protect Christianity and its hold over nation-states. Kings and queens and officials at all levels of government believed fervently in Christianity and, as with religious leaders, weren’t afraid to mercilessly crush threats to it, no matter how small. If that sounds similar to what occurred in the Soviet Union and elsewhere with socialist ideology, it probably should.

One can imagine the protestations from the faithful. Something about how socialism led to more deaths, in a shorter timeframe, and in the modern age rather than more backward times. “So you see, socialism was way worse!” Perhaps the radical would then point out that, at least as of this moment, Christianity had a far longer reign of terror, about 1,500 years — while the first country calling itself “socialist” was only birthed a century ago. It might also be argued that there have been more oppressive states that called themselves Christian than called themselves socialist — recall that Christianity dominated Europe, the Americas, and other places (and with such a great length of time comes many new states). A full tally, actual careful study, would be necessary. Same for questions about “Well, the percentage of socialist nations that went bad is way higher than the percentage of Christian countries that went bad, therefore –” And on and on. The argument over what state ideology was worse seems somewhat pointless, however. Suppose it was conceded that socialism was indeed worse. That doesn’t erase the fact that these belief systems, with their tentacles around rulers and regimes, both inspired terrible crimes. That leaves the central question to consider: If we look at history and see that a belief system has caused great horrors, should we abandon that belief system and encourage others to do the same?

Here the Christian and the socialist may find some common ground, both supposing no. But the answer is more likely to be no for my belief system, yes for yours. Things then devolve into arguments over differences, real or perceived, between the ideologies. The Christian may focus on what we could call the beginning and the end of ideologies, a view that 1) the origins of a belief system and 2) the modern relationship to state power are what matter most to this question of whether a belief system that has caused much horror should be forgotten.

The discussion might go something like:

“Christianity’s founding texts call for love and peace, whereas Marx saw necessary a violent revolution against monarchs and capitalists!”

“Well, that didn’t seem to stop Christian governments and rulers from engaging in their own violence and oppression, did it?”

“It’s one thing to take something originally pure and twist it, do evil with it. But socialism started with a document approving of violence.”

“You know socialism existed before Marx’s writings, right? Before he left boyhood? He later refined and popularized it, but didn’t invent it (and many who advocated for it before him were Christians). And recall that the New Testament isn’t too kind to women, gays, and slaves, justifying much oppression and many atrocities throughout history. Also, wasn’t the U.S. birthed in violent revolution against the powerful? Marx’s writings and 1770s American writings like the Declaration and Paine’s works sound pretty similar, if you bother to read them. Calls for revolution are sometimes justified, even to you.”

And:

“Many Christians don’t want an officially Christian country anymore. Church and state can be separate; we just want religious freedom. But socialists want an officially socialist country. You can’t separate socialism from government. Not in the way we’ve separated Christianity from government.”

“True, that is a difference. Government structure, law, and services are integral to socialism.”

While the first point doesn’t have much significance, the second point is a good one, an interesting one. It highlights the fundamental difference between the ideologies. You can separate Christianity from government, or Islam from government, but you can’t do so with socialism (however defined), any more than you could separate monarchism or representative democracy from government. A reasonable person could perhaps argue that a belief system with past horrors should be put to rest if it cannot be separated from power. But surely it’s not a line as clear as that; it only widens the discussion. The reader may fully support representative democracy, but it has caused many terrors as well, from the election of the Nazis to the 3 million civilians the U.S. killed in Vietnam. Should belief in representative democracy be abandoned on those grounds? The reader may likewise support the military and patriotism, both difficult to separate from government, both with very dark histories in our own country and others. And so on. (Conversely, philosophies that can be separated from state power are still capable of great evil, such as free market capitalism, or Islamic and Christian terror sects.)

Perhaps the real question, then, is can ideologies, whether or not they can meaningfully exist outside the political system, successfully cleanse themselves of their sins, or, rather, separate the wheat from the chaff? Can we reject the more virulent strains of belief systems and the people who follow them, leaving only (or mostly) the better angels of their natures?

Christians rightly understand that Christianity can be divorced from violence and oppression, even if it wasn’t in certain times, places, and people — and isn’t in a few places and people today. They understand that the problems Christianity attempts to solve, the missions of the faith, could be addressed in many ways, some more ethical than others. If one’s concern is that souls in other lands are lost and must be saved, Christians could engage in bloody conquest and forced conversion, as of course happened in history, or instead peaceful missionary work. Different people have different ethics (especially in different times, societies, and institutions) and will go about addressing problems and goals differently. It’s that simple. Importantly, Christians also understand that one method doesn’t necessarily lead to the other. The slippery slope fallacy isn’t one you usually hear in this context: no Christian thinks peaceful missionary work automatically leads to violent, repressive methods of bringing people into the faith. They know that the things they care about — belief in Christ’s divinity and resurrection, a relationship with the deity, a right way of living based on scriptures — can be imparted to others without it leading to tyranny and mass murder. Despite an ugly history, we all know this to be so.

Socialism, with terrible things done in its name as well, is a similar story. The ideology had its proponents willing to use terror, but it had even more peaceful advocates, from those famous on the Left like Eugene Debs, Dorothy Day, and Bertrand Russell to those famous to all, documented in Why America Needs Socialism: The Argument from Martin Luther King, Helen Keller, Albert Einstein, and Other Great Thinkers. (And don’t forget the peaceful Christian Socialists!) The things socialists care about — workers owning and running their workplaces, universal government programs to meet human needs, prosperity for all, people’s control over government — can be fought for and implemented without violence and subjugation. (This of course leaves out the debate concerning what socialism is and how it differs from communism and other ideologies, but that has been handled elsewhere and it seems reasonable to put that aside, as we’re also excluding the discussion of what “true Christianity” is, whether true Christianity involves top-down oppression and terror or bottom-up peace and love, whether it’s Catholicism or a sect of Protestantism, etc.) The societal changes socialists push for have already been achieved, in ways large and small, without horrors all over the world, from worker cooperatives to systems of direct democracy to universal healthcare and education, public work programs guaranteeing jobs, and Universal Basic Income (see Why America Needs Socialism). These incredible reforms have occurred in democratic, free societies, with no signs of Stalinism on the horizon.

The slippery slope fallacy is constantly applied to socialism and basically any progressive policy (remember, racial integration was once decried as communism), but it doesn’t have any more merit than when it is applied to Christianity. Those who insist that leaders and governments set out to implement these types of positive socialistic reforms only for everything to slide into dictatorship and awfulness as a result are simply divorced from historical knowledge. Generally, when you actually study how nations turned communist, you see that a Marxist group, party, or person already deeply authoritarian achieved power and then ruled, expectedly, in an authoritarian manner, implementing policies that sometimes resemble what modern socialists call for but often do not (for example, worker ownership of the workplace is incompatible with government ownership of the workplace; direct democratic decision-making is incompatible with authoritarian control; and so forth). It’s authoritarians who are most likely to use violence in the first place; anti-authoritarians generally try to find peaceful means of creating change, if possible. So not only are the reforms socialists desire being won around the world today without death and destruction, a serious study of history shows that those reforms don’t lead to such things; rather, it’s a matter of groups and persons with violent or oppressive tendencies gaining power and acting predictably, just as when a Christian or Christian group with violent and oppressive tendencies gains power, past or present. The missions of socialism, as with Christianity, can be achieved in ethical ways.

Knowing Christianity and socialism, despite brutal pasts, can operate in today’s world in positive, peaceful ways, knowing that ideologies, people, and societies can change over time for the better, one sees little reason to abandon either based solely on their histories. A Christian may reject socialism on its own merits, opposing, for example, worker ownership of workplaces (or, if thinking more of communism, government ownership of workplaces); likewise, a socialist may reject Christianity on its own merits, disliking, say, beliefs unsupported by quality evidence. But to reject an ideology because of its history of violence surely necessitates rejecting your own; and to give your own a pass because it can exist benignly surely necessitates extending the same generosity to others. Remember, dear reader, the words of Kwame Ture (Stokely Carmichael):

You don’t judge Christianity by Christians. You don’t judge socialism by socialists. You judge Christianity by its principles irrespective of Christians. You judge socialism by its principles irrespective of those who call themselves socialists. Where’s the confusion?

For more from the author, subscribe and follow or read his books.

Saving Dr. King and Others From the Capitalist “Memory Hole”

The socialist press around the world will mark January 18, 2021, with celebrations of Martin Luther King, Jr.’s fervent rejection of capitalism and resounding advocacy for socialism, in an attempt to rescue his political and economic philosophy from George Orwell’s “memory hole.” This was the chute in 1984 where embarrassing truths were sent to their destruction. Mainstream media outlets will remember Dr. King’s “I have a dream” speech, but forget that he also said, “We must see now that the evils of racism, economic exploitation, and militarism are all tied together.”

But Martin Luther King, Jr. Day is also a fine opportunity for the left press to note that King belongs to a pantheon of famous historical figures who were, to the surprise of many admirers, committed socialists. King questioned the “captains of industry” and their ownership over the workplace, the means of production (“Who owns the oil?… Who owns the iron ore?”), and believed “something is wrong with capitalism. There must be a better distribution of wealth, and maybe America must move toward a democratic socialism.” Other celebrated heroes believed the same and were likewise very public about their views — and, like King, their words and work in support of socialism, as they each understood it, have been erased from historical memory.

Orwell was sucked down a memory hole, too. Remembered today primarily for his critiques of the communist Soviet Union in 1984 and Animal Farm, he was a self-described democratic socialist who spent time in Spanish radical communities, saw capitalist society as “the robbers and the robbed,” and wrote that

Socialism is such elementary common sense that I am sometimes amazed that it has not established itself already. The world is a raft sailing through space with, potentially, plenty of provisions for everybody; the idea that we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions seems so blatantly obvious that one would say that no one could possibly fail to accept it unless he had some corrupt motive for clinging to the present system.

Helen Keller’s story ends in the popular imagination when she is a young girl, first learning to communicate through sign language and later speech and writing. But as an adult, Keller was a fiery radical, pushing for peace, disability rights, and socialism. She wrote, “It is the labor of the poor and ignorant that makes others refined and comfortable.” While capitalism is the few growing rich off the labor of the many, “socialism is the ideal cause.” Keller went on to write: “How did I become a socialist? By reading… If I ever contribute to the Socialist movement the book that I sometimes dream of, I know what I shall name it: Industrial Blindness and Social Deafness.”

The socialism of a certain famous physicist is often lost under the weight of gravity, space, and time. Albert Einstein insisted on “the establishment of a socialist economy,” criticizing how institutions function under capitalism, how “private capitalists inevitably control, directly or indirectly, the main sources of information (press, radio, education).” He continued: 

[The] crippling of individuals I consider the worst evil of capitalism. Our whole educational system suffers from this evil. An exaggerated competitive attitude is inculcated into the student, who is trained to worship acquisitive success as a preparation for his future career… The education of the individual [under socialism], in addition to promoting his own innate abilities, would attempt to develop in him a sense of responsibility for his fellow men in place of the glorification of power and success in our present society.

Mohandas Gandhi, with his commitment to nonviolent resistance and civil disobedience in British-occupied India, was an inspiration for King. But the two also shared a commitment to socialism. Gandhi connected these ideas, insisting that socialism must be built up from nonviolent noncooperation against the capitalists. “There would be no exploitation if people refuse to obey the exploiter. But self comes in and we hug the chains that bind us. This must cease.” He envisioned a unique socialism for India and a nonviolent pathway to bringing it about, writing, “This socialism is as pure as crystal. It, therefore, requires crystal-like means to achieve it.”

The list of famous historical figures goes on and on: Langston Hughes, Ella Baker, H.G. Wells, Ralph Waldo Emerson, Angela Davis, Pablo Picasso, Nelson Mandela. They ranged from democratic socialists to communists, but all believed we could do better than capitalism, that we could in fact build a better world. They agreed with King’s other dream.

“These are revolutionary times,” King declared. “All over the globe, men are revolting against old systems of exploitation and oppression, and out of the wombs of a frail world new systems of justice and equality are being born.”

Let socialists spend the 2021 Martin Luther King, Jr. Day excavating not only King’s radicalism, but the radicalism of so many like him.

This article first appeared in The Democratic Left: https://www.dsausa.org/democratic-left/saving-martin-luther-king-jr-and-others-from-the-capitalist-memory-hole/

For more from the author, subscribe and follow or read his books.

We Just Witnessed How Democracy Ends

In early December, a month after the election was called, after all “disputed” states had certified Biden’s victory (Georgia, Arizona, Nevada, Wisconsin, Michigan, and Pennsylvania), with some certifying a second time after recounts, after 40-odd lawsuits from Trump, Republican officials, and conservative voters had failed miserably in the American courts, the insanity continued: about 20 Republican states (with 106 Republican members of the House) sued to stop electors from casting their votes for Biden; only 27 of 249 Republican congresspersons would acknowledge Biden’s victory. Rightwing media played along. A GOP state legislator and plenty of ordinary citizens pushed for Trump to simply use the military to stay in power.

At this time, with his legal front collapsing, the president turned to Congress, the state legislatures, and the Electoral College. Trump actually pushed for the Georgia legislature to replace the state’s 16 electors (members of the 2020 Electoral College, who were set to be Biden supporters after Georgia certified Biden’s win weeks prior) with Trump supporters! Without any ruling from a court or state in support, absurd imaginings and lies about mass voter fraud were to be used to justify simply handing the state to Trump — a truly frightening attack on the democratic process. Officials in other battleground states got phone calls about what their legislatures could do to subvert election results as well (state secretaries later being asked to “recalculate” and told things like “I just want to find 11,780 votes”). And it was theoretically possible for this to work, if the right circumstance presented itself. ProPublica wrote that

the Trump side’s legislature theory has some basis in fact. Article II of the U.S. Constitution holds that “each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors” to vote for president as a member of the Electoral College. In the early days of the republic, some legislatures chose electors directly or vested that power in other state officials. Today, every state allocates presidential electors by popular vote…

As far as the Constitution is concerned, there’s nothing to stop a state legislature from reclaiming that power for itself, at least prospectively. Separately, a federal law, the Electoral Count Act of 1887, provides that whenever a state “has failed to make a choice” in a presidential election, electors can be chosen “in such a manner as the legislature of such State may direct.”

Putting aside how a battle between certified election results and misguided screams of election fraud might be construed as a “failure to make a choice” by a Trumpian judge somewhere, the door is open for state legislatures to return to the days of divorcing electors from the popular vote. The challenge, as this report went on to say, is that in these battleground states, the popular vote-elector connection “is enshrined in the state constitution, the state’s election code or both,” which means that change was only impossible in the moment because a party would need dominant political power in these states to change the constitutions and election codes — needing a GOP governor, control of or supermajorities in both houses of the legislature, even the passing of a citizens’ vote on the matter, depending on the state. Republican officials, if willing to pursue this (and true, not all would be), couldn’t act at that particular moment in history because success was a political impossibility. Wisconsin, Michigan, and Pennsylvania, for instance, had Democratic governors and enough legislators to prevent a supermajority veto override. But it isn’t difficult to envision a parallel universe or future election within our own reality where a couple states are red enough to reclaim the power to appoint electors and do so, returning someone like Trump to office and making voting in their states completely meaningless.

In the exact same vein, House Republicans laid plans to challenge and throw out electors in January. (Republicans even sued Mike Pence in a bizarre attempt to make the courts grant him the sole right to decide which electoral votes count! Rasmussen, the right-leaning polling firm, liked this idea, favorably lifting up a (false) quote by Stalin saying something evil in support of Pence throwing out votes.) This was theoretically possible, too. Per the procedures, if a House rep and senator together challenge a state’s slate of electors, both houses of Congress must then vote on whether to confirm or dismiss those electors. Like the state legislature intervention, this was sure to fail only due to fortunate political circumstances. The Independent wrote, “There’s no way that a majority of Congress would vote to throw out Biden’s electors. Democrats control the House, so that’s an impossibility. In the Senate, there are enough Republicans who have already acknowledged Biden’s win (Romney, Murkowski, Collins and Toomey, to name just a few) to vote with Democrats.” Would things have gone differently had the GOP controlled both houses?

All that would be needed for such acts to succeed is judges willing to go along with them. Given that such changes are not unconstitutional, final success is imaginable, whether in the lower courts or in the Supreme Court, where such things would surely end up. It’s encouraging to see, both recently and during Trump’s term, that the judicial system has remained strong, continuing to function within normal parameters while the rest of the nation went mad. In late 2020, efforts by Trump and the right wing to have citizen votes disqualified, and other disgusting moves based on fraud claims, were tossed out of the courtrooms due to things like lack of something called “evidence.” Even the rightwing Supreme Court, with three Trump appointees, refused to get involved, shooting down Trump’s nonsense (much like Trump’s own Justice Department and Department of Homeland Security). Yet we waited with bated breath to see if this would be so. It could have gone the other way — votes thrown out despite the lack of evidence, such decisions upheld by higher courts, results overturned. That’s all it would have taken this time — forget changing how electors are chosen or Congress rejecting them! If QAnon types can make it into Congress, if people like Trump can receive such loyalty from the congresspersons who approve Supreme Court justices and other judges, if someone like Trump can win the White House and be in a position to nominate justices, the idea of the absurdity seeping into the judicial system doesn’t seem so far-fetched. Like other presidents, Trump appointed hundreds of federal judges. And if that seems possible — the courts tolerating bad cases brought before them — then the courts ruling that states can return to anti-democratic systems of old eras or tolerating a purge of rightful electors seems possible, too. Any course makes citizen voting a sham.

The only bulwark against the overturning of a fair election and the end of American democracy was basically luck, comprising, before January 6 at any rate: 1) a small group of Republican officials being unable to act, and/or 2) a small group of judges being unwilling to act. It isn’t that hard to imagine a different circumstance that would have allowed state legislators or Congress to terminate democracy and/or seen the Trumpian insanity infecting judges like it has voters and elected officials. In this sense, we simply got extremely lucky. And it’s worth reiterating that Number 1 needn’t even be in the picture — all you need is enough judges (and jurors) to go along with the foolishness Trump and the GOP brought into the courtroom and real democracy is finished.

(If you’re interested in exactly how many people would be required to unjustly hand a battleground state to the loser, the answer is 20: a U.S. district judge and a twelve-person jury, a majority of a three-judge appellate panel, and a majority of the Supreme Court (1 + 12 + 2 + 5). The number drops to just eight if the district court somehow sees a bench trial, without a jury. But at most, the sanity of just 20 people stands between democracy and chaos in each state. In this election, one state, any of the battleground states, would not have been enough to seize the Electoral College; you would have needed three of them, meaning at most five Supreme Court justices and 45 judges and jurors. In this sense, this election was far more secure than some future election that hinges on one state.)

Then on January 6, we learned our luck included something else as well: 3) a military that, like our justice system, hadn’t lost its mind yet. More on that momentarily.

When January 6 arrived, and it was time for Congress to count and confirm the Electoral College votes, GOP House reps and senators indeed came together to object to electors, forcing votes from both houses of Congress on elector acceptance. Then a Trumpian mob, many of them armed, overwhelmed the police and broke into the Capitol building to “Stop the Steal,” leaving five people dead and IEDs needing disarming — another little hint at what a coup, of a very different sort, might look and feel like. Though a few Republicans changed their minds, and plans to contest other states were scrapped, 147 Republican congresspersons still voted to sustain objections to the electors of Arizona and Pennsylvania! They sought to not confirm the electoral votes of disputed states until an “election audit” was conducted. Long after the courts (59 cases lost by then), the states (some Republican), and Trump’s own government departments had said the election was free and fair, and after they saw how Trump’s lies could lead directly to deadly violence, Republicans kept playing along, encouraging continued belief in falsities and risking further chaos. They comprised 65% of GOP House members and 15% of GOP senators. This time, fortunately, there wasn’t enough congressional support to reject electoral votes. Perhaps next time there will be — and a judicial system willing to tolerate such a travesty.

Recent times have been a true education in how a democracy can implode. It can do so without democratic processes, requiring a dear leader spewing lies, enough of the populace to believe those lies, enough of the most devout to take violent action, and military involvement or inaction. If armed supporters storm and seize the Capitol and other places of power, then it doesn’t really matter what the courts say, but this only ultimately works if the military does it or acquiesces to it. While the January 6 mob included active soldiers and veterans, it had no support from the branches, which instead offered condemnation and a protective response. That was fortunate, but next time we may not be so lucky. But the end of the great experiment can also happen through democratic processes. Democratic systems can eliminate democracy. Other free nations have seen democracy legislated away just as they have seen military coups. You need a dear leader spewing lies to justify acts that would keep him in charge, enough of the populace to believe those lies, enough of the dear leader’s party to go along with those lies and acts for power reasons (holding on to branches of government), and enough judges to tolerate such lies and approve, legitimize, such acts. We can count our lucky stars we did not see the last one this time, but it was frightening to witness the first three.

Trump’s conspiracy theories about voter fraud began long before the election (laying the groundwork to question the election’s integrity, only if he lost) and continued long after. Polls suggested eight or nine of every ten Trump voters believed Biden’s victory wasn’t legitimate; about half of Republicans agreed. So many Republican politicians stayed silent or played along with Trump’s voter fraud claims, cementing distrust in the democratic process and encouraging the spread of misinformation, which, like Trump’s actions, increased political division, the potential for violence, and the odds of overturning a fair election. As with voter suppression, gerrymandering, denying justice confirmation votes, and much else, it is clear that power is more important than democracy to many Republicans. Anything to keep the White House, Congress, the Supreme Court. You can’t stand up to the Madman and the Masses. The Masses adore the Madman, and you can’t lose their votes (or, if an earnest supporter, go against your dear leader). Some politicians may even fear for their safety.

It was frightening to realize that democracy really does rest, precariously, on the truth. On human recognition of reality, on sensible standards of evidence, on reason. It’s misinformation and gullibility that can end the democratic experiment, whether by coup or democratic mechanisms.

What would happen next? First, tens of millions of Americans would celebrate. They wouldn’t be cheering the literal end of democracy, they would be cheering its salvation, because to them fraud had been overcome. So a sizable portion of the population would exist in a delusional state, completely disconnected from reality, which could mean a relatively stable system, like other countries that drifted from democracy. Perhaps the nation simply continues on, in a new form where elections are shams — opening the door to further authoritarianism. Despite much earnest sentiment toward and celebration of democracy, there is a troubling popularity of authoritarianism among Trump voters and to a lesser extent Americans as a whole. Unless the rest of the nation became completely ungovernable, whether in the form of nationwide strikes and mass civil disobedience or the actual violence that the typically hyperbolic prophets of “civil war” predict, there may be few alternatives to a nation in a new form. Considering Congress would need high Republican support to remove a president, or considering Congress would be neutered by the military, an effective governmental response seems almost impossible.

We truly witnessed an incredible moment in U.S. history. It’s one thing to read about nations going over the cliff, and another to see the cliff approaching before your very eyes. Reflecting the rise of authoritarians elsewhere in history, Trump reached the highest office using demagoguery (demonizing Mexicans and illegal immigrants, Muslims, China, the media, and other existential threats) and nationalism (promising to crush these existential threats and restore American greatness). The prejudiced masses loved it. As president, he not only acted upon his demagoguery, with crackdowns on all legal immigration, Muslim immigrants, and illegal immigrants, but also consistently gave the finger to democratic and legal processes: ordering people to ignore subpoenas, declaring a national emergency to bypass congressional power and get his wall built, abusing his power (even a Republican senator voted to convict him of this), and so on. Then, at the end, Trump sought to stay in office through lies and a backstabbing of democracy, the overturning of a fair vote. And even in all this, we were extremely lucky — not only that the judicial and military systems remained strong (it was interesting to see how unelected authorities can protect democracy, highlighting the importance of some unelected power in a system of checks and balances), but that Trump was always more doofus than dictator, without much of a political ideology beyond “me, me, me.” Next time we may not be so fortunate. America didn’t go over the cliff this time, but we must work to ensure we never approach it again.

For more from the author, subscribe and follow or read his books.

The Toolbox of Social Change

After reading one of my books, folks who aren’t involved in social movements often ask, in private or at public talks, “What can we do?” So distraught by horrors past and present, people feel helpless and overwhelmed, and want to know how we build that better world — how does one join a social movement, exactly? I often say it’s easy to feel powerless before all the daunting obstacles — and no matter how involved you get, you typically feel you’re not doing enough. Perhaps even the most famous activists and leaders felt that way. Fortunately, I continue, if you look at history it becomes clear that social change isn’t just about one person doing a lot. It’s about countless people doing just a little bit. Howard Zinn said, “Small acts, when multiplied by millions of people, can transform the world.” And he was right, as we’ve seen. Whatever challenges we face today, those who came before us faced even greater terrors — and they won, because growing numbers of ordinary people decided to act, decided to organize, to put pressure on the economically and politically powerful. I then list (some of) the tools in the toolbox of social change, which I have reproduced below so I can pass them along in written form.

The list roughly and imperfectly goes from smaller, less powerful tools to larger, more powerful ones. The first nine are largely done “together alone,” while the last nine are mostly in the realm of true organizing and collective action. Yet all are of extreme importance in building a more decent society. (The list sets aside, perhaps rightly, the sentiments of some comrades that there should be no participation in current electoral systems, favoring instead the use of every possible tool at one’s disposal.) This is in no way a comprehensive list (writing books is hopefully on this spectrum somewhere, alongside many other things), but it is enough to get the curious started.

 

Talk to people

Post on social media

Submit editorials / earn media attention

Sign petitions

Call / email / write the powerful

Donate to candidates

Donate to organizations

Vote for candidates

Vote for policy initiatives

Volunteer for candidates (phonebank / canvass / register or drive voters)

Volunteer for policy initiative campaigns (phonebank / canvass / register or drive voters)

Run for office

Join an organization

Launch a policy initiative campaign (from petition to ballot)

March / protest / picket (at a place of power)

Boycott (organized refusal to buy or participate)

Strike (organized refusal to return to work or school)

Sit-in / civil disobedience / disruption (organized, nonviolent refusal to leave a place of power, cooperate, or obey the law; acceptance of arrest)

For more from the author, subscribe and follow or read his books.

The Psychology of Pet Ownership

For years now, exhaustive psychological research and studies have concluded that a wealth of medical benefits exists for the individual who owns a pet. According to Abnormal Psychology (Comer, 2010), “social support of various kinds helps reduce or prevent depression. Indeed, the companionship and warmth of dogs and other pets have been found to prevent loneliness and isolation and, in turn, to help alleviate or prevent depression” (p. 260). Without companionship, people are far more likely to fall into depression when life presents increased stress. An article in Natural Health summarizes the medical advantages of pet ownership by saying, “researchers have discovered that owning a pet can reduce blood pressure, heart rate, and cholesterol; lower triglyceride levels; lessen stress; result in fewer doctor visits; and alleviate depression” (Hynes, 2005). Additionally, Hynes explains, “Infants who live in a household with dogs are less likely to develop allergies later in life, not only to animals but also to other common allergens.”

While immune system adaptation explains allergy prevention, a pet’s gift of reducing depression is multilayered. One of the most important components is touch therapy. The physical contact of petting a cat or dog provides a calming effect, comforting the owner and fighting off stress. The New York Times reports pets “provide a socially acceptable outlet for the need for physical contact. Men have been observed to touch their pets as often and as lovingly as women do” (1982). Physical touch in infancy is vital to normal brain development, and the need for contact continues into adulthood as a way to ease tension, express love, and feel loved. 

Another aspect of this phenomenon is unconditional love. Pets can provide people with love that is difficult or sometimes impossible to find from another person. In the article Pets for Depression and Health, Alan Entin, PhD, says unconditional love explains everything. “When you are feeling down and out, the puppy just starts licking you, being with you, saying with his eyes, ‘You are the greatest.’ When an animal is giving you that kind of attention, you can’t help but respond by improving your mood and playing with it” (Doheny, 2010). Pets are often the only source of true unconditional love a man or woman can find, and the feeling of being adored improves mood and self-confidence.

Not everyone is a pet person, which is why owning a pet will not be efficacious for everyone. Indeed, people who are already so depressed they cannot even take care of themselves will not see improvements. However, those who do take on the responsibility of owning a cat, dog, or any other little creature will see reduced depression simply because they are responsible for another living being’s life. In an article in Reader’s Digest, Dr. Yokoyama Akimitsu, head of Kyosai Tachikawa Hospital’s psychiatric unit, says pets help by “creating a feeling of being needed” (2000). This need, this calling to take care of the pet, will give the owner a sense of importance and purpose. It also provides a distraction from one’s life problems. These elements work in concert to battle depression.

Owning a pet also results in increased exercise and social contact with people. According to Elizabeth Scott, M.S., in her 2007 article How Owning a Dog or Cat Can Reduce Stress, dog owners spend more time walking than non-owners in urban settings. Exercise is known to burn off stress. Furthermore, Scott says, “When we’re out walking, having a dog with us can make us more approachable and give people a reason to stop and talk, thereby increasing the number of people we meet, giving us an opportunity to increase our network of friends and acquaintances, which also has great stress management benefits.” Increased exercise will also lead to an improved sense of well-being, due to endorphins released in the brain, and better sleep.

Finally, owning a pet simply staves off loneliness. Scott says, “They could be the best antidote to loneliness. In fact, research shows that nursing home residents reported less loneliness when visited by dogs than when they spent time with other people” (2007). Just by being there for their owners, pets eliminate feelings of isolation and sadness. They can serve as companions and friends to anyone suffering from mild or moderate depression.

For more from the author, subscribe and follow or read his books.

References

Brody, J. E. (1982, August 11). Owning a Pet Can Have Therapeutic Value. In The New York Times. Retrieved December 13, 2010, from http://www.nytimes.com/1982/08/11/garden/owning-a-pet-can-have-therapeutic-value.html?scp=1&sq=1982%20pets&st=cse

Comer, R. J. (2010). Abnormal Psychology (7th ed.). New York: Worth Publishers.

Doheny, K. (2010, August 18). Pets for Depression and Health. In WebMD. Retrieved December 13, 2010, from http://www.webmd.com/depression/recognizing-depression-symptoms/pets-depression

Hynes, A. (2005, March). The Healing Power of Animals. In CBS Money Watch. Retrieved December 13, 2010, from http://findarticles.com/p/articles/mi_m0NAH/is_3_35/ai_n9775602/

Scott, E. (2007, November 1). How Owning a Dog or Cat Can Reduce Stress. In About.com. Retrieved December 13, 2010, from http://stress.about.com/od/lowstresslifestyle/a/petsandstress.htm

Williams, M. (2000, August). Healing Power of Pets. In Reader’s Digest. Retrieved December 13, 2010, from http://www.drmartinwilliams.com/healingpets/healingpets.html

A Religious War

The Taiping Revolution was a devastating conflict between a growing Christian sect under Hong Xiuquan and the Qing Dynasty (1644-1911) government, one that resulted in the deaths of tens of millions of people. While the political forces within Hong’s “God Worshippers” wanted to solve the internal turmoil in China, and certainly influenced events, the Taiping Rebellion was a religious war. It was the influence of the West, more than the problems at home, that prompted the violence. While many rebellions had occurred before this one with no Christian influence, examining the viewpoint of the God Worshippers and the viewpoint of Qing militia leader Zeng Guofan will make it exceedingly clear that without the influence of Western religion, the Taiping Rebellion never would have occurred.

From the point of view of Hong Xiuquan, religion was at the heart of everything he did. The origins of his faith and his individual actions immediately after his conversion explain his later choices and those of his followers during the rebellion. According to Schoppa, Hong had a vision in which he vanquished demons throughout the universe, under orders from men whom Hong later determined to be God and Jesus Christ. Hong believed that Christ was his older brother and that he was thus “God’s Chinese son” (71). Hong studied Liang Fa’s “Good Works to Exhort the Age,” which we examined during our discussion. Liang Fa emphasized that his conversion stemmed partly from the need to be pardoned of sin and partly from a desire to do good deeds to combat evil and eradicate it from his life (Cheng, Lestz 135). Reading Liang’s writings after the life-changing vision brought Hong to Christianity. It is essential to note that, as Schoppa puts it, “In his comprehension of the vision, Hong did not immediately see any political import” (71). All Hong was concerned about at this point was faith, not the Manchu overlords. He was so impassioned he would “antagonize his community by destroying statues of gods in the local temple” (Schoppa 71). What Hong would have done with his life had he not become a Christian is impossible to say. He had repeatedly failed the civil service examination; perhaps he would have had to take up farming like his father (Schoppa 71).

Instead, he formed the God Worshipping Society. According to Schoppa, certain groups that joined declared the demons in Hong’s vision were the Manchu, and had to be vanquished (72). It was outside influences that politicized Hong’s beliefs. Yet even through the politicization one will see that at the heart of the matter is religion. The very society Hong wished to create was based on Christian ideals. Equality of men and women led to both sexes receiving equal land in Hong’s 1853 land system, the faith’s sense of community led to family units with shared treasuries, and church was required on the Sabbath day and for wedding ceremonies (Schoppa 73). Christianity brought about the outlawing of much urban vice as well, such as drinking or adultery. One might argue that behind all these Christian ideological policies were long-held Confucian beliefs. As we saw in “Qian Yong on Popular Religion,” eradicating gambling, prostitution, drugs, etc. was just as important to the elites and literati (those who had passed the examination) as it was to Hong (Cheng, Lestz 129-131). While there were indeed heavy Confucian influences on Hong’s teachings (evidenced by the Taipings’ Ten Commandments and the accompanying odes found in “The Crisis Within”), Schoppa makes it clear that “the Taiping Revolution was a potent threat to the traditional Chinese Confucian system” because it provided people with a personal God rather than simply the force of nature, Heaven (75). The social policies that emerged from Hong’s Christian ideals, like family units and laws governing morality, led Schoppa to declare, “It is little wonder that some Chinese…might have begun to feel their cultural identity and that of China threatened by the Heavenly Kingdom” (76). The point is, Hong never would have become a leader of the God Worshippers had Western Christianity not entered his life, and even after his growing group decided to overthrow the Manchu, the system of life they were fighting for and hoping to establish was founded on Christian beliefs. Just as Hong smashed down idols in his hometown after his conversion, so everywhere the God Worshippers advanced they destroyed Confucian relics, temples, and altars (Cheng, Lestz 148). The passion of Hong became the passion of all.

It was also the opinion of the Manchu government that this was a religious war. As the God Worshippers grew in number, Schoppa writes, “The Qing government recognized the threat as serious: A Christian cult had militarized and was now forming an army” (72). Right away, the Manchu identified this as a religious rebellion. “It was the Taiping ideology and its political, social, and economic systems making up the Taiping Revolution that posed the most serious threat to the regime” (Schoppa 73). This new threat prompted the Qing to order Zeng Guofan to create a militia and destroy the Taipings. “The Crisis Within” contains his “Proclamation Against the Bandits of Guangdong and Guangxi” from 1854. Aside from calling attention to the barbarism of the rebels, Zeng writes with disgust about Christianity and its “bogus” ruler and chief ministers. He mocks their sense of brotherhood, the teachings of Christ, and the New Testament (Cheng, Lestz 147). Zeng declares, “This is not just a crisis for our [Qing] dynasty, but the most extraordinary crisis of all time for the Confucian teachings, which is why our Confucius and Mencius are weeping bitterly in the nether world.” Then, regarding the destruction of Confucian temples and statues, Zeng proclaims that the ghosts and spirits have been insulted and want revenge, and that it is imperative the Qing government enact it (Cheng, Lestz 148). This rhetoric does not concern politics and government, Manchu or anti-Manchu. Zeng makes it obvious what he aims to destroy and why. He views the rebellion as an affront to Confucianism. The Christians, he believes, must be struck down.

With the leader’s life defined by Christianity, with a rebellious sect’s social structure based heavily on Christianity, with the continued destruction of Confucian works in the name of Christianity, and with the government’s aim to crush the rebellion in the name of Confucius and Mencius, can anyone rationally argue that the Taiping Rebellion was not a religious war? A consensus should now be reached! The rebellion’s brutality and devastation are all the more tragic when one considers the similar teachings of both sides of the conflict: the Confucian call for peaceful mediation of conflicts and the Christian commandment not to kill.

For more from the author, subscribe and follow or read his books.

Reference List

Pei-kai Cheng, Michael Lestz, and Jonathan D. Spence, eds., The Search for Modern China (New York: W.W. Norton & Company, 1999), 128-149.

R. Keith Schoppa, Revolution and Its Past (New Jersey: Prentice Hall, 2011), 71-76.

Designing a New Social Media Platform

In Delphi, Greece, μηδὲν ἄγαν (meden agan) was inscribed on the ancient Temple of Apollo — nothing in excess. Applying the famous principle to the design and structure of social media platforms could reduce a number of their negative effects: their addictive properties, online bullying, depression and lower self-worth, breakdowns in civility and their impact on political polarization, and so forth. Other problems, such as information privacy and the spread of misinformation (leading to all sorts of absurd beliefs, affecting human behaviors from advocacy to violence, with its own impact on polarization) will be more difficult to solve, and will involve proper management rather than UI changes (so they won’t be addressed here). The Social Dilemma, while mostly old news to anyone paying attention to such things, presents a good summary of the challenges and is worth a view for those wanting to begin an investigation.

A new, socially-conscious social media platform — we’ll call it “Delphi” for now — would be crafted to prevent such things to the extent possible, while attempting to preserve the more positive aspects of social media — the access to news and information, the sharing of ideas, exposure to differing views, the humor and entertainment, the preserved connections to people you like but just wouldn’t text or call or see. Because while breaking free and abandoning the platforms completely greatly improves well-being, the invention is as unlikely to disappear quickly as the telephone, so there should be some middle ground — moderation in all things, nothing in excess — between logging off for good and the more poisonous platforms we’re stuck with. People could then decide what works best for them. If you won’t break free, here’s at least something less harmful.

The new platform would do away with likes, comments, and shares. These features drive many of the addictive and depressive elements, as we all know; we obsessively jump back on to see how our engagement is going, and perhaps we can’t help but see this measurement as a measure of our own self-worth — of our looks, intelligence, accomplishments, whatever the post “topic” might be. Comparing this metric to those of others, seeing how many more likes others get, can only worsen our perceptions of self, especially for young girls. Instagram is toying with removing public like counts, while still allowing users to see theirs in the back end, which is barely helpful. All three features should simply be abolished. With Delphi, one would post a status, photo, video, or link and simply have no idea how many friends saw it or reacted to it. Have you ever simply stopped checking your notifications on current platforms? It is quite freeing, in my experience. You know (suspect) people are seeing a post, but you have no clue how many or what their reactions are. There’s no racing back on to count the likes or reply to a compliment or battle a debater or be hurt by a bully. You’re simply content, as if you had painted a mural somewhere and walked away.

There are, of course, likely workarounds here. Obviously, if someone posted a link I wanted to share, I could copy the address and post it myself. (There may be a benefit to forcing people to open a link before sharing it; maybe we’d be more likely to actually read more than the headline before passing the piece on.) This wouldn’t notify the original poster, who would only know (suspect) that I’d stolen the link if they saw my ensuing post. Likewise, there’s nothing to stop people from taking screenshots of posts or copy-pasting text and using such things in their own posts, with commentary, unless we programmed the platform to detect and prevent this, or to detect and hide such things from the original poster. But you get the idea: you usually won’t see any reaction to your content.

Delphi wouldn’t entirely forsake interaction, however. It would replace written communication and emoji reactions with face-to-face communication. There would in fact be one button to be clicked on someone’s post, the calendar button, which would allow someone to request a day, time, and place to meet up or do a built-in video call to chat about the post (a video call request could also be accepted immediately, like FaceTime). The poster could then choose whether to proceed. As everyone has likely noticed, we don’t speak to each other online the way we do in person. We’re generally nastier due to the Online Disinhibition Effect; the normal inhibitions, social cues, and consequences that keep us civil and empathetic in person largely don’t exist. We don’t see each other the same way, because we cannot see each other. Studies show that, compared to verbal communication, we tend to denigrate and dehumanize other people when reading their written disagreements, seeing them as less capable of feeling and reason, which can increase political polarization. We can’t hear tone or see facial expressions, the eyes most important of all, creating fertile ground for both unkindness and misunderstandings. So let’s get rid of all that, and force people to talk face-to-face. No comments or messenger or tags or laugh reacts. Not only can this reduce political divisions by placing people in optimal spaces for respectful, empathetic discourse, it can greatly reduce opportunities for bullying.
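To make this a bit more concrete, here is a minimal sketch of how that single interaction might be modeled. Everything below (the Post and CalendarRequest classes, the respond function) is a hypothetical illustration of the idea, not an actual Delphi implementation:

```python
# Minimal sketch of Delphi's one interaction: the calendar request.
# All names and fields here are hypothetical, for illustration only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
import uuid


@dataclass
class Post:
    """A post carries content only: no like, comment, or share counts."""
    author: str
    content: str
    post_id: str = field(default_factory=lambda: str(uuid.uuid4()))


@dataclass
class CalendarRequest:
    """The only thing a viewer can send in response to a post."""
    post_id: str
    requester: str
    proposed_time: datetime
    place: Optional[str] = None      # None means a built-in video call
    accepted: Optional[bool] = None  # poster decides; None = pending


def respond(request: CalendarRequest, accept: bool) -> CalendarRequest:
    """The poster accepts or declines; no other reaction data exists."""
    request.accepted = accept
    return request


# Usage: a friend requests a video chat about a post; the poster accepts.
post = Post(author="maria", content="Thoughts on the new mural downtown")
req = CalendarRequest(post_id=post.post_id, requester="sam",
                      proposed_time=datetime(2021, 6, 1, 19, 0))
respond(req, accept=True)
```

Note what the sketch leaves out on purpose: there are no like, comment, or share fields to count, and the only state a viewer can change is a pending calendar request the poster may accept or decline.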

The goal is to only get notifications (preferably just in-app, not via your phone) for one thing: calendar requests. Perhaps there would also be invitations to events and the like, but that’s the general idea. This means far less time spent on the platform, which is key because light users of social media are far less impacted by the negative effects.

To this end, Delphi would also limit daily use to an hour or so, apart from video calls. No more mindless staring for four hours. Nothing in excess.
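As a rough sketch of how that cap might be enforced, assuming simple per-user session tracking (the function names and the 60-minute constant are illustrative assumptions, not a spec):

```python
# Rough sketch of the daily time limit, excluding video-call minutes.
# The names and the 60-minute cap are illustrative assumptions.
from collections import defaultdict
from datetime import date

DAILY_LIMIT_MINUTES = 60

# (user, day) -> minutes of ordinary (non-video-call) use
usage = defaultdict(int)


def record_session(user, minutes, is_video_call):
    """Video calls don't count against the daily cap."""
    if not is_video_call:
        usage[(user, date.today())] += minutes


def can_browse(user):
    """The feed locks once today's ordinary use reaches the cap."""
    return usage[(user, date.today())] < DAILY_LIMIT_MINUTES


record_session("sam", 45, is_video_call=False)
record_session("sam", 30, is_video_call=True)   # doesn't count
print(can_browse("sam"))  # True: only 45 of 60 minutes used
```

Excluding video-call minutes from the cap follows the logic above: face-to-face conversation is the one activity the platform wants more of, not less.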

Much of the rest would be similar to what’s used today. We’d have profiles, pages, friends, a feed (the endless scroll problem is solved by the time limit). Abandoning the feed completely has benefits (returning to a world where you have to visit a profile or a page to see what’s happening), such as less depression-inducing peer comparison (look at how beautiful she is, how amazing his life is, and so on), but that could mean that one doesn’t really bother posting at all, knowing (suspecting) only a couple people will visit his or her profile. And one would also be less likely to be exposed to differing views if one has to seek them out. A feed may be necessary to keep some of the positive effects mentioned earlier. But perhaps going in the other direction could help — say, a feed just for pages and news, and a feed for friends, granting the ability to jump back and forth and ignore for a while so-and-so’s incredible trip to Greece.

For more from the author, subscribe and follow or read his books.

Faith and Science

It’s a bit odd how religious persons say things like “Science is for understanding the natural world, faith is for understanding the supernatural or spiritual world” as if these two methods of learning what is real are equally valid. They clearly are not.

“Science” basically means “testing.” You formulate a hypothesis, devise a way to test it, and judge the results to see what’s true about the natural world. Now, it is true that some theories can’t be tested or haven’t been tested and are inappropriately presented as fact or likely to be fact. It’s also true that sometimes science is wrong — tests are flawed, good tests yield inaccurate results due to unexpected phenomena, or results are misjudged or misinterpreted. Yet over time, science grows more accurate. Tests are repeated over years, decades, and centuries, giving us further confidence in findings. New individuals administer such tests, weeding out biases. New tests are designed, looking at long-studied phenomena from different angles and in new ways. In these two ways, (1) the ability to actually test ideas and (2) improved understanding over time, science helps us know what’s true.

Faith has neither of these things. First, it might be noted that when you hear the statement in the first paragraph, the speaker is typically talking about one faith, his or her own. Christian faith helps you understand the spiritual world, the true spiritual world; Hinduism, Scientology, Islam, and Buddhism won’t help you know the supernatural world, for those are of course all false religions. In any case, we’ll assume a more open-minded stance, because some do believe that there is truth in all faiths, that all roads lead to Rome.

Where science is an ever-growing body of knowledge based on testing over time and into the modern age, faiths typically present more or less fixed bodies of ideas based on writings from comparatively primitive ancient cultures — desert tribes from the Middle East, for example, in the case of Islam, Christianity, and Judaism. Such writings describe higher powers, afterlives, the meaning of life, and so on. Ideas, supposed knowledge, of the supernatural world. Unfortunately, there is no way to test to see if any of these notions are true. The ideas could easily be man-made fictions. You may believe your Hindu or Christian or Islamic faith helps you know and understand the spiritual world, but, to put it bluntly, that spiritual world may not exist. There is no test to run to find out. (And no, Jesus being resurrected is not a “test,” nor are miracles, answered prayers, or feeling God speaking to you. The obvious problem with this poor counterargument is that these things cannot be tested for validity either. They could easily be human fictions and imaginings as well, as explained in detail elsewhere. Think about it. If someone doubts photosynthesis, you can teach him how to test to see if photosynthesis is real; there is no test you can use to show him a god or goddess is actually speaking to you, that it isn’t just in your head. “Try faith yourself, you’ll see the proof, believing is seeing” is the best a believer can say, possibly just drawing the fellow into human fictions and imaginings as well — there is no way to test to know otherwise.) This is in stark contrast to science; we can have confidence that the natural world exists, and we are able to put ideas to the test to actually see what’s true or false, or most likely to be so.

Linked with the lack of “knowledge” verification or falsifiability, of course, is simply the fact that ideas about the spiritual world cannot grow more accurate over time. Ancient scriptures aren’t typically added to. (In modern times, at least.) Texts are of course reinterpreted, gods are reimagined, ways the faithful think they should live change. For a long time, for most American Christians, God was fine with the enslavement of blacks, and the Bible was used to justify it, without difficulty given what’s in it. Today things are quite different. Religions may change as societies do, and the faithful may feel they gain more knowledge by studying the scriptures more deeply, but no one will ever discover that we only spend 1,000 years in heaven, not eternity. Christianity won’t change when someone announces, after much research, that God has a couple wives up in heaven. Newly discovered ancient writings won’t become holy scripture. As stated, it’s all a fairly fixed set of ideas about the supernatural realm. The immutable nature of religious “knowledge” is of course celebrated by the faithful — everything we need to know about the spiritual world was written thousands of years ago, we don’t need more than what God gave us or any improved accuracy, everything’s accurate and must be preserved. (This is in contrast to scientists, who can make history, really make names for themselves, by disproving long-held scientific theories; there’s a personal incentive not to preserve doctrine but to blow it up.) But the supposed knowledge and its assumed accuracy are untestable and could easily be false, and there’s no process of gaining more knowledge or improving accuracy over time to really hammer out whether these things are false or true. Imagine if science were like this — no way to know if germ theory is correct, no centuries spent gaining more information and developing better and better medicines. In its ability to test ideas and improve understanding as time goes on, science, unlike faith, is an actual method of learning what is real. (All this should not be surprising, given that “faith” is often defined, by believers and nonbelievers alike, as “belief without proof.”)

In sum, science is a useful tool for gaining objective knowledge about the natural world. Faith is simply hoped to be a tool for gaining what could easily be imagined knowledge about an imagined spiritual realm. These things are hardly the same.

For more from the author, subscribe and follow or read his books.

Merit Pay

“Too many supporters of my party have resisted the idea of rewarding excellence in teaching with extra pay, even though we know it can make a difference in the classroom,” President Barack Obama said in March 2009. The statement foreshadowed the appearance of teacher merit pay in Obama’s “Race to the Top” education initiative, which grants federal funds to top performing schools. Performance, of course, is based on standardized testing, and in the flawed Race to the Top, so are teacher salaries. Teacher pay could rise and fall with student test scores.

Rhetoric concerning higher teacher salaries is a good thing. Proponents of merit pay say meager teacher salaries are an injustice, and such a pay system is needed to alleviate the nation’s teacher shortage. However, is linking pay to test scores the best way to “reward excellence”? Do we know, without question, it “can make a difference in the classroom”? The answers, respectively, are no and no. Merit pay is an inefficient and potentially counterproductive way to improve education in American public schools. It fails to motivate teachers to better themselves or remain in the profession, it encourages unhealthy teacher competition and dishonest conduct, and it poorly serves certain groups, such as special education students.

Educator Alfie Kohn, author of the brilliant Punished by Rewards, wrote an article in 2003 entitled “The Folly of Merit Pay.” He writes, “No controlled scientific study has ever found a long-term enhancement of the quality of work as a result of any incentive system.” Merit pay simply does not work. It has been implemented here and there for decades, but is always abandoned. A good teacher is intrinsically motivated: he teaches because he enjoys it. She teaches because it betters society. He teaches because it is personally fulfilling. Advocates of merit pay ignore such motivation, but Kohn declares, “Researchers have demonstrated repeatedly that the use of such extrinsic inducements often reduces intrinsic motivation. The more that people are rewarded, the more they tend to lose interest in whatever they had to do to get the reward.” Extra cash sounds great, but it is destructive to the inner passions of quality teachers.

Teachers generally rank salaries below excessive standardization and unfavorable accountability measures on their lists of grievances (Kohn, 2003). Educators leave the profession because they are being choked by federal standards and control, and politicians believe linking pay to such problems is a viable solution? Professionals also generally oppose merit pay, disliking its competitive nature. Professor and historian Diane Ravitch writes that an incentive “gets everyone thinking about what is good for himself or herself and leads to forgetting about the goals of the organization. It incentivizes short-term thinking and discourages long-term thinking” (Strauss, 2011). Teaching students should not be a game, with big prizes for the winners.

Further, at issue is the distorted view of students that performance pay perpetuates. Bill Raabe of the National Education Association says, “We all must be wary of any system that creates a climate where students are viewed as part of the pay equation, rather than young people who deserve a high quality education” (Rosales, 2009). In the current environment of high-stakes tests (which do not really evaluate the quality of teaching at all), merit pay is just another way to encourage educators to “teach to the test,” or worse: cheating. The nation has already seen public school teachers under so much pressure they resort to modifying their students’ scores in order to save their salaries or their jobs.

It is clear that merit pay does not serve young learners, but this is especially true in the case of special education students. The Individuals with Disabilities Education Act (IDEA) requires states that accept federal funding to provide individual educational services to all children with disabilities. While the emphasis on “inclusion” of SPED children in regular classrooms is appropriate, the students are also included in the accountability statutes of No Child Left Behind. SPED students are required to meet “adequate yearly progress” (AYP) standards based on high-stakes tests in reading, math, and science, like other students. While some youths with “significant cognitive disabilities” (undefined by federal law) can take alternate assessments, there is a cap on how many students can do so (Yell, Katsiyannas, & Shiner, 2006, pp. 35-36). Most special education students must be included in standardized tests.

The abilities and the needs of special education students are too diverse to be put in the box that is a standardized test. SPED students are essentially being asked to perform at their chronological grade level, and for some students that is simply not possible. How does that fit in with a Free Appropriate Public Education, the education program the IDEA guarantees, which focuses on “individualized” plans for the “unique needs” of the student? It does not. Progress is individual, not standardized. Further, linking teacher pay to this unreasonable accountability only makes matters worse. Performance pay will likely punish special education instructors. Each year, SPED students may make steady progress (be it academic, cognitive, social, emotional, etc.), but teachers will see their salaries stagnate or be slashed because such gains do not meet federal or state benchmarks. Such an uphill battle will discourage men and women from entering the special education field, meaning fewer quality instructors to serve students with disabilities.

When a school defines the quality of teaching by how well students perform on one test once a year, everyone loses. When pay is in the equation, it’s worse. Obama deserves credit for beginning to phase out NCLB, but merit pay is no way to make public schools more effective. If politicians want to pay good teachers better and weed out poor teachers, their efforts would be better directed at raising salaries across the board and reforming tenure.

For more from the author, subscribe and follow or read his books.

References

Kohn, A. (2003). The Folly of Merit Pay. Retrieved February 19, 2012 from https://www.alfiekohn.org/article/folly-merit-pay/.

Rosales, J. (2009). Pay Based on Test Scores? Retrieved February 19, 2012 from http://www.nea.org/home/36780.html.

Strauss, V. (2011). Ravitch: Why Merit Pay for Teachers Doesn’t Work. Retrieved February 19, 2012 from http://www.washingtonpost.com/blogs/answer-sheet/post/ravitch-why-merit-pay-for-teachers-doesnt-work/2011/03/29/AFn5w9yB_blog.html.

Yell, M. L., Katsiyannas, A., & Shiner, J. G. (2006). The No Child Left Behind Act, Adequate Yearly Progress, and Students with Disabilities. Teaching Exceptional Children, 38(4), 32-39.

Should We Talk About How Trauma Affects Police Behavior?

In the discussion of police brutality, generally speaking, one camp calls for sweeping, radical, even terminal changes to policing in order to end beatings and killings of civilians, while the other camp stresses that police officers have extremely dangerous, high-stress jobs and, while mistakes do occur at times, certain changes will only make things more dangerous for cops and for the public at large. There’s some talking past each other here, but perhaps one of the more significant things that is missed or simply isn’t much discussed is how these ideas are connected: of course people who go through trauma might be more likely to snap and murder someone for no reason at all.

A couple clarifications here. First, many on the Left will have little sympathy for the police no matter how traumatized someone might be by seeing dead bodies, blood and brains splattered about, raped children, and beaten wives, or by being shot at or otherwise attacked. After all, individuals who join police forces do so by choice, participate (whether aware of it or not) in an oppressive system that ensures the constant harassment and mistreatment of people of color, and so on. For some of my comrades, talking about how officer trauma might contribute to police brutality would be a major faux pas, offering excuses or a sympathetic ear to the other side in a rather uncomfortable way. Yet if police trauma does exist, and if it does contribute in some way to police brutality, it makes sense to think about it, discuss it, and figure out what to do about it. Sympathy isn’t required. Second, it should be clarified that acknowledging trauma as a possible cause of police violence doesn’t mean other causes, such as racism, machismo and power, poor training and use of force procedures, age, a dearth of education, complete lack of punishment, and so forth don’t exist and have devastating effects on society. (Another one is the human tendency to mistakenly see things you’re watching for. If you’re speeding and watching for cops, every other car begins to look like a cop. If you’re watching for guns or threatening movements from someone you’ve pulled over…) Finally, a discussion like this one isn’t meant to distract or deflect from the terrible trauma that victims of police violence live with for the rest of their lives. If there is a way we can stop one trauma from leading to another, we should pursue it.

We know officers’ experiences contribute to PTSD and other serious psychological and physiological problems. “Research has indicated that by the time police officers put on their uniform and begin general patrol, their stress-related cardiovascular reactivity is already elevated,” and this is followed by, generally speaking, “at least 900 potentially traumatic incidents over the course of their career.” Some officers will have bigger problems, if they came from the military and were traumatized in the bloodbath of war. Extreme stress and PTSD can lead to aggression and exaggerated startle response and recklessness; in police officers it’s been shown to lead to less control in decision-making “due to heightened arousal to threats, inability to screen out interfering information, or the inability to keep attention.” Academics in The Huffington Post and Psychology Today have connected occupational trauma to brutality, as have former officers on fervent pro-cop sites (for example, could reforms addressing trauma “reduce the number of inappropriate decisions some officers make? If we are concerned about the dysfunctional actions of some cops, is it possible that some of the fault lies with the rest of us who ignore the trauma that officers go through?”). More research would be valuable, but it’s a safe bet police trauma contributes to police brutality. (A connection also exists, by the way, between officer stress and violence against their romantic partners.)

This writer doesn’t have too much more to say on the matter — it simply seems important to connect the two ideas mentioned in the first paragraph, especially for those of us who care about justice and about encouraging others of very different views to care as well. “True, the police have dangerous jobs, but do you see how the extreme stress that most officers experience might make police brutality a serious problem? Perhaps there are other factors, too. Perhaps there are societal changes we can make that would address both officer PTSD or safety and police brutality against ordinary people.” It could be a way to build a bridge or find a sliver of common ground.

How to actually address such trauma will range widely, of course, from the reactionary, though valid, sentiments of police departments about the need for more mental healthcare to the radical (“Radical simply means grasping things at the root,” Angela Davis) idea that we “Abolish the Police.” After all, no police means no police trauma. And no police brutality. Convincing people that trauma contributes to brutality seems far easier than agreeing on how to solve these things.

This is a bit of an aside, but I’m still determining where I personally fall when it comes to what to specifically do about the police. I firmly believe that broad changes are needed concerning: who responds to certain nonviolent calls (it need not be quasi-soldiers, at least not as first responders); the allocation of resources, with reform devoting huge sums to addressing the root causes of crime, namely poverty, rather than to policing and other initiatives that only address the symptoms; the qualifications, education, training, evaluation, use of force procedures, and weaponry of those who respond to violent calls; what an individual can be pulled over or confronted or arrested for, just serious changes to law and policy; who investigates police misconduct (not the departments) and how abusive officers are punished, beginning with termination and blacklisting and ending with prison sentences; and much more. These things, perhaps combined with better mental healthcare and therapy, reduced hours, increased leave, shorter careers, and so forth for those facing traumatic situations, can reduce both the trauma and violence. (Although I don’t recall the specific incident, in the news a few years ago there was a report about how the officer who killed an unarmed black man in the evening had witnessed a murder or suicide that morning; taking him off duty seems like it would have been an obvious thing to do.) But I do suspect that modern societies will always have some traumatic situations and need individuals to enter them, whether it’s the police or something resembling the police. Perhaps more personal study is needed. I recently asked of my acquaintances:

I haven’t studied #PoliceAbolition or #PrisonAbolition theory with any depth. Currently, it seems likely to me that future human societies — more decent ones, with prosperity for all, unarmed response teams, restorative justice, and more — would still require some persons or groups authorized to use force against others in circumstances where de-escalation fails, and require some persons to be separated against their will from the general population, for the sake of its safety, during rehabilitation. These scenarios seem likely to be far rarer when we radically transform social conditions and societal policies, but not disappear completely. Can anyone recommend abolitionist literature that either 1) specifically makes the case that such circumstances would never occur and thus such force requirements are void, or 2) that argues such circumstances would indeed occur but specifically lays out how such requirements could be handled (force could be used) by alternative people or institutions without, over time, devolving back into something close to today’s police and prisons.

My mind may change as I go through some of the recommended readings, but as it stands I wonder if the number of individuals authorized to use force, their trauma, and their brutality can only be greatly reduced, rather than eradicated completely. While a better human society is possible and will be won, a perfect one may be out of reach.

For more from the author, subscribe and follow or read his books.

On Student Teaching

I am now two weeks from concluding my first student teaching placement (Visitation School), and my classroom management skills are still being refined. After observing for five days, slowly beginning my integration into a leadership role, I took over completely from my cooperating teacher. While excited to start, I initially had a couple of days when I found one 6th grade class (my homeroom) difficult to control. There were times when other classes stepped out of line, naturally, but the consistency with which my homeroom became noisy and rowdy was discouraging.

“They’re your homeroom,” my cooperating teacher reminded me. “They feel more at home in your classroom, and will try to get away with more.”

There were a few instances in which students took someone else’s property, or wrote notes to classmates, but the side chatter was the major offense. I would be attempting to teach and each table would have at least one student making conversation, which obviously distracts both those who wish to pay attention and those who don’t care. I would ask them to refocus and quiet themselves, which would work for but a few precious moments. There was one day, I remember, when I felt very much as if the students were controlling me, rather than the other way around, and I made the mistake of hesitating when I could have doled out consequences. I spoke to my cooperating teacher about it during our feedback session, and she emphasized to me that I needed to prove to the students my willingness to enforce the policies, that I had the same authority as any other teacher in the building.

At Visitation, the classroom management system revolves around “tallies,” one of which equals three laps at recess before one can begin play. My homeroom deserved a tally the day I hesitated. I needed to come up with a concrete, consistent way of disciplining disruptive behavior. So I went home and developed a simple system I had thought about a long time ago: behavior management based on soccer. I cut out and laminated a yellow card and a red card. The next day, I sat each class down in the hall before they entered the room, and told them the yellow card would be shown to them as a warning, the red card as a tally. These could be given to individuals or to the class as a whole, and, like soccer, a red card could be given without a yellow card.

The students were surprisingly excited about this. Perhaps turning punishment into a game intrigued them; regardless, it made me wonder if this would work. But it seemed discussing the expectations I had of them, and the enforcement of such expectations, helped a good deal. Further, I was able to overcome my hesitation that day and dole out consequences for inappropriate behavior. I gave my homeroom a yellow card and then a red card, and they walked laps the next day.

My cooperating teacher noted the system would be effective because it was visual for the students. I also found that it allowed me to easily maintain emotional control; instead of raising my voice, I simply raised a card in my hand, and the class refocused. Its visibility allowed me to say nothing at all.

While different in purpose and practice, this system draws important elements from the Do It Again system educator Doug Lemov describes, including no administrative follow-up and logical consequences, but most significantly group accountability (Lemov, 2010, p. 192). It holds an entire class responsible for individual actions, and “builds incentives for individuals to behave positively since it makes them accountable to their peers as well as their teacher” (p. 192). Indeed, my classes almost immediately started regulating themselves, keeping themselves accountable for following my expectations (telling each other to be quiet and settle down, for instance, before I had to say anything).

Lemov would perhaps frown upon the yellow card, and point to the behavioral management technique called No Warning (p. 199). He suggests teachers:

  • Act early. Try to see the favor you are doing kids in catching off-task behavior early and using a minor intervention of consequence to prevent a major consequence later.
  • Act reliably. Be predictably consistent, sufficient to take the variable of how you will react out of the equation and focus students on the action that precipitated your response.
  • Act proportionately. Start small when the misbehavior is small; don’t go nuclear unless the situation is nuclear.

I have tried to follow these guidelines to the best of my ability, but Lemov would say the warning is not taking action, only telling students “a certain amount of disobedience will not only be tolerated but is expected” (p. 200). He would say students will get away with what they can until they are warned, and will only refocus and cease their side conversations afterwards. Lemov makes a valid point, and I have indeed seen this happen to a degree. As a whole, however, the system has been effective, and most of my classes do not at all take advantage of their warning. Knowing they can receive a consequence without a warning has helped, perhaps. After a month of using the cards, I have given my homeroom a red card three times. In my other five classes combined during the same period, there have been two yellows and only one red. I have issued a few individual yellows, but no reds.

Perhaps it is counterproductive to have a warning, but I personally feel that since the primary focus of the system is on group accountability, I need to give talkative students a chance to correct their behavior before consequences are doled out for the entire class. Sometimes a reminder is necessary, the reminder that their actions affect their classmates and that they need to refocus. I do not want to punish the students who are not being disruptive along with those who are without issuing some sort of warning that they are on thin ice.

***

During my two student teaching placements this semester, I greatly enjoyed getting to know my students. It was one of the more rewarding aspects of teaching. Introducing myself and my interests in detail on the first day I arrived proved to be an excellent start; I told them I liked history, soccer, drawing, reading, etc. Building relationships was easy, as students seemed fascinated by me and had an endless array of questions about who I was and where I came from.

Art is something I used to connect with students. At both my schools, the first students I got to know were the budding artists, as I was able to observe them sketching in the corners of their notebooks and later ask to see their work. There was one girl at my first placement who drew a new breed of horse on the homeroom whiteboard each morning; a boy at my second placement was drawing incredible fantasy figures every spare second he had. I was the same way when I was their age, so naturally I struck up conversations about their pictures. I tried to take advantage of such an interest by asking students to draw posters of Hindu gods or sketch images next to vocabulary words to aid recall. Not everyone likes to draw, but I like to encourage the skill and at least provide them an opportunity to try. Beyond this, I would use what novels students had with them to learn about their fascinations and engage them, and many were excited I knew The Hunger Games, The Hobbit, and The Lord of the Rings. We would discuss our favorite characters and compare such fiction to recent films.

For all my students, I strove to engage them each day with positive behavior, including greeting them by name at the door, drawing with and for them, laughing and joking with them, maintaining a high level of interest in what students were telling me (even if they rambled aimlessly, as they had the tendency to do) and even twice playing soccer with them at recess. The Catholic community of my first placement also provided the chance to worship and pray with my kids, an experience I will not forget.

One of my successes was remaining emotionally cool, giving students a sense that I was calm, confident, and in control. Marzano (2007) writes, “It is important to keep in mind that emotional objectivity does not imply being impersonal with or cool towards students. Rather, it involves keeping a type of emotional distance from the ups and downs of classroom life and not taking students’ outbursts or even students’ direct acts of disobedience personally” (p. 152). Even when I was feeling control slipping away from me, I did my best to be calm, keep my voice low, and correct students in a respectful manner that reminded them they had expectations they needed to meet. Lemov (2010) agrees, writing, “An emotionally constant teacher earns students’ trust in part by having them know he is always under control. Most of all, he knows success is in the long run about a student’s consistent relationship with productive behaviors” (p. 219). Building positive relationships required mutual respect and trust, and emotional constancy was key.

Another technique I emphasized was the demonstration of my passion for social studies, to prove to them the gravity of my personal investment in their success. One lesson from my first placement covered the persecution of Anne Hutchinson in Puritan America; we connected it to modern sexism, such as discrimination against women in terms of wage earnings. Another lesson was about racism, how it originated as a justification for African slavery and how the election of Barack Obama brought forth a surge of openly racist sentiment from part of the U.S. citizenry. I told them repeatedly that we studied history to become dissenters and activists, people who would rise up and destroy sexism and racism. I told them I had a personal stake in their understanding of such material, a personal stake in their future, because they were the ones responsible for changing our society in positive ways. Being the next generation, ending social injustices would soon be up to them.

Marzano (2007) says, “Arguably the quality of the relationships teachers have with students is the keystone of effective management and perhaps even the entirety of teaching” (p. 149). In my observation experiences, I saw burnt out and bitter teachers, who focused their efforts on authoritative control and left positive relationship-building on the sideline. The lack of strong relationships usually meant more chaotic classrooms and more disruptive behavior. As my career begins, I plan to make my stake in student success and my compassion for each person obvious, and stay in the habit.

For more from the author, subscribe and follow or read his books. 

References

Lemov, D. (2010). Teach like a champion: 49 techniques that put students on the path to college. San Francisco, CA: Jossey-Bass.

Marzano, R. (2007). The art and science of teaching. Alexandria, VA: Association for Supervision and Curriculum Development.

Cop Car Explodes, Police Pepper Spray Passenger in Moving Vehicle During Plaza Protests

The events of 10:00pm to midnight on May 30, 2020 on Kansas City’s Plaza — protests and unrest following the police killing of George Floyd in Minneapolis — included the following. 

The police, in riot gear and gas masks, blockaded the intersections along Ward Parkway, refusing to allow newcomers, additional protesters, to move deeper into the Plaza, angering a small but growing crowd. “Let us through!” Journalists likewise were not allowed to enter. From the vantage point at the blockade, it was clear a gathering of protesters was locked in a standoff with police up around 47th and Wyandotte Street. The sound of helicopters, sirens, police radios and bullhorns, and protesters’ shouts clashed in the air. Sharp pops. The protesters inside fled west as one, as police dispersed tear gas. Much concern was voiced from the crowd at the barrier.

After a time, an explosion rocked the Plaza. “Shit!” exclaimed members of the crowd, among other variations — and even the police could not help but turn their heads away from the masses and look. It appeared a parked car, near where the standoff occurred, had been firebombed. The press later indicated it was a police car. “It’s going down, boy,” someone said. Flames and smoke rose high, and shortly thereafter firefighters arrived. Meanwhile, a man, tall and skinny, yelled at the police at the barrier, saying he was a veteran who fought for the rights the police trample upon — “You’re a fucking disgrace.” Two women likewise unleashed their anger.

Walking west along Ward Parkway, in an attempt to follow the group of runners from afar, revealed a bridal shop window smashed. Some jokes from observers about black people wanting to get married tonight — though there did not appear to be anything looted. A young woman and man huddled together nearby, the woman distraught over the scene. Soon the pair entered the store through the front door, quickly followed by a shouting cop. “She owns the place, man, it’s all right,” the observers said. The pair echoed this, and the cop recommended finding someone to board up the window. Various other storefronts were boarded up, in advance, along the street.

“I’m just trying to get to my fucking car,” a passerby said to an acquaintance, realizing he could not enter the parking garage due to the blockades. In the street, gas canisters, COVID-19 masks, abandoned signs, water bottles, graffiti. Another broken storefront window, more graffiti. A fire department vehicle with a smashed windshield. A black woman thanking a cop for being out tonight doing his job.

Reaching Broadway, where one could finally turn north, showed a few people arrested and sitting on the pavement outside the Capital Grille at the feet of the police. They did not seem a part of the fleeing protesters, and may have been taken out of their cars, which were along the street, doors open. Moving north, one met the protesters, now all scattered and disjointed, many moving south but some further west and some simply hanging out here and there. The faint sting of tear gas infected the eyes. Strangers made sure one was all right.

“H&M!” a man hollered triumphantly, a valuable bundle in his hands, before three cops on bikes appeared from nowhere, sirens blasting. The man and several other looters sprinted south down Broadway, pursued.

The central Plaza secured, the main confrontation point became the blockade where the crowd witnessed the car explosion, Ward Parkway and Wyandotte. The group grew considerably, to a few hundred, swelled by the protesters that had fled the tear gas a block north. It was young, diverse. The ranks of police were reinforced as well.

Protesters gathered in Ward Parkway, signs held high: “I Can’t Breathe,” “Black Lives Matter.” A few cars zipped around wildly in circles, as if to emphasize the protesters’ control of the street. A white car with four or five people in it pulled up and distributed water, while also providing the tunes. A dance circle formed for a time, while both sides held their ground. Skateboards, scooters shot by. A more festive atmosphere. A chant began — no justice, no peace. But mostly individuals had their say — calls for an end to police killings and abuse.

Eventually the police ordered the protesters to clear the streets and return to the sidewalks or face arrest. The street was full of people, but most were already on the sidewalks. The police seemed to select one individual to make an example of, and surged toward a white man with a sign, arresting him. Their orders ignored, the police pressed forward. Someone threw a water bottle at them. The police shook their gas cans ominously. “Scary ass motherfuckers,” a young woman said. Another woman was arrested. A man hollered, “The police started as slave-catchers! Not much has changed.” “You don’t have to do what your superiors say,” someone called out. Some taunted the black officers, the so-labeled “Uncle Toms.”

The police surged forward, pepper spray raised. A protester threw a brick or rock at them as everyone scrambled in retreat, by foot, scooter, or vehicle. The white car that had delivered water was in trouble, needing to back up toward the police in order to get out of its space and flee. Several officers walked up to the vehicle menacingly. “They’re going, they’re going!” shouted protesters. “Leave them alone!” An officer sprayed into the face of someone in the back seat as the vehicle backed up and lurched forward, the driver clearly panicked.

After pushing their line forward, the police then retreated back to their original position. The crowd then began moving forward, back to theirs.

The police announced that gas would be used if the crowd did not disperse, which the crowd had no interest in doing. The hiss of gas pierced the night air as cans were thrown, grey smoke billowing and streaking behind them. Pandemonium. Screams and shouts as all turned and ran, except for one brave soul who threw a can back. The tear gas burned, blinded. The police, marching forward, were quickly obscured, swallowed by smoke and distance, as the protesters splintered into three masses and fled east, south, and west.

The tear gas appeared to end the Plaza protest — by midnight the crowd had not reformed. However, a woman, leaning out the passenger window of a car moving down Ward Parkway, called out, “We’re going to Westport!”

The time is 3:40am on Sunday, May 31, 2020. Three of the four officers involved in George Floyd’s death have yet to be arrested.

For more from the author, subscribe and follow or read his books.

Capitalism and Coronavirus

A collection of thoughts on capitalist society during the 2020 COVID-19 outbreak:

On Necessity

The coronavirus makes clear more than ever why America needs socialism.

  • Many people don’t have paid sick leave and can’t go without a paycheck, so they go to work sick with the virus, spreading it. Workers should own their workplaces so they can decide for themselves whether they get paid sick leave.
  • Businesses are closing, leaving workers to rot, with no income but plenty of bills to pay. People forced to go into work have to figure out how to pay for childcare, since schools are closed. Kids are hungry because they rely on school for meals. We need a Universal Basic Income.
  • Without health insurance, lots of people won’t get tested or treated because they can’t afford it. There will be more people infected. There will be many senseless, avoidable deaths. We need universal healthcare, medical care for all people.
  • The bold steps needed to address this crisis won’t be taken, even if the majority of Americans want it to be so, because our representatives serve the interests of wealthy and corporate funders. We need participatory democracy, where the people have decision-making power.

This virus shines a glaring, painful light on the stupidities of free market capitalism, which is at this very moment encouraging the spread of a deadly disease and spelling financial ruin for ordinary people.

The current crisis screams for the need to build a new world.

On Purity

Imagine a deadly virus (this one or far worse) in a truly free market society:

  • Many businesses (and perhaps schools, all private) choosing to stay open to make profits, spreading the contagion. No closure orders.
  • As other businesses choose to close, and workers everywhere refuse to work, paychecks and jobs vanish, with no government unemployment or stimulus checks to help. Aid from nonprofits and foundations, donations from individuals and businesses, is all a hopeless drop in the ocean relative to the need.
  • No bailouts or stimulus funds for businesses. Small and large companies alike collapsing — worsening unemployment. Monopolization accelerates.
  • Infected persons dying because they can’t afford testing, treatment, or healthcare coverage in general (think the U.S.). Healthcare providers have to profit, there are no free lunches, and no government aid is on the way. Restricted access to healthcare, through low income or job (and benefit) loss, means a faster spread of the virus.
  • Would a government devoted to a fully free market society issue stay-at-home orders? If not, more people out and about, a wider spread.

A truly free market would make any pandemic a thousand times worse. A higher body count, a worse economic disaster.

On Distribution

Grocery stores are currently reminding us how slowly the law of supply-and-demand can function.

On Redistribution

In theory, seizing all wealth from the rich and redistributing it to the masses may be the only way to prevent societal collapse during a pandemic (whether this one or a far deadlier one).

80% of Americans possess less than 15% of the wealth in this country: drastic inequality. If a pandemic leads to mass closings of workplaces and the eradication of jobs, the State must step in to support the people and subsidize incomes. Without this, people lose access to food, water, housing, everything, and disaster ensues. However, in such a situation, government revenues will fall — less individual and corporate income to tax, sales tax revenue dwindling as people buy less, and so on. It is conceivable that the State, during a plague lasting years, would eventually lack the funds it needs. Solutions like borrowing from other nations might prove impossible, if the pandemic is global and other nations are experiencing the same shortfalls. The only solution may be to tax the rich (and wealthy, non-essential corporations) out of existence, allowing the State to continue supporting people.

(This may only stave off disaster, however. There will be diminishing returns if taxes on essential companies and landlords are too low. State money would be given to people, who would give it to businesses, which would only give small portions back to the State. The situation would likely then require appropriating most or even all of the revenue received from businesses that are still operating and sending it back to the consumers.)

On Insanity

A pandemic causing people to lose their healthcare (via job or income loss)… Insane.

On Collapse

During the COVID-19 crisis, we’ve seen jokes about how prosperous corporations suddenly on the verge of bankruptcy really should have been more careful with their money — buying less avocado toast, for instance. Having funds set aside for emergencies, taking on less debt, etc. Then they wouldn’t have gone from prosperous to desperate after mere weeks of fewer customers.

But businesses keeping next to nothing in the bank is inherent to capitalism. It is not universal (some firms do see the wisdom of keeping cash reserves for hard times, and some large corporations grow rich enough, and monopolize markets enough, to stockpile cash), but it is a general trend of the system. In the frenzied competition of the market, keeping money stored away is generally a competitive disadvantage. Every extra dime must be poured back into the business to keep growing, keep gaining market share, keep displacing competitors. If you’re not injecting everything back into the business, you risk falling behind and being crushed by the firms that are.

“It can’t wait,” John Steinbeck wrote in The Grapes of Wrath. “It’ll die… When the monster stops growing, it dies. It can’t stay one size.”

The competition that pushes firms forward in ordinary times can be their downfall in times of economic crisis.

On Outside Factors

The COVID-19 pandemic highlights the fact that poverty is caused by many factors beyond one’s control: unemployment as a direct result of a deadly virus and government action, for example. Perhaps being unemployed has something to do with the current availability of jobs and the needs of capitalists in the moment, rather than ordinary people’s supposed laziness.

On Socialized Medicine

The vaccine is a lovely example of how socialized medicine works (in other democracies and our own, with Medicare/Medicaid).

Companies create the healthcare treatments people need; hospitals and clinics acquire them (usually by purchasing them, rather than governments buying and distributing them); citizens choose among many providers to visit for the treatments; and the bill is sent straight to the government. The tax wealth of a nation ensures everyone has access to the care they need for a healthy, full life. This service is hugely popular in other nations and is often taken for granted.

Jokes about limited supplies and wait lists are about to expire (soon there will be enough vaccines for all), but that’s super instructive too. (We’ll put aside the fact that universal healthcare systems in other nations, while not perfect, don’t actually struggle with limited supplies or wait lists any more than the U.S., if you bother to do comparative research; again, these systems are far more popular in polls than our own, which would be odd if they were so terrible.) When treatments are limited, it makes sense to us to give them to the most vulnerable first. The rest of us can wait; give the vaccine to seniors first. We all recognize that as a more moral system than one where, say, those with enough money or the right job (with an insurer who won’t drop your ass to save a buck) get the treatment while everyone else can rot and die (the free market healthcare system). Treatments won’t always be limited, but when they are, providers (it’s not usually governments, but them too in crises) should prioritize by need, not wealth. That’s more ethical with the vaccine… why wouldn’t it be so with all forms of life-saving care?

On UBI

With Americans getting a taste of checks from the government, UBI’s future is bright.

For more from the author, subscribe and follow or read his books.

Bernie Will Win Iowa

Predicting the future isn’t something I make a habit of. It is a perilous activity, always involving a strong chance of being wrong and looking the fool. Yet sometimes, here and there, conditions unfold around us in a way that gives one enough confidence to hazard a prediction. I believe that Bernie Sanders will win Iowa today.

First, consider that Bernie is at the top of the polls. Polls aren’t always reliable predictors, and he’s neck-and-neck with an opponent in some of them, but it’s a good sign.

Second, Bernie raised the most money in Q4 of 2019 by far, a solid $10 million more than the second-place candidate, Pete Buttigieg. He has more individual donations at this stage than any candidate in American history, has raised the most overall in this campaign, and is among the top spenders in Iowa. (These analyses exclude billionaire self-funders Bloomberg and Steyer, who have little real support.) As with a rise in the polls, he has momentum like no one else.

Third, Bernie is the only candidate in this race who was campaigning in Iowa in 2016, which means more voter touches and repeat voter touches. This is Round 2 for him, an advantage — everyone else is in Round 1.

Fourth, don’t forget, Iowa in 2016 was nearly a tie between Bernie and Hillary Clinton. It was the closest result in the state’s caucus history; Hillary won just 0.3% more delegate equivalents. It’s probably safe to say Bernie is more well-known today, four years later; if he could tie then, he can win now.

Fifth, in Iowa in 2016, there were essentially two voting blocs: the Hillary Bloc and the Bernie Bloc. (There was a third but insignificant candidate.) These are the people who actually show up to caucus — what will they do now? I look at the Bernie Bloc as probably remaining mostly intact. He may lose some voters to Warren or others, as this field has more progressive options than last time, but I think his supporters’ fanatical passion and other voters’ interest in the most progressive candidate will mostly keep the Bloc together. The Hillary Bloc, of course, will be split between the many other candidates — leaving Bernie the victor. (Even if there is much higher turnout than in 2016, I expect the multitude of candidates to aid Bernie — and many of the new voters will go to him, especially if they’re young. An historic youth turnout is expected, and they mostly back Sanders.)

This last one is simply anecdotal. All candidates have devoted campaigners helping them. But I must say it. The best activists I know are on the case. They’ve put their Kansas City lives on hold and are in Iowa right now. The Kansas City Left has Bernie’s back, and I believe in them.

To victory, friends.

For more from the author, subscribe and follow or read his books.

The Enduring Stupidity of the Electoral College

To any sensible person, the Electoral College is a severely flawed method of electing our president. Most importantly, it is a system in which the less popular candidate — the person with fewer votes — can win the White House. That absurdity would be enough to throw the Electoral College out and simply use a popular vote to determine the winner. Yet there is more.

It is a system where your vote becomes meaningless, giving no aid to your chosen candidate, if you’re in your state’s political minority; where small states have disproportionate power to determine the winner; where white voters have disproportionate decision-making power compared to voters of color; and where electors, who are supposed to represent the majority of voters in each state, can change their minds and vote for whomever they please. Not even its origins are pure, as slavery and the desire to keep voting power away from ordinary people were factors in its design.

Let’s consider these problems in detail. We’ll also look at the threadbare attempts to justify them.

The votes of the political minority become worthless, leading to a victor with fewer votes than the loser

When we vote in presidential elections, we’re not actually voting for the candidates. We’re voting on whether to award decision-making power to Democratic or Republican electors. 538 people will cast their votes and the candidate who receives a majority of 270 votes will win. The electors are chosen by the political parties at state conventions, through committees, or by the presidential candidates. It depends on the state. The electors could be anyone, but are usually involved with the parties or are close allies. In 2016, for instance, electors included Bill Clinton and Donald Trump, Jr. Since they are chosen for their loyalty, they typically (though not always, as we will see) vote for the party that chose them.

The central problem with this system is that almost all states are all-or-nothing when electors are awarded. (Only two states, Maine and Nebraska, have acted on this unfairness, awarding some of their electors by congressional district rather than giving every elector to the statewide winner.) As a candidate, winning a state by a single citizen vote grants you all of its electors.

Imagine you live in Missouri. Let’s say in 2020 you vote Republican, but the Democratic candidate wins the state; the majority of Missourians voted Blue. All of Missouri’s 10 electors are then awarded to the Democratic candidate. When that happens, your vote does absolutely nothing to help your chosen candidate win the White House. It has no value. Only the votes of the political majority in the state influence who wins, by securing electors. It’s as if you never voted at all — it might as well have been 100% of Missourians voting Blue. As a Republican, wouldn’t you rather have your vote matter as much as all the Democratic votes in Missouri? For instance, 1 million Republican votes pushing the Republican candidate toward victory alongside the, say, 1.5 million Democratic votes pushing the Democratic candidate forward? Versus zero electors for the Republican candidate and 10 electors for the Democrat?

In terms of real contribution to a candidate’s victory, the outcomes can be broken down, and compared to a popular vote, in this way:

State Electoral College victor: contribution (electors)
State Electoral College loser: no contribution (no electors)

State popular vote victor: contribution (votes)
State popular vote loser: contribution (votes)

Under a popular vote, however, your vote won’t become meaningless if you’re in the political minority in your state. It will offer an actual contribution to your favored candidate. It will be worth the same as the vote of someone in the political majority. The Electoral College simply does not award equal value to each vote (see more examples below), whereas the popular vote does, by allowing the votes of the political minority to influence the final outcome. That’s better for voters, as it gives votes equal power. It’s also better for candidates, as the loser in each state would actually get something for his or her efforts. He or she would keep the earned votes, moving forward in his or her popular vote count, instead of getting zero electors and no progress at all.

But why, one may ask, does this really matter? When it comes to determining who wins a state and gets its electors, all votes are of equal value. The majority wins, earning the right to give all the electors to its chosen candidate. How exactly is this unfair?

It’s unfair because, when all the states operate under such a system, it can lead to the candidate with fewer votes winning the White House. It’s a winner-take-all distribution of electors; each state’s political minority votes are ignored, but those votes can add up. 66 million Americans may choose the politician you support, but the other candidate may win with just 63 million votes. That’s what happened in 2016. It also happened in the race of 2000, as well as in 1876 and 1888. It simply isn’t fair or just for a candidate with fewer votes to win. It is mathematically possible, in fact, to take the presidency with just 21.8% of the popular vote. While very unlikely, it is possible. That would mean, for example, a winner with 28 million votes and a loser with 101 million! This is absurd and unfair on its face. The candidate with the most votes should be the victor, as is the case with every other political race in the United States, and as is standard practice among the majority of the world’s democracies.
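To make the arithmetic concrete, here is a minimal sketch (in Python, using purely hypothetical state sizes and vote counts, not real election data) of how winner-take-all allocation lets narrow wins in a few states outweigh a landslide loss in another:

```python
# Hypothetical illustration: three states, two candidates (A and B).
# Each tuple is (electors, votes for A, votes for B). All numbers are invented.
states = [
    (10, 510_000, 490_000),  # A wins narrowly -> A takes all 10 electors
    (10, 505_000, 495_000),  # A wins narrowly -> A takes all 10 electors
    (15, 200_000, 800_000),  # B wins in a landslide -> B takes all 15 electors
]

# Winner-take-all: the statewide winner receives every elector.
electors_a = sum(e for e, a, b in states if a > b)
electors_b = sum(e for e, a, b in states if b > a)

# Popular vote: every ballot counts toward the national total.
votes_a = sum(a for _, a, _ in states)
votes_b = sum(b for _, _, b in states)

print(f"A: {votes_a:,} votes, {electors_a} electors")  # A: 1,215,000 votes, 20 electors
print(f"B: {votes_b:,} votes, {electors_b} electors")  # B: 1,785,000 votes, 15 electors
# A wins the Electoral College while losing the popular vote by 570,000.
```

Under a popular vote, the second tally would decide the race; under winner-take-all electors, the first one does.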

The lack of fairness and unequal value of citizen votes go deeper, however.

Small states and white power

Under the Electoral College, your vote is worth less in big states. For instance, Texas, with 28.7 million people and 38 electors, has one elector for every 755,000 people. But Wyoming, with 578,000 people and 3 electors, has one elector for every 193,000 people. In other words, each Wyoming voter has a bigger influence over who wins the presidency than each Texas voter. The 4% of the U.S. population living in the smallest states, for instance, holds 8% of the electors. Why not 4%, to keep votes equal? (For those who think all this was the intent of the Founders, to give more power to smaller states, we’ll address that later on.)

To make things even, Texas would need many more electors. As would other big states. You have to look at changing population data and frequently adjust electors, as the government is supposed to do based on the census and House representation — it just doesn’t do it very well. It would be better to do away with the Electoral College entirely, because under a popular vote the vote of someone from Wyoming would be precisely equal to the vote of a Texan. Each would be one vote out of the 130 million or so cast. No adjustments needed.

It also just so happens that less populous states tend to be very white, and more populous states more diverse, meaning disproportionate white decision-making power overall.

Relatedly, it’s important to note that the political minority in each state, which will become inconsequential to the presidential race, is sometimes dominated by racial minorities, or at least most voters of color will belong to it. As Bob Wing writes, because “in almost every election white Republicans out-vote [blacks, most all Democrats] in every Southern state and every border state except Maryland,” the “Electoral College result was the same as if African Americans in the South had not voted at all.”

Faithless electors

After state residents vote for electors, the electors can essentially vote for whomever they want, in many states at least. “There are 32 states (plus the District of Columbia) that require electors to vote for a pledged candidate. Most of those states (19 plus DC) nonetheless do not provide for any penalty or any mechanism to prevent the deviant vote from counting as cast. Four states provide a penalty of some sort for a deviant vote, and 11 states provide for the vote to be canceled and the elector replaced…”

Now, electors are chosen specifically because of their loyalty, and “faithless electors” are extremely rare, but that doesn’t mean they will always vote for the candidate you elected them to vote for. There have been 85 electors in U.S. history who abstained or changed their vote on a whim, sometimes for racist reasons, sometimes by accident, and so on. Even more changed their votes after a candidate died; perhaps the voters would have liked to select another option themselves. Even if rare, all this should not be possible or legal. It is yet another way the Electoral College has built-in unfairness: imagine the will of a state’s political majority being ignored.

(All this used to be worse, in fact. Early on, some state legislatures appointed electors, meaning whatever party controlled a legislature simply selected people who would pick its favored presidential candidate. How voters cast their ballots did not matter.)

* * *

Won’t a popular vote give too much power to big states and cities?

Let’s turn now to the arguments against a popular vote, usually heard from conservatives. A common one is that big states, or big cities, will “have too much power.” Rural areas and less populous states and towns will supposedly have less.

This misunderstands power. States don’t vote. Cities don’t vote. People do. If we’re speaking solely about power, about influence, where you live does not matter. The vote of someone in Eudora, Kansas, is worth the same as that of someone in New York, New York.

This argument is typically posited by those who think that because some big, populous states like California and New York are liberal, this will mean liberal rule. (Conservative Texas, the second-most populous state, and sometimes-conservative swing states like Florida [third-most populous] and Pennsylvania [fifth-most populous] are ignored.) Likewise, because a majority of Americans today live in cities, and cities tend to be more liberal than small towns, this will result in the same. The concern for rural America and small states is really a concern for Republican power.

But obviously, in a direct election each person’s vote is of equal weight and importance, regardless of where you live. 63% of Americans live in cities, so it is true that most voters will be living and voting in cities, but it cannot be said the small-town voter has a weaker voice than the city dweller. Their votes have identical sway over who will be president. In the same way, a voter in a populous coastal state has no more influence than one in Arkansas.

No conservative looks with dismay at the direct election of his Democratic governor or congresswoman and says, “She only won because the small towns don’t have a voice. We have to find a way to diminish the power of the big cities!” No one complains that X area has too many people and too many liberals and argues some system should fix this. No one cries, “Tyranny of the majority! Mob rule!” They say, “She got the most votes, seems fair.” Why? Because one understands that the vote of the rural citizen is worth the same as the vote of an urban citizen, but if there happens to be more people living in cities in your state, or if there are more liberals in your state, so be it. That’s the freedom to live where you wish, believe what you wish, and have a vote worth the same as everyone else’s.

Think about the popular vote in past elections. About half of Americans vote Republican, about half vote Democrat. One candidate gets a few hundred thousand or a few million more. It would be exactly the same if the popular vote determined the winner rather than the Electoral College; where you live is irrelevant. What matters is the final vote tally.

It’s not enough to simply complain that the United States is too liberal and therefore we must preserve the Electoral College. That’s really what this argument boils down to, and it’s not an argument at all. Unfair structures can’t be justified because they serve one political party. Whoever can win the most American votes should be president, no matter what party they come from.

But won’t candidates only pander to big states and cities?

This is a different question, and it has merit. It is true that where candidates campaign will change with the implementation of a popular vote. Conservatives warn that candidates will spend most of their time in the big cities and big states, and ignore rural places. This is likely true, as candidates (of both parties) will want to reach as many voters as possible in the time they have to garner support.

Yet this carries no weight as an argument against a popular vote, because the Electoral College has a very similar problem. Candidates focus their attention on swing states.

There’s a reason Democrats don’t typically campaign very hard in conservative Texas and Republicans don’t campaign hard in liberal California. Instead, they campaign in states that are more evenly divided ideologically, states that sometimes go Blue and sometimes go Red. They focus also on swing states with a decent number of electors. The majority of campaign events are in just six states. Unless you live in one of these places, like Ohio, Florida, or Pennsylvania, your vote isn’t as vital to victory and your state won’t get as much pandering. The voters in swing states are vastly more important, their votes much more valuable than elsewhere.

Electoral College supporters never explain how candidates focusing on a handful of swing states is so much better than candidates focusing on more populous areas. At worst it seems a fair trade, and with a popular vote we also get the candidate with the most support always winning, votes of equal worth, and no higher-ups to ignore the will of the people.

However, with a significant number of Americans still living outside big cities, attention will likely still be paid to rural voters — especially, one might assume, by the Republican candidate. Nearly 40% of the nation living in small towns and small states isn’t something wisely ignored. Wherever the parties shift most of their attention, there is every reason to think Blue candidates will want to solidify their win by courting Blue voters in small towns and states, and Red candidates will want to ensure theirs by courting Red voters in big cities and states. Even if the rural voting bloc didn’t matter and couldn’t sway the election (it would and could), one might ask how a handful of big states and cities alone determining the outcome of the election is so much worse than a few swing states doing the same in the Electoral College system.

Likewise, the fear that a president, plotting reelection, will better serve the interests of big states and cities seems about as reasonable as fear that he or she would better serve the interests of the swing states today. One is hardly riskier than the other.

But didn’t the Founders see good reason for the Electoral College?

First, it’s important to note that invoking the Founding Fathers doesn’t automatically justify flawed governmental systems. The Founders were not perfect, and many of the policies and institutions they decreed in the Constitution are now gone.

Even before the Constitution, the Founders’ Articles of Confederation were scrapped after just seven years. Later, the Twelfth Amendment got rid of a system where the losing presidential candidate automatically became vice president — a reform of the Electoral College. Our senators were elected by the state legislatures, not by the people, until 1913 (Amendment 17 overturned clauses from Article 1, Section 3 of the Constitution). Only in 1856 did the last state, North Carolina, do away with property requirements to vote for members of the House of Representatives, allowing the poor to participate. The Three-Fifths Compromise (the Enumeration Clause of the Constitution), which valued slaves less than full people for political representation purposes, is gone, and today people of color, women, and people without property can vote thanks to various amendments. There were no term limits for the president until 1951 (Amendment 22) — apparently an executive without term limits didn’t give the Founders nightmares of tyranny.

The Founders were very concerned about keeping political power away from ordinary people, who might take away their riches and privileges. They wanted the wealthy few, like themselves, to make the decisions. See How the Founding Fathers Protected Their Own Wealth and Power.

The Electoral College, at its heart, was a compromise between Congress selecting the president and the citizenry doing so. The people would choose the people to choose the president. Alexander Hamilton wrote that the “sense of the people should operate in the choice of the person to whom so important a trust was to be confided.” He thought “a small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations.”

Yet the Founders did not anticipate that states would pass winner-take-all elector policies, and some wanted the practice abolished. The Constitution and its writers did not establish such a mechanism. States did, and only after the Constitution, which established the Electoral College, was written. In 1789, only three states had such laws, according to the Institute for Research on Presidential Elections. It wasn’t until 1836 that every state (save one, which held out until after the Civil War) adopted a winner-take-all law; they sought more attention from candidates by offering all electors to the victor, they wanted their chosen sons to win more electors, and so forth. Before (and alongside) the winner-take-all laws, states were divided into districts and the people in each district would elect an elector (meaning a state’s electors could be divided up among candidates). Alternatively, state legislatures would choose the electors, meaning citizens did not vote for the president in any way, even indirectly! James Madison wrote that “the district mode was mostly, if not exclusively in view when the Constitution was framed and adopted; & was exchanged for the general ticket [winner-take-all] & the legislative election” later on. He suggested a Constitutional amendment (“The election of Presidential Electors by districts, is an amendment very proper to be brought forward…”) and Hamilton drafted it.

Still, among Founders and states, it was an anti-democratic era. Some Americans prefer more democratic systems, and don’t cling to tradition — especially tradition as awful and unfair as the Electoral College — for its own sake. Some want positive changes to the way government functions and broadened democratic participation, to improve upon and make better what the Founders started, as we have so many times before.

Now, it’s often posited that the Founding Fathers established the Electoral College to make sure small states had more power to determine who won the White House. As we saw above, votes in smaller states are worth more than in big ones.

Even if the argument that “we need the Electoral College so small states can actually help choose the president” made sense in a bygone era where people viewed themselves as Virginians or New Yorkers, not Americans (but rather as part of an alliance called the United States), it makes no sense today. People now see themselves as simply Americans — as American citizens together choosing an American president. Why should where you live determine the power of your vote? Why not simply have everyone’s vote be equal?

More significantly, it cannot be said that strengthening smaller states was a serious concern to the Founders at the Constitutional Convention. They seemed to accept that smaller states would simply have fewer voters and thus less influence. Legal historian Paul Finkelman writes that

in all the debates over the executive at the Constitutional Convention, this issue [of giving more power to small states] never came up. Indeed, the opposite argument received more attention. At one point the Convention considered allowing the state governors to choose the president but backed away from this in part because it would allow the small states to choose one of their own.

In other words, they weren’t looking out for the little guy. Political scientist George C. Edwards III calls this whole idea a “myth,” stressing: “Remember what the country looked like in 1787: The important division was between states that relied on slavery and those that didn’t, not between large and small states.”

Slavery’s influence

The Electoral College is also an echo of white supremacy and slavery.

As the Constitution was formed in the late 1780s, Southern politicians and slave-owners at the Convention had a problem: Northerners were going to get more seats in the House of Representatives (which were to be determined by population) if blacks weren’t counted as people. Southern states had sizable populations, but large portions were disenfranchised slaves and freemen (South Carolina, for instance, was nearly 50% black).

This prompted slave-owners, most of whom considered blacks by nature somewhere between animals and whites, to push for slaves to be counted as fully human for political purposes. They needed blacks for greater representative power for Southern states. Northern states, also seeking an advantaged position, opposed counting slaves as people. This odd reversal brought about the Three-Fifths Compromise most of us know, which determined a slave would be counted as three-fifths of a person for representation purposes.

The Electoral College was largely a solution to the same problem. True, as we saw, it served to keep power out of the hands of ordinary people and in the hands of the elites, but race and slavery unquestionably influenced its inception. As the Electoral College Primer put it, Southerners feared “the loss in relative influence of the South because of its large nonvoting slave population.” They were afraid the direct election of the president would put them at a numerical disadvantage. To put it bluntly, Southerners were upset their states didn’t have more white people. A popular vote had to be avoided.

For example, Hugh Williamson of North Carolina remarked at the Convention, during debate on a popular election of the president: “The people will be sure to vote for some man in their own State, and the largest State will be sure to succede [sic]. This will not be Virga. however. Her slaves will have no suffrage.” Williamson saw that states with high populations had an advantage in choosing the president. But a great number of people in Virginia were slaves. Would this mean that Virginia and other slave states didn’t have the numbers of whites to affect the presidential election as much as the large Northern states?

The writer of the Constitution, slave-owner and future American president James Madison, thought so. He said that

There was one difficulty however of a serious nature attending an immediate choice by the people. The right of suffrage was much more diffusive in the Northern than the Southern States; and the latter could have no influence in the election on the score of the Negroes. The substitution of electors obviated this difficulty…

The question for Southerners was: How could one make the total population count for something, even though much of the population couldn’t vote? How could black bodies be used to increase Southern political power? Counting slaves helped put more Southerners in the House of Representatives, and now counting them in the apportionment of electors would help put more Southerners in the White House.

Thus, Southerners pushed for the Electoral College. The number of electors would be based on how many members of Congress each state possessed, which, recall, was affected by counting a black American as three-fifths of a person. Each state would have one elector per representative in the House, plus two for the state’s two senators (today we have 435 + 100 + 3 for D.C. = 538). In this way, the number of electors was still based on population (not the whole population, though, as blacks were not counted as full persons), even though a massive part of the American population in 1787 could not vote. The greater a state’s population, the more House reps it had, and thus the more electors it had. Southern electoral power was secure.

This worked out pretty well for the racists. “For 32 of the Constitution’s first 36 years, a white slaveholding Virginian occupied the presidency,” notes Akhil Reed Amar. The advantage didn’t go unnoticed. Massachusetts congressman Samuel Thatcher complained in 1803, “The representation of slaves adds thirteen members to this House in the present Congress, and eighteen Electors of President and Vice President at the next election.”

Tyrants and imbeciles

At times, it’s suggested that the electors serve an important function: if the people select a dangerous or unqualified candidate — like an authoritarian or a fool — to be the party nominee, the electors can pick someone else and save the nation. Hamilton said, “The process of election affords a moral certainty, that the office of President will never fall to the lot of any man who is not in an eminent degree endowed with the requisite qualifications.”

Obviously, looking at Donald Trump, the Electoral College is just as likely to put an immoral doofus in the White House as to keep one out. Trump may not fit that description to you, but some day a candidate may come along who does. And since the electors are chosen for their loyalty, they are unlikely to stop such a candidate, even if they have the power to be faithless. We might as well simply let the people decide.

It is a strange thing indeed that some people insist a popular vote will lead to dictatorship, ignoring the majority of the world’s democracies that directly elect their executive officer. They have not plunged into totalitarianism. A popular vote simply doesn’t get rid of checks and balances, co-equal branches, a constitution, the rule of law, and other aspects of free societies. These things are not incompatible.

France has had direct elections since 1965 (de Gaulle). Finland since 1994 (Ahtisaari). Portugal since 1918 (Pais). Poland since 1990 (Wałęsa). Why aren’t these nations run by despots by now? Why do even conservative institutes rank nations like Ireland, Finland, and Austria higher up on a “Human Freedom Index” than the United States? How is this possible, if direct elections of the executive lead to tyranny?

There are many factors that cause dictatorship and ruin, but simply giving the White House to whomever gets the most votes is not necessarily one of them.

Modern motives

We close by stating the obvious. There remains strong support for the Electoral College among conservatives because it has recently aided Republican candidates like Bush (2000) and Trump (2016). If the GOP were the party losing presidential elections through the Electoral College after winning the popular vote, as the Democrats have, it would perhaps see the system’s unfair nature.

The popular vote, in an increasingly diverse, liberal country, doesn’t serve conservative interests. Republicans have won the popular vote just once since and including the 1992 election. Conservatives are worried that if the Electoral College vanishes and each citizen has a vote of equal power, their days are numbered. Better to preserve an outdated, anti-democratic system that benefits you than reform your platform and policies to change people’s minds about you and strengthen your support. True, the popular vote may serve Democratic interests. Fairness serves Democratic interests. But, unlike unfairness, which Republicans seek to preserve, fairness is what’s right. Giving the candidate with the most votes the presidency is what’s right.

For more from the author, subscribe and follow or read his books.

An Absurd, Fragile President Has Revealed an Absurd, Fragile American System

The FBI investigation into Donald Trump, one of the most ludicrous and deplorable men to ever sit in the Oval Office, was a valuable lesson in just how precariously justice balances on the edge of a knife in the United States. The ease with which any president could obstruct or eliminate accountability for his or her misdeeds should frighten all persons regardless of political ideology.

Let’s consider the methods of the madness, keeping in mind that whether or not a specific president like Trump is innocent of crimes or misconduct, it’s smart to have effective mechanisms in place to bring future executives who are guilty to justice. The stupidity of the system could be used by a president of any political party. This must be rectified.

A president can fire those investigating him — and replace them with allies who could shut everything down

The fact the above statement can be written truthfully about an advanced democracy, as opposed to some totalitarian regime, is insane. Trump of course did fire those looking into his actions, and replaced them with supporters.

The FBI (not the Democrats) launched an investigation into Trump and his associates concerning possible collusion with Russia during the 2016 election and obstruction of justice, obviously justified given his and their suspicious behavior, some of which was connected to actual criminal activity, at least among Trump’s associates who are now felons. Trump fired James Comey, the FBI director, who was overseeing the investigation. Both Trump and his attorney Rudy Giuliani publicly indicated the firing was motivated by the Russia investigation; Comey testified Trump asked him to end the FBI’s look into Trump ally Michael Flynn, though not the overall Russia inquiry.

The power to remove the FBI director could be used to slow down an investigation (or shut it down, if the acting FBI director is loyal to the president, which Andrew McCabe was not), but more importantly a president can then nominate a new FBI director, perhaps someone more loyal, meaning corrupt. (Christopher Wray, Trump’s pick, worked for a law firm that did business with Trump’s business trust, but does not seem a selected devotee like the individuals you will see below, perhaps because by the time his installation came around the investigation was in the hands of Special Counsel Robert Mueller.) The Senate must confirm the nomination, but that isn’t entirely reassuring. The majority party could push through a loyalist, to the dismay of the minority party, and that’s it. Despite this being a rarity, as FBI directors are typically overwhelmingly confirmed, it’s possible. A new director could then end the inquiry.

Further, the president can fire the attorney general, the FBI director’s boss. The head of the Justice Department, this person has ultimate power over investigations into the president — at least until he or she is removed by said president. Trump made clear he was upset with Attorney General Jeff Sessions for recusing himself from overseeing the Russia inquiry because Sessions could have discontinued it. It was reported Trump even asked Sessions to reverse this decision! Sessions recused himself less than a month after taking office, a couple months before Comey was fired. For less than a month, Sessions could have ended it all.

Deputy Attorney General Rod Rosenstein, luckily no Trump lackey, was in charge after Sessions stepped away from the matter. It was Rosenstein who appointed Robert Mueller special counsel and had him take over the FBI investigation, after the nation was so alarmed by Comey’s dismissal. Rosenstein had authority over Mueller and the case (dodging a bullet when Trump tried to order Mueller’s firing but was rebuked by his White House lawyer; Trump could have rescinded the Justice Department regulations that said only the A.G. could fire the special counsel, with an explosive court battle over constitutionality surely following) until Trump fired Sessions and installed loyalist Matt Whitaker as Acting Attorney General. Whitaker is a man who

defended Donald Trump Jr.’s decision to meet with a Russian operative promising dirt on Hillary Clinton. He opposed the appointment of a special counsel to investigate Russian election interference (“Hollow calls for independent prosecutors are just craven attempts to score cheap political points and serve the public in no measurable way.”) Whitaker has called on Rod Rosenstein to curb Mueller’s investigation, and specifically declared Trump’s finances (which include dealings with Russia) off-limits. He has urged Trump’s lawyers not to cooperate with Mueller’s “lynch mob.”

And he has publicly mused that a way to curb Mueller’s power might be to deprive him of resources. “I could see a scenario,” he said on CNN last year, “where Jeff Sessions is replaced, it would [be a] recess appointment and that attorney general doesn’t fire Bob Mueller but he just reduces his budget to so low that his investigation grinds to almost a halt.”

Whitaker required no confirmation from the Senate. Like an official attorney general, he could have ended the inquiry and fired Robert Mueller if he saw “good cause” to do so, or effectively crippled the investigation by limiting its resources or scope. That did not occur, but it’s not hard to imagine Whitaker parroting Trump’s wild accusations of Mueller’s conflicts of interest, or whipping up some bullshit of his own to justify axing the special counsel. The same can be said of Bill Barr, who replaced Whitaker. Barr, who did need Senate confirmation, was also a Trump ally, severely endangering the rule of law:

In the Spring of 2017, Barr penned an op-ed supporting the President’s firing Comey. “Comey’s removal simply has no relevance to the integrity of the Russian investigation as it moves ahead,” he wrote. In June 2017, Barr told The Hill that the obstruction investigation was “asinine” and warned that Mueller risked “taking on the look of an entirely political operation to overthrow the president.” That same month, Barr met with Trump about becoming the president’s personal defense lawyer for the Mueller investigation, before turning down the overture for that job.

In late 2017, Barr wrote to the New York Times supporting the President’s call for further investigations of his past political opponent, Hillary Clinton. “I have long believed that the predicate for investigating the uranium deal, as well as the foundation, is far stronger than any basis for investigating so-called ‘collusion,’” he wrote to the New York Times’ Peter Baker, suggesting that the Uranium One conspiracy theory (which had by that time been repeatedly debunked) had more grounding than the Mueller investigation (which had not). Before Trump nominated him to be attorney general, Barr also notoriously wrote an unsolicited 19-page advisory memo to Rod Rosenstein criticizing the obstruction component of Mueller’s investigation as “fatally misconceived.” The memo’s criticisms proceeded from Barr’s long-held and extreme, absolutist view of executive power, and the memo’s reasoning has been skewered by an ideologically diverse group of legal observers.

What happy circumstances, Trump being able to shuffle the investigation into his own actions to his first hand-picked attorney general (confirmation to recusal: February 8 to March 2, 2017), an acting FBI director (even if not an ally, the act itself is disruptive), a hand-picked acting attorney general, and a second hand-picked attorney general. Imagine police detectives are investigating a suspect but he’s their boss’ boss. That’s a rare advantage.

The nation held its breath with each change, and upon reflection it seems almost miraculous Mueller’s investigation concluded at all. Some may see this as a testament to the strength of the system, but it all could have easily gone the other way. There were no guarantees. What if Sessions hadn’t recused himself? What if he’d shut down the investigation? What if Comey, McCabe, or Rosenstein had been friendlier to Trump? What if Whitaker or Barr had blown the whole thing up? Yes, political battles, court battles, to continue the inquiry would have raged — but there are no guarantees they would have succeeded.

Tradition, political and public pressure…these mechanisms aren’t worthless, but they hardly seem as valuable as structural, legal changes to save us from having to simply hope the pursuit of justice doesn’t collapse at the command of the accused or his or her political allies. We can strip the president of any and all power over the Justice Department workers investigating him or her, temporarily placing A.G.s under congressional authority, and eradicate similar conflicts of interest.

The Department of Justice can keep its findings secret

Current affairs highlighted this problem as well. When Mueller submitted his finished report to Bill Barr, the attorney general was only legally required to submit a summary of Mueller’s findings to Congress. He did not need to provide the full report or full details to the House and Senate, much less to the public. He didn’t even need to release the summary to the public!

This is absurd, obviously setting up the possibility that a puppet attorney general might not tell the whole story in the summary to protect the president. Members of Mueller’s team are currently saying to the press that Barr’s four-page summary is too rosy, leaving out damaging information about Trump. The summary says Mueller found no collusion (at least, no illegal conspiring or coordinating), and that Barr, Rosenstein, and other department officials agreed there wasn’t enough evidence of obstruction of justice. But one shouldn’t be forced to give a Trump ally like Barr the benefit of the doubt; one should be able to see the evidence to determine if he faithfully expressed Mueller’s findings and hear detailed arguments as to how he and others reached a verdict on obstruction. Barr is promising a redacted version of the report will be available this month. He did not have to do this — we again simply had to hope Barr would give us more. Just as we must hope he can be pressured into giving Congress the full, unedited report. This must instead be required by law, and the public is at least owed a redacted version. Hope is unacceptable. It would also be wise to find a more independent, bipartisan or nonpartisan way to rule on obstruction if the special counsel declines to do so — perhaps done in a court of law, rather than a Trump lackey’s office.

The way of doing things now is simply a mess. What if an A.G. is untruthful in his summary? Or wants only Congress to see it, not the public? What if she declines to release a redacted version? What if the full report is never seen beyond the investigators and their Justice Department superiors, appointed supporters of the president being investigated? What if a ruling on obstruction is politically motivated?

We don’t know if the president can be subpoenaed to testify

While the Supreme Court has established that the president can be subpoenaed, or forced, to turn over materials (such as Nixon and his secret White House recordings), it hasn’t specifically ruled on whether the president must testify before Congress, a special counsel, or a grand jury if called to do so. While the president, like any other citizen, has Fifth Amendment rights (he can’t be “compelled in any criminal case to be a witness against himself,” risking self-incrimination), we do need to know if the executive can be called as a witness, and under what circumstances. Mueller chose not to subpoena Trump’s testimony because it would lead to a long legal battle. That’s what unanswered questions and constitutional crises produce.

We have yet to figure out if a sitting president can be indicted

If the executive commits a crime, can he or she be charged for it while in office? Can the president go to trial, be prosecuted, sentenced, imprisoned? We simply do not know. The Office of Legal Counsel at the Justice Department says no, but there is fierce debate over whether it’s constitutional or not, and the Supreme Court has never ruled on the matter.

There’s been much worry lately, due to Trump’s many legal perils, over this possible “constitutional crisis” arising, a crisis of our own design, having delayed creating laws for this sort of thing for centuries. For now, the trend is to follow Justice Department policy, rather helpful for a president who’s actually committed a felony. The president can avoid prosecution and punishment until leaving office or even avoid it entirely if the statute of limitations runs out before the president’s term is over!

“Don’t fret, Congress can impeach a president who seems to have committed a crime. Out of office, a trial can commence.” That is of little comfort, given the high bar for impeachment. Bitter partisanship could easily prevent the impeachment of a president, no matter how obvious or vile the misdeeds. It’s not a sure thing.

The country needs to rule on this issue, at the least eliminating statutes of limitations for presidents, at most allowing criminal proceedings to occur while the president is in office.

We don’t know if a president can self-pardon

Trump, like the blustering authoritarian he is, declared he had the “absolute right” to pardon himself. But the U.S. has not figured this out either. It’s also a matter of intense debate, without constitutional clarity or judicial precedent. A sensible society might make it clear that the executive is not above the law — he or she cannot simply commit crimes with impunity, cannot self-pardon. Instead, we must wait for a crisis to force us to decide on this issue. And, it should be emphasized, the impeachment of a president who pardoned him- or herself would not be satisfactory. Crimes warrant consequences beyond “You don’t get to be president anymore.”

Subpoenas can be optional

If you declined to show up in court after being issued a subpoena, you would be held in contempt. You’d be fined or jailed because you broke the law. It’s supposed to work a similar way when congressional committees issue subpoenas, instructing people to come testify or produce evidence. It is illegal to ignore a subpoena from Congress. Yet Trump has ordered allies like Carl Kline and Don McGahn to do just that, vowing to “fight all the subpoenas.” Leading Republican legislators like Lindsey Graham and Jim Jordan encouraged Donald Trump Jr. to ignore his subpoena. Barr waved away his subpoena to give Congress the full Mueller report. Various other officials have ignored their summonses as well.

When an individual does this, the congressional committee and then the whole house of Congress (either the Senate or the House of Representatives, not both) must vote on holding the individual in contempt.

Which means that the seriousness of a subpoena depends upon the majority party in a house of Congress. If it’s not in the interest of, say, a Republican Senate to hold a Republican official in contempt after he refused to answer a subpoena in an investigation (maybe of a Republican president), then that’s that. There is no consequence for breaking the law and ignoring the order to appear or provide evidence. As long as you’re on the side of the chamber majority, you can throw the summons from the committee in the trash. (This isn’t the case with Trump, as the Democrats control the House and are thus able to hold someone in contempt, but the utter disregard for subpoenas Trump and others showed raised the question of what happens next, revealing this absurd system to this writer and others. If a chamber does hold someone in contempt, there are a few options going forward to jail or fine said person, one of which has a similar debilitating partisan wrench.) We should construct a system where breaking the law has consequences no matter who holds majority power, perhaps by giving committees more control over conviction and enforcement or by handing things over to the judicial system earlier, to prevent that behavior and allow investigations to actually operate.

For more from the author, subscribe and follow or read his books.

Headlines From the United States of Atheism

Alternatively titled: If You Get Why Government Favoring and Promoting Atheism or Another Faith is Unacceptable, You Get Why the Same is True For Christianity. Lest the following satire be misunderstood, let’s state this plainly. All people have the right to believe what they like, and promote it — unless you’re on the clock as a public employee or trying to use public institutions. Government needs to be neutral on matters of faith, not favoring or promoting one or any religion, nor atheism. To be neutral isn’t to back or advocate for disbelief, which is what happens in these fictional, hopefully thought-provoking headlines. It’s simply to acknowledge that this is a country and government for all people, not just those of the Judeo-Christian tradition. Not all students, customers, or constituents are Christians or even people of faith. Freedom from religion is just as important as freedom of religion, which is why church and State are kept separate. If you wouldn’t want government used to push atheism, Islam, and so forth in any way, whether through its employees, institutions, laws, or creations, then you get it.

Florida Bill Requires Public Schools to Offer Elective on Atheist Classics. Why No Required Electives For Holy Books?

“God is Dead” to Appear on U.S. Currency Next Year

Christian Student Refuses to Stand For Pledge, Objecting to “One Nation, Godless” Line

Supreme Court Yet to Rule on Whether Refusing Service to Christians Based on Religious Belief is Discrimination

Why Does the Law Still Say You Can’t Be Fired For Being Gay, But Can For Being a Person of Faith?

Coach Lectures Players About How God is Fictional and Can’t Help Them Before Every Game

Little-known Last Verse of National Anthem Reads: “And Faith is a Bust” 

Believers Bewildered as to Why Students Are Studying Science and Evolution in Religion Class  

Sam Harris One of Six Selected to Lead Traditional Refutations of God’s Existence at Presidential Inauguration 

Lawmakers Want “The God Delusion” as This State’s Official Book 

Christians Fight to be Allowed to Give Invocations at the Legislature Too; Many Americans Wonder Why Invocations Are Necessary

Bonus: Headlines From the United States of Allah

Oklahoma Legislature Votes to Install Qur’an Monument on Capitol Grounds 

States Are Requiring or Allowing “Allahu Akbar” to be Displayed in Public Schools

Muslims Object to Removal of Big Crescent and Star From Public Park

With U.S. Supreme Court Oblivious, Alabama Ditches Rule That Death Row Inmates Can Only Have Imam With Them at the End

For more from the author, subscribe and follow or read his books.