‘Obi-Wan Kenobi’ Is Peak Lazy Writing

The Obi-Wan Kenobi finale is out, and the show can be awarded a 6/10, perhaps 6.5. This is not a dreadful score, but it isn’t favorable either. I give abysmal films or shows with no redeeming qualities a 1 or 2, though this is extremely rare; bad or mediocre ones earn a 3-5; a 6 is watchable and even enjoyable but not that great, a 7 is a straight-up good production, an 8 is great, and a 9-10 is rare masterpiece or perfection territory. The ranking encompasses everything: was it an interesting, original, sensible story? Do you care about what happens to the characters, whether evil or heroic or neutral? Were the acting, music, pacing, special effects, cinematography, and editing competent? Was the dialogue intelligent or was it painful and clichéd? Did they foolishly attempt a CGI human face? And so on.

Understanding anyone’s judgement of a Star Wars film or show requires knowing how it compares to the others, so consider the following rankings, which have changed here and there over the years but typically not by much. I judge A New Hope and The Empire Strikes Back to be 10s. Return of the Jedi earns a 9, docked primarily for the ridiculous “plan” to save Han Solo from Jabba the Hutt that involves everyone getting captured, and for recycling a destroy-the-Death-Star climax. The Mandalorian (seasons 1-2), The Force Awakens, and Solo hover at about 7 for me. Solo is often unpopular, but I enjoyed its original, small-scale, train-robbery Western story, which preceded The Mandalorian. The Force Awakens created highly lovable characters, but lost most of its points for simply remaking A New Hope. Rogue One is a 6 (bland characters, save one droid), The Last Jedi (review here) is a 5, Revenge of the Sith a 4.5, and The Phantom Menace, Attack of the Clones, and The Rise of Skywalker earn 4s if I’m in a pleasant mood, usually 3.5s. It’s an odd feeling, giving roughly the same rank to the prequels and sequels. They’re both bad for such different reasons. The former had creative, new stories, and there’s a certain innocence about them — but mostly dismal dialogue, acting, and characters (Obi-Wan Kenobi was, in Episodes II and III, a welcome exception). The sequels, at least in the beginning, had highly likable characters, good lines, and solid acting, but were largely dull copy-pastes of the original films. One trilogy had good ideas and bad execution, the other bad ideas and competent execution. One can consult Red Letter Media’s hilarious Mr. Plinkett reviews of the prequels and sequels to fully understand why I find them so awful.

Kenobi was actually hovering at nearly a 7 for me until the end of episode three. Ewan McGregor, as always, is wonderful, little Leia is cute enough, Vader is hell-bent on revenge — here are characters we can care about. The pace was slow and thoughtful, a small-scale kidnapping/rescue story. If you could ignore the fact that Leia doesn’t seem to know Kenobi personally in A New Hope, and that a Vader-Kenobi showdown now somewhat undermines the importance of their fight in that film, things were as watchable and worthwhile as a Mandalorian episode. Some lines and acting weren’t perfect, but a plot was forming nicely. I have become increasingly burnt out on and bored by Star Wars, between the bad productions and the franchise having nothing new to say (rebels v. empire, Sith v. Jedi, blasters and lightsabers, over and over and over again), but maybe we’d have a 7 on our hands by the end.

Then the stupid awakened.

At the end of part three, Vader lights a big fire in the desert and Force-pulls Kenobi through it. He then puts out the fire with the Force for some reason. Soon a woman and a droid rescue Kenobi by shooting into the fuel Vader had used, starting a slightly bigger fire between protagonist and antagonist. Vader is now helpless to stop the slow-moving droid from picking up Kenobi and lumbering away. He doesn’t walk around the fire (this would have taken five seconds; it’s truly not that big). He doesn’t put out the flames as he did before (I guess 30% more fire is just too much for him). He doesn’t Force-pull Kenobi back to him again. He just stares stupidly as the object of all his rage, whom he obsessively wants to torture and kill, gets slowly carried off (we don’t actually see the departure, as that would have highlighted the absurdity; the show cuts).

This is astonishingly bad writing. It’s so bad one frantically tries to justify it. Oh, Vader let him escape, all part of the plan. This of course makes no sense (they’ve been looking for Kenobi for ten years, so the risk of him evading a second capture is massive; it’s established that Vader’s goal is to find him and enact revenge, not enjoy the thrill of the hunt; and it’s never hinted at before or confirmed later that this was intentional). The simpler explanation is probably the correct one: it’s just braindead scene construction. Vader and Kenobi have to be separated, after all. Otherwise Kenobi is history and the show’s over. There are a thousand better ways to rescue Kenobi here, but if you’re an idiot you won’t even think of them — or if you don’t care, and don’t respect the audience, you won’t bother. (It’s very much like in The Force Awakens when Rey and Kylo are dueling and the ground beneath them splits apart, as the planet is crumbling, creating a chasm that can conveniently stop the fight — only it’s a million times worse. Now, compare all this to Luke and Vader needing to be separated in Empire. Rather than being caught or killed, Luke lets go of the tower with the only hand he has left and chooses to fall to his death. That’s a good separation. It’s driven by a character with agency and morals. It’s not a convenient Act of God or a suddenly neutered character, someone who doesn’t do what he just did a minute ago for no reason.)

Bad writing is when characters begin following the script, rather than the story being powered by the motivations of the characters. Had the characters’ wants, needs, decisions, actions, and abilities determined the course of events — like in real life — Vader would have put out the flames a second time, he and his twenty stormtroopers would have easily handled one droid and one human rescuer, and Obi-Wan would have been toast. But I guess Disney gave Vader the script. “Oh, I can’t kill him now, there’s three more episodes of this thing, plus A New Hope.” So he stood there staring through the flames like an imbecile.

Anyone who doubts this was bad writing simply needs to continue watching the show. Because the eighth grader crafting the story continues to sacrifice character realism at the altar of the screenplay.

In episode five, Vader uses the Force to stop a transport in mid-air. He slams it on the ground and tears off its doors to get to Kenobi. But surprise, it was a decoy! A second transport right next to this one takes off and blasts away. Vader is dumbfounded. Why does he not use the Force to stop this one? “Well, it was like 40 meters farther away.” “Well, he was surprised, see. And they got out of there quick.” OK, I guess. All this time I thought Vader was supposed to be powerful. It’s crucial to have limits to Force powers, and all abilities, but this is a pretty fine line between doable and impossible. “I can run a mile, but 1.1 will fucking kill me.” It’s strange fanboys would wildly orgasm over Vader’s awesome power to wrench a ship from the air and then excuse his impotence. Either we’re seeing real fire-size and ship-distance challenges Vader can’t meet or the writing here is just sub-par. There are other, more realistic ways to get out of this jam. At least when Kenobi and Leia had to escape the bad guys in the prior episode, snow speeders came along and shot at the baddies (though don’t get me started on how three people fit into a snow speeder cockpit designed for one).

But that’s not even the worst of it. Minutes later two characters violate their motivations. In this episode, it is revealed Third Sister Reva is out to kill Vader, a smart twist and good character development. She attempts to assassinate him, but he runs her through with a lightsaber. Then the Grand Inquisitor, whom Reva had run through in an earlier episode, appears. (How did he survive this? You think the show is going to bother to say? Of course it doesn’t. The writers don’t care. Alas, lightsabers suddenly seem far less intimidating.) Vader and the Grand Inquisitor decide to leave her “in the gutter.” They do not finish the kill; they simply walk away. Darth Vader, who snaps necks when you lose ships on radar or accidentally alert the enemy to your presence, doesn’t kill someone who tried to assassinate him! The Grand Inquisitor was essentially assassinated by Reva — wouldn’t he want some revenge for being stabbed through the stomach and out the spine with a lightsaber? “Oh, they’re just leaving her to die” — no. The Grand Inquisitor didn’t die, remember? He and Vader certainly remember; it just happened. To be kabobbed in this universe isn’t necessarily fatal (naturally, Reva survives, again without explanation). Is it all just a master plan to inspire Reva to go do or be something? Or is it bad writing, with Reva needing to be shown mercy by Sith types because she’s still in the show?

Happily, the Kenobi finale was strong. It was emotional and sweet, and earns a ranking similar to the first couple episodes. Consternation arises, of course, when Vader buries Kenobi under a mountain of rocks and then walks away! Wouldn’t you want to make sure he’s dead? Can’t you feel his presence when he’s close by and alive? Fortunately, this was not the end of their battle. Kenobi breaks out and attacks Vader. This time their separation makes sense given character traits — Kenobi wounds Vader and, being a good person who never wanted to kill his old apprentice, walks away. Similarly, Reva over on Tatooine tries to kill Luke (though it’s not fully clear why — she’s been left for dead by Vader, then finds out Luke and Obi-Wan have some sort of relationship, so she decides to kill the boy to…hurt Obi-Wan? To please Vader because she hurt Obi-Wan or killed a Force-sensitive child?). Luke escapes death not through some stupid deus ex machina or Reva acting nonsensically. Though Reva appears to be untroubled by torturing Leia earlier on, a real missed opportunity by the filmmakers, we at least understand that, as a youngling who was almost slaughtered by a Sith, she might hesitate to do the same to Luke.

In conclusion, series that blast the story in a direction characters must follow, even in out-of-character ways, will always suffer. As another example, The Walking Dead, in addition to forgetting to have a main character after a while and in general overstaying its welcome, was eventually infected with this. (There’s no real reason for all the main characters to cram into an RV to get Maggie to medical care in season 6, leaving their town defenseless; but the writers wanted them to all be captured by Negan for an exciting who-did-he-kill cliffhanger. There’s no reason Carl doesn’t gun Negan down when he has the chance in season 7, as he planned to do, right after proving his grit by massacring Negan’s guards; but Negan is supposed to be in future episodes.) Obviously, other Star Wars outings have terrible writing (and are worse overall productions), from Anakin and Padmé’s love confession dialogue or sand analysis in Attack of the Clones…to The Rise of Skywalker’s convenient finding of MacGuffins that reveal crucial information…to the creatively bankrupt plagiarism of the sequels. But I do not believe I have ever seen a show like Kenobi, one that puts heroes in a jam — a dramatic height, a climax — and so lazily and carelessly gets them out of it.

For more from the author, subscribe and follow or read his books.

Did U.S. Policing Evolve from Slave Patrols? Well…Sort Of

“How American Policing Started with Carolina Slave Catchers” and similar headlines need asterisks. There are big elements of truth in them, but also a betrayal of the nuance found in the historical scholarship on which they are based. There is also the problem of a lack of context, which perhaps inappropriately electrifies meaning. American policing starting with slave patrols is a powerful idea, but does it become less so when, for example, we study what policing looked like around the globe — and in the American colonies — before slave patrols were first formed in the early 18th century?

Obviously, permanent city forces tasked with enforcing laws and maintaining order have existed around the world since ancient times. There was a police unit in Rome established by the first emperor, China had its own forms of policing long before Western influence, and so on. As human communities grew larger, more complex systems (more personnel, permanent bodies, compensation, training, weaponry) were deemed necessary to prevent crime and capture criminals.

Small bands and villages could use simpler means to address wrongdoing. In traditional societies, which were kin-based, chiefs, councils, or the entire community ran the show, one of unwritten laws and intimate mediation or justice procedures. Larger villages and towns where non-kin lived and worked together typically established groups of men to keep order; for example, “among the first public police forces established in colonial North America were the watchmen organized in Boston in 1631 and in New Amsterdam (later New York City) in 1647. Although watchmen were paid a fee in both Boston and New York, most officers in colonial America did not receive a salary but were paid by private citizens, as were their English counterparts.” There were also constables and sheriffs in the 1630s. True, American society was a slave society virtually from its beginning, but similar groups were formed elsewhere before the African slave trade began under the Portuguese in the 16th century. There were “patrolmen, sergeants and constables” on six-month contracts in Italy in the 14th and 15th centuries. There were sheriffs, constables, and coroners (who investigated deaths) in England in medieval times. Before the 1500s, armed men paid (whether by individuals or government) to prevent and respond to trouble in cities had been around in the West for about 4,500 years — as well as in China, African states, and elsewhere (India, Japan, Palestine, Persia, Egypt, the Islamic caliphates, and so on).

This is not to build a straw man. One might retort: “The argument is that modern policing has its roots in slave patrols.” Or “…modern, American policing…” Indeed, that is often the way it is framed, with the “modern” institution having its “origins” in the patrolling groups that began in the first decade of the 1700s.

But the historians cited to support this argument are actually more interested in showing how slave patrols were one (historically overlooked) influence among many influences on the formation of American police departments — and had the greatest impact on those in the South. A more accurate claim would be that “modern Southern police departments have roots in slave patrols.” This can be made more accurate still, but we will return to that shortly.

Crime historian Gary Potter of Eastern Kentucky University published a popular 2013 writing that contains a paragraph on this topic, a good place to kick things off:

In the Southern states the development of American policing followed a different path. The genesis of the modern police organization in the South is the “Slave Patrol” (Platt 1982). The first formal slave patrol was created in the Carolina colonies in 1704 (Reichel 1992). Slave patrols had three primary functions: (1) to chase down, apprehend, and return to their owners, runaway slaves; (2) to provide a form of organized terror to deter slave revolts; and, (3) to maintain a form of discipline for slave-workers who were subject to summary justice, outside of the law, if they violated any plantation rules. Following the Civil War, these vigilante-style organizations evolved in[to] modern Southern police departments primarily as a means of controlling freed slaves who were now laborers working in an agricultural caste system, and enforcing “Jim Crow” segregation laws, designed to deny freed slaves equal rights and access to the political system.

Here the South is differentiated from the rest of the nation — it “followed a different path.” This echoes others, such as the oft-cited Phillip Reichel, a criminologist at the University of Northern Colorado. His important 1988 work argued slave patrols were a “transitional,” evolutionary step toward modern policing. For example, “Unlike the watches, constables, and sheriffs who had some nonpolicing duties, the slave patrols operated solely for the enforcement of colonial and State laws.” But that was not to deny that other factors, beyond the South and beyond patrols, also molded the modern institution. It’s simply that “the existence of these patrols shows that important events occurred in the rural South before and concurrently with events in the urban North that are more typically cited in examples of the evolution of policing in the United States.” In his 1992 paper, “The Misplaced Emphasis on Urbanization and Police Development,” Reichel again seeks to show not that slave patrols were the sole root of U.S. policing, but that they need to be included in the discussion:

Histories of the development of American law enforcement have traditionally shown an urban‐North bias. Typically ignored are events in the colonial and ante‐bellum South where law enforcement structures developed prior to and concurrently with those in the North. The presence of rural Southern precursors to formal police organizations suggests urbanization is not a sufficient explanation for why modern police developed. The argument presented here is that police structures developed out of a desire by citizens to protect themselves and their property. Viewing the development of police in this manner avoids reference to a specific variable (e.g., urbanization) which cannot explain developments in all locations. In some places the perceived need to protect persons and property may have arisen as an aspect of urbanization, but in others that same need was in response to conditions not at all related to urbanization. 

In other words, different areas of the nation had different conditions that drove the development of an increasingly complex law enforcement system. A common denominator beyond the obvious protection of the person, Reichel argues, was protection of property, whether slaves in the South or mercantile/industrial interests in the North, unique needs Potter explores as well.

Historian Sally Hadden of Western Michigan University, cited frequently in articles as well, is likewise measured. Her seminal Slave Patrols: Law and Violence in Virginia and the Carolinas makes clear that Southern police continued tactics of expired slave patrols (such as “the beat,” a patrol area) and their purpose, the control of black bodies. But, given that Hadden is a serious historian and that her work focuses on a few Southern states, one would be hard-pressed to find a statement that positions patrols as the progenitor of contemporary policing in the U.S. (In addition, the Klan receives as much attention, if not more, as a descendant of patrols.) Writing in 2001, she complains, like other scholars, that “most works in the history of crime have focused their attention on New England, and left the American south virtually untouched.” She even somewhat cautions against the connections many articles make today between patrol violence and 21st century police violence (how one might affect the other, rather than both simply being effects of racism, is for an article of its own):

Many people I have talked with have jumped to the conclusion that patrolling violence of an earlier century explains why some modern-day policemen, today, have violent confrontations with African Americans. But while a legacy of hate-filled relations has made it difficult for many African Americans to trust the police, their maltreatment in the seventeenth, eighteenth, or nineteenth centuries should not carry all the blame. We may seek the roots of racial fears in an earlier period, but that history does not displace our responsibility to change and improve the era in which we live. After all, the complex police and racial problems that our country continues to experience in the present day are, in many cases, the results of failings and misunderstandings in our own time. To blame the 1991 beating of Rodney King by police in Los Angeles on slave patrollers dead nearly two hundred years is to miss the point. My purpose in writing this text is a historical one, an inquiry into the earliest period of both Southern law enforcement and Southern race-based violence. Although the conclusions below may provide insight into the historical reasons for the pattern of racially targeted law enforcement that persists to the current day, it remains for us to cope with our inheritance from this earlier world without overlooking our present-day obligation to create a less fearful future.

It may be worthwhile now to nail down exactly what it means to say modern policing has roots in slave patrols. First, when the patrols ended after the Confederate defeat, other policing entities took up or continued the work of white supremacist oppression. Alongside the Ku Klux Klan, law enforcement would conduct the terrors. As a writer for TIME put it, after the Civil War “many local sheriffs functioned in a way analogous to the earlier slave patrols, enforcing segregation and the disenfranchisement of freed slaves.” An article on the National Law Enforcement Officers Memorial Fund (!) website phrased it this way: “After the Civil War, Southern police departments often carried over aspects of the patrols. These included systematic surveillance, the enforcement of curfews…” Second, individuals involved in slave patrols were also involved in the other forms of policing: “In the South, the former slave patrols became the core of the new police departments.” Patrollers became policemen, as Hadden shows. Before this, there is no doubt there was crossover between slave patrol membership and the three other forms of policing in colonial America: sheriffs, constables, and watchmen. Third, patrols, as Reichel noted, had no non-policing duties, and differences like beats marked steps toward contemporary police departments (though they weren’t always bigger; patrols had three to six men, like Boston’s early night watch). Clearly, slave patrols had a huge influence on the modern city police forces of the South that formed in the 1850s, 1860s, and later. (Before this, even the term “police” appears to have been applied to all four types of law enforcement, including patrols, though not universally — in the words of a former slave, the police “were for white folks. Patteroles were for niggers.” But after the war, Hadden writes in the final paragraph of her book, many blacks saw little difference “between the brutality of slave patrols, white Southern policemen, or the Klan.”)

Notice that the above are largely framed as post-war developments. Before the war, patrols, sheriffs, constables, and watchmen worked together, with plenty of personnel crossover, to mercilessly crush slaves. But it was mostly after the war that the “modern” police departments appeared in the South, with patrols as foundations. Here comes a potential complication. The free North was the first to form modern departments, and did so before the war: “It was not until the 1830s that the idea of a centralized municipal police department first emerged in the United States. In 1838, the city of Boston established the first American police force, followed by New York City in 1845, Albany, NY and Chicago in 1851, New Orleans and Cincinnati in 1853, Philadelphia in 1855, and Newark, NJ and Baltimore in 1857” (New Orleans and Baltimore were in slave states, Newark in a semi-slave state). This development was due to growth (these were among the largest U.S. cities), disorder and riots, industrialization and business interests and labor conflict, and indeed “troublesome” immigrants and minorities, among other factors.

Conservatives raise that point to ask: if Northern cities first established the police departments we know today, how can one say slave patrols had an influence? A tempting counter might be: these states hadn’t been free for long. Slavery in New York didn’t end until 1827. While that is true, the North did not have patrols. “None of the sources I used indicated that Northern states used slave patrols,” Reichel told me in an email, after I searched in vain for evidence they did. Northern sheriffs, constables, and watchmen enforced the racial hierarchy, of course, but slave patrols were a Southern phenomenon. One can rightly argue that patrol practices in the South influenced police forces in the North, but that’s not quite the strong “root” we see when studying Southern developments.

This is why boldly emphasizing that modern departments in Southern states originated with patrols is somewhat tricky. It’s true enough. But who would doubt that Southern cities would have had police departments anyway? This goes back to where we began: policing is thousands of years old, and as cities grow and technology and societies change, more sophisticated policing systems arise. The North developed them here first, without slave patrols as foundations. Even if the slave South had never birthed patrols, its system of sheriffs, constables, and watchmen would surely not have lasted forever — eventually larger police forces would have appeared as they did in the North, as they did in Rome, as they did wherever communities exploded around the globe throughout human history. New Orleans went from 27,000 residents in 1820 to 116,000 in 1850! Then 216,000 by 1880. System changes were inevitable.

Consider that during the 18th and early 19th centuries, more focused, larger, tax-funded policing was developing outside the United States, in nations without slave patrols, nations both among and outside the Euro-American slave societies. In 1666, France began building the first modern Western police institution, with a Lieutenant General of Police paid from the treasury and overseeing 20 districts in Paris — by “1788 Paris had one police officer for every 193 inhabitants.” The French system inspired Prussia (Germany) and other governments. There was Australia (1790), Scotland (1800), Portuguese Brazil (1809), Ireland (1822), and especially England (1829), whose London Metropolitan Police Department was the major model for the United States (as well as Canada’s 1834 squad in Toronto). Outside the West, there were (and always had been, as we saw) evolving police forces: “By the eighteenth century both Imperial China and Mughal India, for example, had developed policing structures and systems that were in many ways similar to those in Europe,” before European armies smothered most of the globe. Seventeenth, eighteenth, and nineteenth century Japan, one of the few nations to stave off European imperialism and involuntary influence, was essentially a police state. A similar escapee was Korea, with its podocheong force beginning in the 15th century. As much as some fellow radicals would like the West to take full credit for the police, this ignores the historical contributions (or, if one despises that phrasing, developments) of Eastern civilizations and others elsewhere. Like the North, the South was bound to follow the rest of the world.

It also feels as though phrasing that credits patrols as the origin of Southern departments ignores the other three policing types that existed concurrently (and in the North were enough to form a foundation for the first modern institutions, later copied in the South). Sheriffs, constables, and watchmen were roots as well, even if one sees patrols as the dominant one. (Wondering whether patrols had replaced the other three, which would have strengthened the case for patrols as the singular foundation of Southern law enforcement, I asked Sally Hadden. She cautioned against any “sweeping statement.” She continued: “There were sheriffs, definitely, in every [Southern] county. In cities, there were sometimes constables and watchmen, but watchmen were usually replaced by patrols — but not always.”) Though all were instruments of white supremacy, they were not all the same, and only one is now in the headlines. In their existence and distinctiveness, they all must receive at least some credit as the roots of Southern institutions — as our historians know, most happenings have many causes, not one.

“Many modern Southern police departments largely have roots in slave patrols but would have arisen regardless” is probably the most accurate conclusion. Harder to fit in a headline or on a protest sign, but the nuanced truth often is.

For more from the author, subscribe and follow or read his books.

The Nativity Stories in Luke and Matthew Aren’t Contradictory — But the Differences Are Bizarre

In The Bible is Rife with Contradictions and Changes, we saw myriad examples of different biblical accounts of the same event that cannot all be true — they contradict each other. But we also saw how other discrepancies aren’t contradictions if you use your imagination. The following example was too long to examine in that already-massive writing, so we will do so now.

It’s interesting that while the authors of both Matthew and Luke have Jesus born in Bethlehem and his family then settling down in Nazareth, the two stories are dramatically different, in that neither mentions the major events of the other. For example, the gift-bearing Magi arrive, King Herod kills children, and Jesus’ family flees to Egypt in Matthew, but Luke doesn’t bother mentioning any of it. Luke has the ludicrous census (everyone in the Roman Empire returning to the city of their ancestors, creating mass chaos, when the point of a census is to see where people live currently), the full inn, the shepherds, and the manger, but Matthew doesn’t.

These stories can be successfully jammed together. But it takes work. In Matthew 2:8-15, Joseph, Mary, and Jesus are in Bethlehem but escape to Egypt to avoid Herod’s slaughter. Before fleeing, the family seems settled in the town: they are in a “house” (2:11) beneath the fabled star, and Herod “gave orders to kill all the boys in Bethlehem and its vicinity who were two years old and under, in accordance with the time he had learned from the Magi” visitors concerning when the star appeared (2:16, 2:7). This is a bit confusing, as all boys from newborns to nearly three years old is a big range for someone who knows an “exact time” (2:7). But it suggests that Jesus may have been born a year or two earlier, that the star had been over his home since his birth, and that the Magi had a long journey to find him. Many Christian sites will tell you Jesus was about two when the wise men arrived. In any event, when Herod gives this order, the family travels to Egypt and remains there until he dies; then they go to Nazareth (2:23).

In Luke 2:16-39, after Jesus is born in Bethlehem the family goes to Jerusalem “when the time came for the purification rites required by the Law of Moses” (2:22). This references the rites outlined in Leviticus 12 (before going to Jerusalem, Jesus is circumcised after eight days in Luke 2:21, in accordance with Leviticus 12:3). At the temple they sacrifice two birds (Luke 2:24), following Leviticus 12:1-8 — when a woman has a son she does this after thirty-three days to be made “clean.” Then, “When Joseph and Mary had done everything required by the Law of the Lord, they returned to Galilee to their own town of Nazareth” (Luke 2:39). Here they simply go to Nazareth when Jesus is about a month old. No mention of a flight to Egypt, no fear for their lives — everything seems rather normal. “When the time came for the purification rites” certainly suggests they did not somehow occur early or late.

So the mystery is: when did the family move to Nazareth?

Both stories get the family to the town, which they must do because while a prophecy said the messiah would be born in Bethlehem, Jesus was a Nazarene. But the paths there are unique, and you have to either build a mega-narrative to make it work — a larger story that is not in the bible, one you must invent to make divergent stories fit together — or reinterpret the bible in a way different from that of the aforementioned sites.

In this case, Option 1 is to say that when Luke 2:39 says they headed for Nazareth, this is where the entire story in Matthew is left out. They actually go back to Bethlehem, have the grand adventure to Egypt, and then go to Nazareth much later. This is a serious twist of the author’s writing; you have to declare the gospel doesn’t mean what it says, that narrative time words like “when” are meaningless (in the aforementioned article I wrote of us having to imagine “the bible breaks out of chronological patterns at our convenience”).

Option 2 is that they go to Nazareth after the rites as stated. Then at some point they go back to Bethlehem, have the Matthew adventure, and end up back in Nazareth. Maybe they were visiting relatives. Maybe they moved back to Bethlehem — after Herod dies it seems as if the family’s first thought is to go back there. Matthew 2:22-23: “But when [Joseph] heard that Archelaus was reigning in Judea in place of his father Herod, he was afraid to go there. Having been warned in a dream, he withdrew to the district of Galilee, and he went and lived in a town called Nazareth.” So perhaps it’s best to suppose they went to Nazareth after the temple, moved back to Bethlehem, hid in Egypt, and went again to Nazareth. Luke of course doesn’t mention any of this either; the family heads to Nazareth after the temple rites and the narrative jumps to when Jesus is twelve (2:39-42).

Option 3 is that Jesus’ birth, the Magi visit, Herod’s killing spree, the family’s flight, Herod’s death, and the family’s return all occur in the space of a month. This of course disregards and reinterprets any hints that Jesus was about two years old. But it allows the family to have Matthew’s adventure and make it back to Jerusalem for the scheduled rites (which Matthew doesn’t mention), then go to Nazareth. One must also conclude either that 1) the Magi didn’t have to travel very far, if the star appeared when Jesus was born, or that 2) the star appeared to guide them long before Jesus was born (interpret Matthew 2:1-2 how you will). It’s still odd that the only thing Luke records between birth and the temple is a circumcision, but Option 3, as rushed as it is, may be the best bet. That’s up to each reader to decide, for it’s all a matter of imagination.

Luke’s silence is worth pausing to consider. The Bible is Rife with Contradictions and Changes outlined the ramifications of one gospel not including a major event of another:

Believers typically insist that when a gospel doesn’t mention a miracle, speech, or story it’s because it’s covered in another. (When the gospels tell the same stories it’s “evidence” of validity, when they don’t it’s no big deal.) This line only works from the perspective of a later gospel: Luke was written after Matthew, so it’s fine if Luke doesn’t mention the flight to Egypt to save baby Jesus from Herod. Matthew already covered that. But from the viewpoint of an earlier text this begins to break down. It becomes: “No need to mention this miracle, someone else will do that eventually.” So whoever wrote Mark [the first gospel] ignored one of the biggest miracles in the life of Jesus, proof of his divine origins [the virgin birth story]? Or did the author, supposedly a disciple, not know about it? Or did gospel writers conspire and coordinate: “You cover this, I’ll cover that later.” Is it just one big miracle, with God ensuring that what was unknown or ignored (for whatever reason, maybe the questionable “writing to different audiences” theory) by one author would eventually make it into a gospel? That will satisfy most believers, but an enormous possibility hasn’t been mentioned. Perhaps the story of Jesus was simply being embellished — expanding over time, like so many other tales and legends (see Why God Almost Certainly Does Not Exist).

In truth, it is debatable whether Matthew came before Luke. Both were written around AD 80-90, and scholars disagree over which came first. If Matthew came first, Luke could perhaps be excused for leaving out the hunt for Jesus and journey to Egypt, as surprising as that might be. If Luke came first, it’s likely the author of Matthew concocted a new tale, making Jesus’ birth story far more dramatic and, happily, fulfilling a prophecy (Matthew 2:15: “And so was fulfilled what the Lord had said through the prophet: ‘Out of Egypt I called my son’”). If they were written about the same time and independently, with the creators not having read each other’s work, we are likewise left with two very different stories.

Regardless of order and why the versions are different, one must decide how to best make the two tales fit — writers not meaning what they write, the holy family moving back and forth a bunch, or Jesus not being two when the Magi arrived with gold, frankincense, and myrrh.

For more from the author, subscribe and follow or read his books.

Like a Square Circle, Is God-Given Inherent Value a Contradiction?

Can human beings have inherent value without the existence of God? The religious often say no. God, in creating you, gives you value. Without him, you have no intrinsic worth. (Despite some inevitable objectors, this writing will use “inherent” and “intrinsic” value interchangeably, as that is fairly common with this topic. Both suggest some kind of immutable importance of a thing “in its own right,” “for its own sake,” “in and of itself,” completely independent of a valuer.) Without a creator, all that’s left is you assigning worth to yourself or others doing so; these sentiments are conditional, revocable (you may commit suicide, seeing yourself as having no further worth, for example); they may be instrumental, there being some use for me in assigning you value, such as my own happiness; therefore, such value cannot be intrinsic — it is extrinsic. We only have inherent importance — unchangeable, for its own sake — if lovingly created by God in his own image.

The problem is perhaps already gnawing at your faculties. God giving a person inherent value appears contradictory. While one can argue that an imagined higher power has such divine love for an individual that his or her worth would never be revoked, and that God does not create us for any use for himself (somewhat debatable), the very idea that inherent value can be bestowed by another being doesn’t make sense. Inherent means it’s not bestowed. Worth caused by God is extrinsic by definition. God is a valuer, and intrinsic value must exist independently of valuers.

As a member of the Ethical Society of St. Louis put it:

+All human life has intrinsic value

-So we all [have] value even if God does not exist, right?

+No, God’s Love is what bestows value onto His creations. W/o God, everything is meaningless.

-So human life has *extrinsic* value then, right?

+No. All human life has intrinsic value.

That’s well phrased. If we think about what inherent value means (something worth something in and of itself), to have it humans would need to have it even if they were the only things to ever have existed.

If all this seems outrageous, it may be because God-given value is often thought of differently than self- or human-given value; it is seen as some magical force or aura or entity, the way believers view the soul or consciousness. It’s a feature of the body — if “removed [a person] would cease to be human life,” as a Christian blogger once wrote! When one considers one’s own value or that of a friend, family member, lover, home, money, or parrot, it’s typically not a fantastical property but rather a simple mark of importance, more in line with the actual definition of value. This human being has importance, she’s worth something. Yes, that’s the discussion on value: God giving you importance, others giving you importance, giving yourself importance. It’s not a physical or spiritual characteristic. A prerequisite to meaningful debate is agreeing on what you’re talking about, having some consistency and coherence. There’s no point in arguing “No person can have an inherent mystical trait without God!” That’s as obvious as it is circular, akin to saying you can’t have heaven without God. You’re not saying anything at all. If we instead use “importance,” there’s no circular reasoning and the meaning can simply be applied across the board. “No person can have inherent importance without God” is a statement that can be analyzed by all parties operating with the same language.

No discourse is possible without shared acceptance of meaning. One Christian writer showcased this, remarking:

Philosopher C. I. Lewis defines intrinsic value as “that which is good in itself or good for its own sake.” This category of value certainly elevates the worth of creation beyond its usefulness to humans, but it creates significant problems at the same time.

To have intrinsic value, an object would need to have value if nothing else existed. For example, if a tree has intrinsic value, then it would be valuable if it were floating in space before the creation of the world and—if this were possible—without the presence of God. Lewis, an atheist, argues that nothing has intrinsic value, because there must always be someone to ascribe value to an object. Christians, recognizing the eternal existence of the Triune God in perpetual communion[,] will recognize that God fills the category of intrinsic value quite well.

What happened here is baffling. The excerpt essentially ends with “And that ‘someone’ is God! God can ascribe us value! Intrinsic value does exist!” right after showing an understanding (at least, an understanding of the opposing argument) that for a tree or human being to possess inherent value it must do so if it were the only thing in existence, if neither God nor anything else existed! Intrinsic value, to be real, must exist even if God does not, the atheist posits, holding up a dictionary. “Intrinsic value exists because God does, he imbues it,” the believer says, either ignoring the meaning of intrinsic and the implied contradiction (as William Lane Craig once did), or not noticing or understanding them. Without reaching shared definitions, we just talk past each other.

In this case, it is hard to say whether the problem is lack of understanding or the construction of straw men. This is true on two levels. First, the quote doesn’t actually represent what Lewis wrote on in the 1940s. He in fact believed human experiences had intrinsic value and objects could have inherent value; he sought to differentiate and define these terms in unique ways and wasn’t making an argument about deities (see here and here if interested). However, in this quote Lewis is made to represent a typical atheist. What we’re seeing is how the believer sees an argument (not Lewis’) coming from the other side. This is helpful enough. Let’s therefore proceed as if the Lewis character (we’ll call him Louis to give more respect to the actual philosopher) is a typical atheist offering a typical atheist argument: nothing has intrinsic value. Now that we are pretending the Christian writer is addressing something someone (Louis) actually posited, probably something the writer has heard atheists say, let’s examine how the atheist position is misunderstood or twisted in the content itself.

The believer sees accurately, in Sentences 1/2, that the atheist thinks intrinsic value, to be true, must be true without the existence of a deity. So far so good. Then in Sentence 3 everything goes completely off the rails. Yes, Louis the Typical Atheist believes intrinsic value is impossible…because by definition it’s an importance that must exist independently of all valuers, including God. God’s exclusion was made clear in Sentences 1/2. It’s as if the Christian writer notices no connection between the ideas in Sentences 1/2 and Sentence 3. The first and second sentences are immediately forgotten, and therefore the atheist position is missed or misconstrued. It falsely becomes an argument that there simply isn’t “someone” around to “ascribe” intrinsic value! As if all Louis was saying was “God doesn’t exist, so there’s no one to ascribe inherent worth.” How easy to refute: all one has to say is “Actually, God does exist, so there is someone around!” (Sentence 4). That is not the atheist argument — it is that the phrase “intrinsic value” doesn’t make any coherent sense: it’s an importance that could only exist independently of all valuers, including God, and therefore cannot exist. Can a tree be important if it were the only thing that existed, with no one to consider it important? If your answer is no, you agree with skeptics that intrinsic value is impossible and a useless phrase. Let’s think more on this.

The reader is likely coming to see that importance vested by God is not inherent or intrinsic. Not unless one wants to throw out the meaning of words. A thing’s intrinsic value or importance cannot come from outside, by definition. It cannot be given or created or valued by another thing, otherwise it’s extrinsic. So what does this mean for the discussion? Well, as stated, it means we’re speaking nonsense. If God can’t by definition grant an individual intrinsic value, nor other outsiders like friends and family, nor even yourself (remember, you are a valuer, and your inherent value must exist independently of your judgement), then intrinsic value cannot exist. It’s like talking about a square circle. Inherent importance isn’t coherent in the same way inherent desirability isn’t coherent, as Matt Dillahunty once said. You need an agent to desire or value; these are not natural realities like color or gravity, they are mere concepts that cannot exist on their own.

To be fair, the religious are not alone in making this mistake. Not all atheists deny inherent value; they instead base it in human existence, uniqueness, rationality, etc. Most secular and religious belief systems base intrinsic value on something. Yet the point stands. Importance cannot be a natural characteristic, it must be connoted by an agent, a thinker. The two sides are on equal footing here. If the religious wish to continue to use — misuse — inherent value as something God imbues, then they should admit anyone can imbue inherent value. Anyone can decree a human being has natural, irrevocable importance in and of itself for whatever reason. But it would be less contradictory language, holding true to meaning, to say God assigns simple value, by creating and loving us, in the same way humans assign value, by creating and loving ourselves, because of our uniqueness, and so forth.

“But if there’s no inherent value then there’s no reason to be moral! We’ll all kill each other!” We need not waste much ink on this. If we don’t need imaginary objective moral standards to have rational, effective ethics, we certainly don’t need nonsensical inherent value. If gods aren’t necessary to explain the existence of morality; and if we’re bright enough to know we should believe something is true because there’s evidence for it, not because there would be bad consequences if we did not believe (the argumentum ad consequentiam fallacy); and if relativistic morality and objective morality in practice have shown themselves to be comparably awful and comparably good; then there is little reason to worry. Rational, functioning morality does not need “inherent” values created and imbued by supernatural beings. It just needs values, and humans can generate plenty of those on their own.

For more from the author, subscribe and follow or read his books.

Purpose, Intersectionality, and History

This paper posits that primary sources meant for public consumption best allow the historian to understand how intersections between race and gender were used, consciously or not, to advocate for social attitudes and public policy in the United States and the English colonies before it. This is not to say such use can never be gleaned from sources meant to remain largely unseen, nor that public ones will always prove helpful; the nature of sources simply creates a general rule. Public sources like narratives and films typically offer arguments.[1] Diaries and letters to friends tend to lack them. A public creation had a unique purpose and audience, unlikely to exist in the first place without an intention to persuade, and with that intention came more attention to intersectionality, whether in a positive (liberatory) or negative (oppressive) manner.

An intersection between race and gender traditionally refers to an overlap in challenges: a woman of color, for instance, will face oppressive norms targeting both women and people of color, whereas a white woman will only face one of these. Here the meaning will include this but is expanded slightly to reflect how the term has grown beyond academic circles. In cultural and justice movement parlance, it has become near-synonymous with solidarity, in recognition of overlapping oppressions (“True feminism is intersectional,” “If we fight sexism we must fight racism too, as these work together against women of color,” and so on). Therefore “intersectionality” has a negative and positive connotation: multiple identities plagued by multiple societal assaults, but also the coming together of those who wish to address this, who declare the struggle of others to be their own. We will therefore consider intersectionality as oppressive and liberatory developments, intimately intertwined, relating to women of color.

Salt of the Earth, the 1954 film in which the wives of striking Mexican American workers ensure a victory over a zinc mining company by taking over the picket line, is intersectional at its core.[2] Meant for a public audience, it uses overlapping categorical challenges to argue for gender and racial (as well as class) liberation. The film was created by blacklisted Hollywood professionals alongside the strikers and picketers on which the story is based (those of the 1950-1951 labor struggle at Empire Zinc in Hanover, New Mexico) to push back against American dogma of the era: normalized sexism, racism, exploitation of workers, and the equation of any efforts to address such problems with communism.[3] Many scenes highlight the brutality or absurdity of these injustices, with workers dying in unsafe conditions, police beating Ramon Quintero for talking back “to a white man,” and women being laughed at when they declare they will cover the picket line, only to amaze when they ferociously battle police.[4]

Intersectionality is sometimes shown, not told, with the protagonist Esperanza Quintero facing the full brunt of both womanhood and miserable class conditions in the company-owned town (exploitation of workers includes that of their families). She does not receive racist abuse herself, but she is a Mexican American woman whose husband does, and the implication is clear enough. She shares the burdens of racism with men, and those of exploitation — with women’s oppression a unique, additional yoke. In the most explicit expositional instance of intersectionality, Esperanza castigates Ramon for wanting to keep her in her place, arguing that is precisely like the “Anglos” wanting to put “dirty Mexicans” in theirs.[5] Sexism is as despicable as racism, the audience is told, and therefore if you fight the latter you must also fight the former. The creators of Salt of the Earth use intersectionality to argue for equality for women by strategically tapping into preexisting anti-racist sentiment: the men of the movie understand from the start that bigotry against Mexican Americans is wrong, and this understanding is gradually extended to women. The audience — Americans in general, unions, the labor movement — must do the same.

A similar public source to consider is Toni Morrison’s 1987 novel Beloved. Like Salt of the Earth, Beloved is historical fiction. Characters and events are invented, but it is based on a historical happening: in 1850s Ohio, a formerly enslaved woman named Margaret Garner killed one of her children and attempted to kill the rest to prevent their enslavement.[6] One could perhaps argue Salt of the Earth, though fiction, is a primary source for the 1950-1951 Hanover strike, given its Hanover co-creators; it is clearly a primary source for 1954 and its hegemonic American values and activist counterculture — historians can examine a source as an event and what the source says about an earlier event.[7] Beloved cannot be considered a primary source of the Garner case, being written about 130 years later, but is a primary source of the late 1980s. Therefore, any overall argument or comments on intersectionality reflect and reveal the thinking of Morrison’s time.

In her later foreword, Morrison writes of another inspiration for her novel, her feeling of intense freedom after leaving her job to pursue her writing passions.[8] She explains:

I think now it was the shock of liberation that drew my thoughts to what “free” could possibly mean to women. In the eighties, the debate was still roiling: equal pay, equal treatment, access to professions, schools…and choice without stigma. To marry or not. To have children or not. Inevitably these thoughts led me to the different history of black women in this country—a history in which marriage was discouraged, impossible, or illegal; in which birthing children was required, but “having” them, being responsible for them—being, in other words, their parent—was as out of the question as freedom.[9]

This illuminates both Morrison’s purpose and how intersectionality forms its foundation. “Free” meant something different to women in 1987, she suggests, than to men. Men may have understood women’s true freedom as equal rights and access, but did they understand it also to mean, as women did, freedom from judgment, freedom not only to make choices but to live by them without shame? Morrison then turns to intersectionality: black women were forced to live by a different, harsher set of rules. This was a comment on slavery, but it is implied on the same page that the multiple challenges of multiple identities marked the 1980s as well: a black woman’s story, Garner’s case, must “relate…to contemporary issues about freedom, responsibility, and women’s ‘place.’”[10] In Beloved, Sethe (representing Garner) consistently saw the world differently from her lover Paul D, from what was on her back to whether killing Beloved was justified, was love, was resistance.[11] To a formerly enslaved black woman and mother, the act set Beloved free; to a formerly enslaved man, it was a horrific crime.[12] Sethe saw choice as freedom, and if Paul D saw the act as a choice that could not be made, if he offered only stigma, then freedom could not exist either. Recognizing the unique challenges and perspectives of black women and mothers, Morrison urges readers of the 1980s to do the same, to graft a conception of true freedom onto personal attitudes and public policy.

Moving beyond historical fiction, let us examine a nonfiction text from the era of the Salem witch trials to observe how Native American women were even more vulnerable to accusation than white women. Whereas Beloved and Salt of the Earth make conscious moves against intersectional oppression, the following work, wittingly or not, solidified it. Boston clergyman Cotton Mather’s A Brand Pluck’d Out of the Burning (1693) begins by recounting how Mercy Short, an allegedly possessed servant girl, was once captured by “cruel and Bloody Indians.”[13] This seemingly out of place opening establishes a tacit connection between indigenous people and the witchcraft plaguing Salem. This link is made more explicit later in the work, when Mather writes that someone executed at Salem testified “Indian sagamores” had been present at witch meetings to organize “the methods of ruining New England,” and that Mercy Short, in a possessed state, revealed the same, adding Native Americans at such meetings held a book of “Idolatrous Devotions.”[14] Mather, and others, believed indigenous peoples were involved in the Devil’s work. Further, several other afflicted women and girls had survived Native American attacks, further connecting the terrors.[15]

This placed women like Tituba, a Native American slave, in peril. Women were the primary victims of the witch hunts.[16] Tituba’s race was an added vulnerability (as was, admittedly, a pre-hysteria association, deserved or not, of Tituba with magic).[17] She was accused and pressured into naming other women as witches, then imprisoned (she later recanted).[18] A Brand Pluck’d Out of the Burning was intended to describe Short’s tribulation, as well as offer some remedies,[19] but also to explain its cause. Native Americans, it told its Puritan readers, were heavily involved in the Devil’s work, likely helping create other cross-categorical consequences for native women who came after Tituba. The text both described and maintained a troubling intersection in the New England colonies.

A captivity narrative from the previous decade, Mary Rowlandson’s The Sovereignty and Goodness of God, likewise encouraged intersectional oppression. This source is a bit different than A Brand Pluck’d Out of the Burning because it is a first-hand account of one’s own experience; Mather’s work is largely a second-hand account of Short’s experience (compare “…shee still imagined herself in a desolate cellar” to the first-person language of Rowlandson[20]). Rowlandson was an Englishwoman from Massachusetts held captive for three months by the Narragansett, Nipmuc, and Wompanoag during King Philip’s War (1675-1676).[21] Her 1682 account of this event both characterized Native Americans as animals and carefully defined a woman’s proper place — encouraging racism against some, patriarchy against others, and the full weight of both for Native American women. To Rowlandson, native peoples were “dogs,” “beasts,” “merciless and cruel,” creatures of great “savageness and brutishness.”[22] They were “Heathens” of “foul looks,” whose land was unadulterated “wilderness.”[23] Native society was animalistic, a contrast to white Puritan civilization.[24]

Rowlandson reinforced ideas of true womanhood by downplaying the power of Weetamoo, the female Pocassett Wompanoag chief, whose community leadership, possession of vast land and servants, and engagement in diplomacy and war violated Rowlandson’s understanding of a woman’s proper role in society.[25] Weetamoo’s authority was well known to the English.[26] Yet Rowlandson put her in a box, suggesting her authority was an act, never acknowledging her as a chief (unlike Native American men), and emphasizing her daily tasks to implicitly question her status.[27] Rowlandson ignored the fact that Weetamoo’s “work” was a key part of tribal diplomacy, attempted to portray her own servitude as unto a male chief rather than Weetamoo (giving possessions first to him), and later labeled Weetamoo an arrogant, “proud gossip” — meaning, historian Lisa Brooks notes, “in English colonial idiom, a woman who does not adhere to her position as a wife.”[28] The signals to her English readers were clear: indigenous people were savages and a woman’s place was in the domestic, not the public, sphere. If Weetamoo’s power was common knowledge, the audience would be led to an inevitable conclusion: a Native American woman was inferior twofold, an animal divorced from true womanhood.

As we have seen, public documents make a case for or against norms of domination that impact women of color in unique, conjoining ways. But sources meant to remain private are often less useful for historians seeking to understand intersectionality — as mentioned in the introduction, with less intention to persuade come fewer and less bold pronouncements, whether oppressive or liberatory. Consider the diary of Martha Ballard, written 1785-1812. Ballard, a midwife who delivered over eight hundred infants in Hallowell, Maine, left a daily record of her work, home, and social life.[29] The diary does have some liberatory implications for women, subverting ideas of men being the exclusive important actors in the medical and economic spheres.[30] But its purpose was Ballard’s alone — keeping track of payments, weather patterns, and so on.[31] There was little need to comment on a woman’s place, and even less was said about race. Though there do exist some laments over the burdens of her work, mentions of delivering black babies, and notice of a black female doctor, intersectionality is beyond Ballard’s gaze, or at least beyond the purpose of her text.[32]

Similarly, private letters often lack argument. True, an audience of one is more likely to involve persuasion than an audience of none, but still less likely than a mass audience. And without much of an audience, ideas need not be fully fleshed out nor, at times, addressed at all. Intersectional knowledge can be assumed, ignored as inappropriate given the context, and so on. For instance, take a letter that abolitionist and women’s rights activist Sarah Grimké wrote to Sarah Douglass of the Philadelphia Female Anti-Slavery Society on February 22, 1837.[33] Grimké expressed sympathy for Douglass, a black activist, on account of race: “I feel deeply for thee in thy sufferings on account of the cruel and unchristian prejudice…”[34] But while patriarchal norms and restrictions lay near the surface, with Grimké describing the explicitly “female prayer meetings” and gatherings of “the ladies” where her early work was often contained, she made no comment on Douglass’ dual challenge of black womanhood.[35] The letter was a report of Grimké’s meetings, with no intention to persuade. Perhaps she felt it off-topic to broach womanhood and intersectionality. Perhaps she believed it too obvious to mention — or that it would undercut or distract from her extension of sympathy toward Douglass and the unique challenges of racism (“Yes, you alone face racial prejudice, but do we not both face gender oppression?”). On the one hand, the letter could seem surprising: how could Grimké, who, along with her sister Angelina, was pushing for both women’s equality and abolition for blacks at this time, not have discussed womanhood, race, and their interplays with a black female organizer like Douglass?[36] On the other, this is not surprising at all: this was a private letter with a limited purpose. It likely would have looked quite different had it been a public letter meant for a mass audience.

In sum, this paper offered a general view of how the historian can find and explore intersectionality, whether women of color facing overlapping challenges or the emancipatory mindsets and methods needed to address them. Purpose and audience categorized the most and least useful sources for such an endeavor. Public-intended sources like films, novels, secondary narratives, first-person narratives, and more (autobiographies, memoirs, public photographs and art, articles, public letters) show how intersectionality was utilized, advancing regressive or progressive attitudes and causes. Types of sources meant to remain private like diaries, personal letters, and so on (private photographs and art, some legal and government documents) often have no argument and are less helpful. From here, a future writing could explore the exceptions that of course exist. More ambitiously, another might attempt to examine the effectiveness of each type of source in producing oppressive or liberatory change: does the visual-auditory stimulation of film or the inner thoughts in memoirs evoke emotions and reactions that best facilitate attitudes and action? Is seeing the intimate perspectives of multiple characters in a novel of historical fiction most powerful, or that of one thinker in an autobiography, who was at least a real person? Or is a straightforward narrative, the writer detached, lurking in the background as far away as possible, just as effective as more personal sources in pushing readers to hold back or stand with women of color? The historian would require extensive knowledge of the historical reactions to the (many) sources considered (D.W. Griffith’s Birth of a Nation famously sparked riots — can such incidents be quantified? Was this more likely to occur due to films than photographs?) and perhaps a co-author from the field of psychology to test (admittedly present-day) human reactions to various types of sources scientifically to bolster the case.

For more from the author, subscribe and follow or read his books.


[1] Mary Lynn Rampolla, A Pocket Guide to Writing in History, 10th ed. (Boston: Bedford/St. Martin’s, 2020), 14.

[2] Salt of the Earth, directed by Herbert Biberman (1954; Independent Productions Corporation).

[3] Carl R. Weinberg, “‘Salt of the Earth’: Labor, Film, and the Cold War,” Organization of American Historians Magazine of History 24, no. 4 (October 2010): 41-45.

  Benjamin Balthaser, “Cold War Re-Visions: Representation and Resistance in the Unseen Salt of the Earth,” American Quarterly 60, no. 2 (June 2008): 347-371.

[4] Salt of the Earth, Biberman.

[5] Ibid.

[6] Toni Morrison, Beloved (New York: Vintage Books, 2004), xvii.

[7] Kathleen Kennedy (lecture, Missouri State University, April 26, 2022).

[8] Morrison, Beloved, xvi.

[9] Ibid., xvi-xvii.

[10] Ibid., xvii.

[11] Ibid., 20, 25; 181, 193-195. To Sethe, her back was adorned with “her chokecherry tree”; Paul D noted “a revolting clump of scars.” This should be interpreted as Sethe distancing herself from the trauma of the whip, reframing and disempowering horrific mutilation through positive language. Paul D simply saw the terrors of slavery engraved on the body. Here Morrison subtly considers a former slave’s psychological self-preservation. When Sethe admitted to killing Beloved, she was unapologetic to Paul D — “I stopped him [the slavemaster]… I took and put my babies where they’d be safe” — but he was horrified, first denying the truth, then feeling a “roaring” in his head, then telling Sethe she loved her children too much. Then, like her sons and the townspeople at large, Paul D rejected Sethe, leaving her.

[12] Ibid., 193-195.

[13] Cotton Mather, A Brand Pluck’d Out of the Burning, in George Lincoln Burr, Narratives of the New England Witch Trials (Mineola, New York: Dover Publications, 2012), 259.

[14] Ibid., 281-282.

[15] Richard Godbeer, The Salem Witch Hunt: A Brief History with Documents (New York: Bedford/St. Martin’s, 2018), 83.

[16] Michael J. Salevouris and Conal Furay, The Methods and Skills of History (Hoboken, NJ: Wiley-Blackwell, 2015), 211.

[17] Godbeer, Salem, 83.

[18] Ibid., 83-84.

[19] Burr, Narratives, 255-258.

[20] Ibid., 262.

[21] Mary Rowlandson, The Sovereignty and Goodness of God by Mary Rowlandson with Related Documents, ed. Neal Salisbury (Boston: Bedford Books, 2018).

[22] Ibid., 76-77, 113-114.

[23] Ibid., 100, 76.

[24] This was the typical imperialist view. See Kirsten Fischer, “The Imperial Gaze: Native American, African American, and Colonial Women in European Eyes,” in A Companion to American Women’s History, ed. Nancy A. Hewitt (Malden MA: Blackwell Publishing, 2002), 3-11.

[25] Lisa Brooks, Our Beloved Kin: A New History of King Philip’s War (New Haven: Yale University Press, 2018), chapter one.

[26] Ibid., 264.

[27] Ibid.

   Rowlandson, Sovereignty, 81, 103.

[28] Brooks, Our Beloved Kin, 264, 270.

[29] Laurel Thatcher Ulrich, A Midwife’s Tale: The Life of Martha Ballard, Based on Her Diary, 1785-1812 (New York: Vintage Books, 1999).

[30] Ibid., 28-30.

[31] Ibid., 168, 262-263.

[32] Ibid., 225-226, 97, 53.

[33] Sarah Grimké, “Letter to Sarah Douglass,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 94-95.

[34] Ibid., 95.

[35] Ibid., 94.

[36] Ibid., 84-148.

U.S. Segregation Could Have Lasted into the 1990s — South Africa’s Did

The 1960s were not that long ago. Many blacks who endured Jim Crow are still alive — as are many of the whites who kept blacks out of the swimming pool. When we think about history, we often see developments as natural — segregation was always going to fall in 1968, wasn’t it? Humanity was evolving, and had finally reached its stage of shedding legal racial separation and discrimination. That never could have continued into the 1970s, 80s, and 90s. We were, finally, too civilized for that.

South Africa provides some perspective. It was brutally ruled by a small minority of white colonizers for centuries, first the Dutch (1652-1815) and then the British (1815-1910). Slavery was not abolished until 1834. White rule continued from 1910 to 1992, after Britain made the nation a dominion (self-governing yet remaining part of the empire; full independence was voted for by whites in 1960). The era known as apartheid, when harsher discriminatory laws and strict “apartness” began, ran from 1948 to 1992, but it is important to know how bad things were before this:

Scores of laws and regulations separated the population into distinct groups, ensuring white South Africans access to education, higher-paying jobs, natural resources, and property while denying such things to the black South African population, Indians, and people of mixed race. Between union in 1910 and 1948, a variety of whites-only political parties governed South Africa… The agreement that created the Union denied black South Africans the right to vote… Regulations set aside an increasing amount of the most fertile land for white farmers and forced most of the black South African population to live in areas known as reserves. Occupying the least fertile and least desirable land and lacking industries or other developments, the reserves were difficult places to make a living. The bad conditions on the reserves and policies such as a requirement that taxes be paid in cash drove many black South Africans—particularly men—to farms and cities in search of employment opportunities.

With blacks pushing into cities and for their civil rights, the government began “implementing the apartheid system to segregate the country’s races and guarantee the dominance of the white minority.” Apartheid was the solidification of segregation into law. Legislation segregated public facilities like buses, stores, restaurants, hospitals, parks, and beaches. Further, one of the

…most significant acts in terms of forming the basis of the apartheid system was the Group Areas Act of 1950. It established residential and business sections in urban areas for each race, and members of other races were barred from living, operating businesses, or owning land in them—which led to thousands of Coloureds, Blacks, and Indians being removed from areas classified for white occupation… [The government] set aside more than 80 percent of South Africa’s land for the white minority. To help enforce the segregation of the races and prevent Blacks from encroaching on white areas, the government strengthened the existing “pass” laws, which required nonwhites to carry documents authorizing their presence in restricted areas…

Separate educational standards were established for nonwhites. The Bantu Education Act (1953) provided for the creation of state-run schools, which Black children were required to attend, with the goal of training the children for the manual labour and menial jobs that the government deemed suitable for those of their race. The Extension of University Education Act (1959) largely prohibited established universities from accepting nonwhite students…

[In addition,] the Prohibition of Mixed Marriages Act (1949) and the Immorality Amendment Act (1950) prohibited interracial marriage or sex…

The created conditions were predictable: “While whites generally lived well, Indians, Coloureds, and especially Blacks suffered from widespread poverty, malnutrition, and disease.”

Then, in 1970, blacks lost their citizenship entirely.

Apartheid ended only in the early 1990s due to decades of organizing, protest, civil disobedience, riots, and violence. Lives were lost and laws were changed — through struggle and strife, most explosively in the 1970s and 80s, a better world was built. The same happened in the U.S. in the 1950s and 60s. But our civil rights struggle and final victory could easily have occurred later as well. The whites of South Africa fighting to maintain apartheid all the way until the 1990s were not fundamentally different human beings than American whites of the same era. They may have held more despicable views on average, been more stuck in the segregationist mindset, but they were not different creatures. Varying views come from unique national histories, different societal developments — different circumstances. Had the American civil rights battle unfolded differently, we could have seen Jim Crow persist past the fall of the Berlin Wall. Such a statement feels like an attack on sanity because history feels natural — surely it was impossible for events to unfold in other ways — and due to nationalism, Americans thinking themselves better, more fundamentally good and civilized, than people of other nations. Don’t tell them that other countries ended slavery, gave women the right to vote, and so on before the United States (and most, while rife with racism and exclusion, did not codify segregation into law as America did; black Americans migrated to France in the 19th and 20th centuries for refuge, with Richard Wright declaring there to be “more freedom in one square block of Paris than in the entire United States”). If one puts aside the glorification of country and myths of human difference and acknowledges that American history and circumstances could have gone differently, the disturbing images begin to appear: discos keeping out people of color, invading Vietnam with a segregated army, Blockbusters with “Whites Only” signs.

For more from the author, subscribe and follow or read his books.

‘Beloved’ as History

In one sense, fiction can present (or represent, a better term) history as an autobiography might, exploring the inner thoughts and emotions of a survivor or witness. In another, fiction is more like a standard nonfiction work, its omniscient gaze shifting from person to person, revealing that which a single individual cannot know and experience, but not looking within, at the personal. Toni Morrison’s 1987 Beloved exemplifies the synthesis of these two commonalities: the true, unique power of fiction is the ability to explore the inner experiences of multiple persons. While only “historically true in essence,” as Morrison put it, the novel offers a history of slavery and its persistent trauma for the characters Sethe, Paul D, Denver, Beloved, and more.[1] It is posited here that Morrison believed the history of enslavement could be more fully understood through representations of the personal experiences of diverse impacted persons. This is the source of Beloved’s power.

One way to approach this is to consider different perspectives on the same or similar events. To Sethe, her back was adorned with “her chokecherry tree”; Paul D noted “a revolting clump of scars.”[2] This should be interpreted as Sethe distancing herself from the trauma of the whip, reframing and disempowering horrific mutilation through positive language. Paul D simply saw the terrors of slavery engraved on the body. Here Morrison subtly considers a former slave’s psychological self-preservation. As another example, both Sethe and Paul D experienced sexual assault. Slaveowners and guards, respectively, forced milk from Sethe’s breasts and forced Paul D to perform oral sex.[3] Out of fear, “Paul D retched — vomiting up nothing at all. An observing guard smashed his shoulder with the rifle…”[4] “They held me down and took it,” Sethe thought mournfully, “Milk that belonged to my baby.”[5] Slavery was a violation of personhood, an attack on motherhood and manhood alike. Morrison’s characters experienced intense pain and shame over these things; here the author draws attention to not only the pervasive sexual abuse inherent to American slavery but also how it could take different forms, with different meanings, for women and men. Finally, consider how Sethe killed her infant to save the child from slavery.[6] Years later, Sethe was unapologetic to Paul D — “I stopped him [the slavemaster]… I took and put my babies where they’d be safe” — but he was horrified, first denying the truth, then feeling a “roaring” in his head, then telling Sethe she loved her children too much.[7] Then, like her sons and the townspeople at large, Paul D rejected Sethe, leaving her.[8] This suggests varying views on the meaning of freedom — death can be true freedom or the absence of it, or perhaps true freedom lies in determining one’s own fate — as well as on ethics, resistance, and love; a formerly enslaved woman and mother may judge differently from a formerly enslaved man, among others.[9]

Through the use of fiction, Morrison can offer diverse intimate perspectives, emotions, and experiences of former slaves, allowing for a more holistic understanding of the history of enslavement. This is accomplished through both a standard literary narrative and, in several later chapters, streams of consciousness from Sethe, Denver, Beloved, and an amalgamation of the three.[10] Indeed, Sethe and Paul D’s varying meanings and observations here are a small selection from an intensely complex work with several other prominent characters. There is much more to explore. It is also the case that in reimagining and representing experiences, Morrison attempts to make history personal and comprehensible for the reader, to transmit the emotions of slavery from page to body.[11] Can history be understood, she asks, if we do not experience it ourselves, in at least a sense? In other words, Beloved is history as “personal experience” — former slaves’ and the reader’s.[12]

For more from the author, subscribe and follow or read his books.


[1] Toni Morrison, Beloved (New York: Vintage Books, 2004), xvii.

[2] Ibid., 20, 25.

[3] Ibid., 19-20, 127.

[4] Ibid., 127.

[5] Ibid., 236.

[6] Ibid., 174-177.

[7] Ibid., 181, 193-194.

[8] Ibid., 194-195.

[9] Morrison alludes, in her foreword, to wanting to explore what freedom meant to women: ibid., xvi-xvii.

[10] Ibid., 236-256.

[11] Morrison writes that to begin the book she wanted the reader to feel kidnapped, as Africans or sold/caught slaves experienced: ibid., xviii-xix. 

[12] Ibid., xix.

The MAIN Reasons to Abolish Student Debt

Do you favor acronyms as much as you do a more decent society? Then here are the MAIN reasons to abolish student debt:

M – Most other wealthy democracies offer free (tax-funded) college, just like public schools; the U.S. should have done the same decades ago.

A – All positive social change and new government programs are “unfair” to those who came before and couldn’t enjoy them; that’s how time works.

I – Immense economic stimulus: money spent on debt repayment is money unspent in the market, so end the waste and boost the economy by trillions.

N – Neighbors are hurting from inflation, with skyrocketing costs of houses, rent, food, gas, and more, with no corresponding explosion of wages; what does Lincoln’s “government for the people” mean if not one that makes lives a little better?

For more from the author, subscribe and follow or read his books.

‘Salt of the Earth’: Liberal or Leftist?

Labor historian Carl R. Weinberg argues that the Cold War was fought at a cultural level, films being one weapon to influence American perspectives on matters of class and labor, gender, and race.[1] He considers scenes from Salt of the Earth, the 1954 picture in which the wives of striking Mexican American workers ensure a victory over a zinc mining company by taking over the picket line, scenes that evidence a push against hierarchical gender relations, racial prejudice, and corporate-state power over unions and workers.[2] Cultural and literary scholar Benjamin Balthaser takes the same film and explores the scenes left on the cutting room floor, positing that the filmmakers desired a stronger assault against U.S. imperialism, anti-communism at home and abroad (such as McCarthyism and the Korean War), and white/gender supremacy, while the strikers on whom the film was based, despite their sympathetic views and militancy, felt such commentary would hurt their labor and civil rights organizing — or even bring retribution.[3] Balthaser sees a restrained version born of competing interests, and Weinberg, without exploring the causes, notices the same effect: there is nearly no “mention of the broader political context,” little commentary on communism or America’s anti-communist policies.[4] It is a bit odd to argue Salt of the Earth was a cultural battleground of the Cold War that had little to say of communism, but Weinberg lands roughly on the same page as Balthaser: the film boldly takes a stand for racial and gender equality, and of course union and workers’ rights, but avoids the larger ideological battle, capitalism versus communism. They are correct: this is largely a liberal, not a leftist, film.

This does not mean communist sympathies made no appearance, of course: surviving the editing bay was a scene that introduced the character of Frank Barnes of “the International” (the Communist International), who strongly supported the strike and expressed a willingness to learn more of Mexican and Mexican American culture.[5] Later, “Reds” are blamed for causing the strike.[6] And as Weinberg notes, the Taft-Hartley Act, legislation laced with anti-communist clauses, is what forces the men to stop picketing.[7] Yet all this is as close as Salt comes to connecting labor, racial, and women’s struggles with a better world, how greater rights and freedom could create communism or vice versa. As Balthaser argues, the original script attempted to draw a stronger connection between this local event and actual/potential political-economic systems.[8] The final film positions communists as supporters of positive social changes for women, workers, and people of color, but at best only implies that patriarchy, workplace misery or class exploitation, and racism were toxins inherent to the capitalist system of which the United States was a part, toxins only communism could address. And, it might be noted, the case for such an implication is slightly weaker for patriarchy and racism, as the aforementioned terms such as “Reds” only arise in conversations centered on the strike and the men’s relationships to it.

True, Salt of the Earth is a direct attack on power structures. Women, living in a company town with poor conditions like a lack of hot water, want to picket even before the men decide to strike; they break an “unwritten rule” by joining the men’s picket line; they demand “equality”; they mock men; they demand to take over the picket line when the men are forced out, battling police and spending time in jail.[9] Esperanza Quintero, the film’s protagonist and narrator, at first more dour, sparkles to life the more she ignores her husband Ramon’s demands and involves herself in the huelga.[10] By the end women’s power at the picket line has transferred to the home: the “old way” is gone, Esperanza tells Ramon when he raises a hand to strike her.[11] “Have you learned nothing from the strike?” she asks. Likewise, racist company men (“They’re like children”) and police (“That’s no way to talk to a white man”) are the villains, as is the mining company that forces workers to labor alone, resulting in their deaths, and offers miserable, discriminatory pay.[12] These struggles are often connected (intersectionality): when Esperanza denounces the “old way,” she compares being put in her place to the “Anglos” putting “dirty Mexicans” in theirs.[13] However, it could be that better working conditions, women’s rights, and racial justice can, as the happy ending suggests, be accomplished without communism. Without directly linking progress to the dismantling of capitalism, the film isolates itself from the wider Cold War debate.

For more from the author, subscribe and follow or read his books.


[1] Carl R. Weinberg, “‘Salt of the Earth’: Labor, Film, and the Cold War,” Organization of American Historians Magazine of History 24, no. 4 (October 2010): 42.

[2] Ibid., 42-44.

[3] Benjamin Balthaser, “Cold War Re-Visions: Representation and Resistance in the Unseen Salt of the Earth,” American Quarterly 60, no. 2 (June 2008): 349.

[4] Weinberg, “Salt,” 43.

[5] Salt of the Earth, directed by Herbert Biberman (1954; Independent Productions Corporation).

[6] Ibid.

[7] Ibid.

[8] Balthaser, “Cold War,” 350-351. “[The cut scenes] connect the particular and local struggle of the Mexican American mine workers of Local 890 to the larger state, civic, and corporate apparatus of the international cold war; and they link the cold war to a longer U.S. history of imperial conquest, racism, and industrial violence. Together these omissions construct a map of cold war social relations…”

[9] Salt of the Earth, Biberman.

[10] Ibid.

[11] Ibid.

[12] Ibid.

[13] Ibid.

Work, Activism, and Morality: Women in Nineteenth-Century America

This paper argues that nineteenth-century American women viewed work as having a moral nature, and believed this idea extended to public advocacy. The latter is true in two senses: 1) that public advocacy also had a moral nature, and 2) that at times a relationship existed between the moral nature of their work and that of their activism. Private work could be seen as a moral duty or an evil nightmare, depending upon the context, and different women likewise saw activism as either right and proper or unethical and improper. More conservative women, for instance, did not support the shattering of traditional gender roles in the public sphere; they saw the efforts of other women to push for political and social change as troubling, no matter the justification. Abolition, women’s rights, and Native American rights, if worth pursuing at all, were the purview of men. Reformist women, on the other hand, saw their public roles as moral responsibilities that echoed those of domestic life or addressed its iniquities. While the moral connection between the two spheres is at times frustratingly tenuous and indirect, let us explore women’s divergent views on the rightness or wrongness of their domestic work and political activity, while considering why some women saw relationships between them. In this context, “work” and its synonyms can be defined as a nineteenth-century woman’s everyday tasks and demeanor — not only what she does but how she behaves in the home as well (as we will see, setting a behavioral example could be regarded as a role as crucial to domestic life as household tasks).

In the 1883 memoir Life Among the Piutes, Sarah Winnemucca Hopkins (born Thocmentony) expressed a conviction that the household duties of Piute women and men carried moral weight.[1] She entitled her second chapter “Domestic and Social Moralities,” domestic moralities being proper conduct regarding the home and family.[2] “Our children are very carefully taught to be good,” the chapter begins — and upon reaching the age of marriage, interested couples are warned of the seriousness of domestic responsibilities.[3] “The young man is summoned by the father of the girl, who asks him in her presence, if he really loves his daughter, and reminds him, if he says he does, of all the duties of a husband.”[4] The concepts of love, marriage, and becoming a family were inseparable from everyday work. The father would then ask his daughter the same question. “These duties are not slight,” Winnemucca Hopkins writes. The woman is “to dress the game, prepare the food, clean the buckskins, make his moccasins, dress his hair, bring all the wood, — in short, do all the household work. She promises to ‘be himself,’ and she fulfils her promise.”[5] “Be himself” may be indicative of becoming one with her husband, or even submitting to his leadership, but regardless of interpretation it is clear, with the interesting use of present tense (“fulfils”) and lack of qualifiers, that there is no question the woman will perform her proper role and duties. There is such a question for the husband, however: “if he does not do his part” when childrearing he “is considered an outcast.”[6] Mothers in fact openly discussed whether a man was doing his duty.[7] For Winnemucca Hopkins and other Piutes, failing to carry out one’s domestic labor was a shameful wrong. This chapter, and the book in general, attempts to demonstrate to a white American audience “how good the Indians were” — not lazy, not seeking war, and so on — and work is positioned as an activity that makes them ethical beings.[8] And ethical beings, it implies, do not deserve subjugation and brutality. True, Winnemucca Hopkins may have emphasized domestic moralities to garner favor from whites with certain expectations of duty — but that does not mean these moralities were not in fact roots of Piute culture; more favor could have been curried by de-emphasizing aspects whites may have felt violated the social norms of work, such as men taking over household tasks, chiefs laboring while remaining poor, and so on, but the author resists, which could suggest reliability.[9]

Like tending faithfully to private duties, for Winnemucca Hopkins advocacy for native rights was the right thing to do. A moral impetus undergirded both private and public acts. White settlers and the United States government subjected the Piutes, of modern-day Nevada, to violence, exploitation, internment, and removal; Winnemucca Hopkins took her skills as an interpreter and status as chief’s daughter to travel, write, petition, and lecture, urging the American people and state to end the suffering.[10] She “promised my people that I would work for them while there was life in my body.”[11] There was no ambiguity concerning the moral urgency of her public work: “For shame!” she wrote to white America, “You dare to cry out Liberty, when you hold us in places against our will, driving us from place to place as if we were beasts… Oh, my dear readers, talk for us, and if the white people will treat us like human beings, we will behave like a people; but if we are treated by white savages as if we are savages, we are relentless and desperate; yet no more so than any other badly treated people. Oh, dear friends, I am pleading for God and for humanity.”[12] The crimes against the Piutes not only justified Winnemucca Hopkins raising her voice — they should spur white Americans to do the same, to uphold their own values such as faith, belief in liberty, etc. For this Piute leader, just as there existed a moral duty to never shirk domestic responsibilities, there existed a moral duty to not turn a blind eye to oppression.

Enslaved women like Harriet Jacobs understood work in a different way. The nature of Jacobs’ domestic labor was decidedly immoral.[13] In Incidents in the Life of a Slave Girl (1861), she wrote “of the half-starved wretches toiling from dawn till dark on the plantations… of mothers shrieking for their children, torn from their arms by slave traders… of young girls dragged down into moral filth… of pools of blood around the whipping post… of hounds trained to tear human flesh… of men screwed into cotton gins to die…”[14] Jacobs, a slave in North Carolina, experienced the horrors of being sexual property, forced household work, and the spiteful sale of her children.[15] Whereas Winnemucca Hopkins believed in the rightness of her private work and public advocacy, related moral duties to the home and to her people, Jacobs had an even more direct connection between these spheres: the immorality of her private work led straight to, and justified, her righteous battle for abolition. Even before this, she resisted the evil of her work, most powerfully by running away, but also by turning away from a slaveowner’s sexual advances, among other acts.[16]

After her escape from bondage, Jacobs became involved in abolitionist work in New York and wrote Incidents to highlight the true terrors of slavery and push white women in the North toward the cause.[17] Much of her story has been verified by (and we know enough of slavery from) other sources; she is not merely playing to her audience and its moral sensitivities either.[18] One should note the significance of women of color writing books of this kind. Like Winnemucca Hopkins’ text, Jacobs’ contained assurances from white associates and editors that the story was true.[19] Speaking out to change hearts was no easy task — prejudiced skepticism abounded. Jacobs (and her editor, Lydia Maria Child) stressed the narrative was “no fiction” and expected accusations of “indecorum” over the sexual content, anticipating criticisms that could hamper the text’s purpose.[20] Writing could be dangerous and trying. Jacobs felt compelled to use pseudonyms to protect loved ones.[21] She ended the work by writing it was “painful to me, in many ways, to recall the dreary years I passed in bondage.”[22] Winnemucca Hopkins may have felt similarly. In a world of racism, doubt, reprisals, and trauma, producing a memoir was a brave, powerful act of advocacy.

Despite the pain (and concern her literary skills were inadequate[23]), Jacobs saw writing Incidents as the ethical path. “It would have been more pleasant for me to have been silent about my own history,” she confesses at the start, a perhaps inadvertent reminder that what is right is not always what is easy. She then presents her “motives,” her “effort in behalf of my persecuted people.”[24] It was right to reveal the “condition of two millions of women at the South, still in bondage, suffering what I suffered, and most of them far worse,” to show “Free States what Slavery really is,” all its “dark…abominations.”[25] Overall, the text is self-justifying. The evils of slavery warrant the exposé (Life Among the Piutes is similar). Jacobs’ public advocacy grew from and was justified by her experience with domestic labor and her moral values.

These things, for more conservative women, precluded public work. During the abolition and women’s rights movements of the nineteenth century, less radical women saw the public roles of their sisters as violating the natural order and setting men and women against each other.[26] Catherine Beecher, New York educator and writer, expressed dismay over women circulating (abolitionist) petitions in her 1837 “Essay on Slavery and Abolitionism, with Reference to the Duty of American Females.”[27] It was against a woman’s moral duty to petition male legislators to act: “…in this country, petitions to congress, in reference to the official duties of legislators, seem, IN ALL CASES, to fall entirely without [outside] the sphere of female duty. Men are the proper persons to make appeals to rulers whom they appoint…”[28] (This is an interesting use of one civil inequity to justify another: only men can vote, therefore only men should petition.) After all, “Heaven has appointed to one sex the superior, and to the other the subordinate station…”[29] Christianity was the foundation of the gender hierarchy, which meant, for Beecher, that women entering the political sphere violated women’s divinely-decreed space and responsibilities. Women’s “influence” and “power” were to be exerted through the encouragement of love, peace, and moral rightness, as well as by professional teaching, in the “domestic and social circle.”[30] In other words, women were to hint to men and boys the proper way to act in politics only while at home, school, and so forth.[31] This highlights why domestic “work” must reach definitionally beyond household tasks: just as Winnemucca Hopkins and Jacobs were expected to maintain certain demeanors in addition to completing their physical labors, here women must be shining examples, moral compasses, with bearings above reproach.

Clearly, direct calls and organizing for political and social change were wrong; they threatened “the sacred protection of religion” and turned woman into a “combatant” and “partisan.”[32] They set women against God and men. Elsewhere, reformist women were also condemned for speaking to mixed-sex audiences, attacking men instead of supporting them, and more.[33] Beecher and other women held values that restricted women to domestic roles, to “power” no more intrusive to the gender order than housework — to adopt these roles was moral, to push beyond them immoral. The connection between the ideological spheres: one was an anchor on the other. (Limited advocacy to keep women in domestic roles, however, seemed acceptable: Beecher’s essay was public, reinforcing the expectations and sensibilities of many readers, and she was an activist for women in education, a new role yet one safely distant from politics.[34]) Reformist women, of course, such as abolitionist Angelina Grimké, held views a bit closer to those of Jacobs and Winnemucca Hopkins: women were moral beings, and therefore had the ethical responsibility to confront wrongs just as men did, and from that responsibility came the inherent social or political rights needed for the task.[35]

The diversity of women’s beliefs was the product of their diverse upbringings, environments, and experiences. Whether domestic labor was viewed as moral depended on its nature, its context, its participants; whether engagement in the public sphere was seen as the same varied according to how urgent, horrific, and personal social and political issues were regarded. Clearly, race impacted how women saw work. The black slave could have a rather different perspective on moral-domestic duty than a white woman (of any class). One historian posited that Jacobs saw the evils of forced labor as having a corrosive effect on her own morals, that freedom was a prerequisite to a moral life.[36] A unique perspective born of unique experiences. Race impacted perspectives on activism, too, with voices of color facing more extreme, violent motivators like slavery and military campaigns against native nations. Factors such as religion, political ideology, lack of personal impact, race, class, and so on could build a wall of separation between the private and public spheres in the individual mind, between where women should and should not act, but they could also have a deconstructive effect, freeing other nineteenth-century American women to push the boundaries of acceptable behavior. That domestic work and public advocacy had moral natures, aligning here, diverging there, at times connecting, has rich support in the extant documents.

For more from the author, subscribe and follow or read his books.


[1] Sarah Winnemucca Hopkins, Life Among the Piutes (Mount Pleasant, SC: Arcadia Press, 2017), 25-27.

[2] Ibid., 25.

[3] Ibid., 25-26.

[4] Ibid., 26.

[5] Ibid.

[6] Ibid., 27.

[7] Ibid.

[8] Ibid.

[9] Ibid., 27-28.

[10] Ibid., 105-108 offers examples of Winnemucca Hopkins’ advocacy such as petitioning and letter writing. Her final sentence (page 107) references her lectures on the East Coast.

[11] Ibid., 105.

[12] Ibid., 106.

[13] “Slavery is wrong,” she writes flatly. Harriet Jacobs, Incidents in the Life of a Slave Girl, ed. Jennifer Fleischner (New York: Bedford/St. Martin’s, 2020), 95.

[14] Ibid., 96.

[15] Ibid., chapters 5, 16, 19.

[16] Ibid., 51 and chapter 27.

[17] Ibid., 7-18, 26.

[18] Ibid., 7-9.

[19] Ibid., 26-27, 207-209.

   Winnemucca Hopkins, Piutes, 109-119.

[20] Jacobs, Incidents, 25-27.

[21] Ibid., 25.

[22] Ibid., 207.

[23] Ibid., 25-26.

[24] Ibid., 26.

[25] Ibid.

[26] Catherine Beecher, “Essay on Slavery and Abolitionism, with Reference to the Duty of American Females,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 109-110.

[27] Ibid.

[28] Ibid., 110.

[29] Ibid., 109.

[30] Ibid.

[31] Ibid., 110.

[32] Ibid., 109-110.

[33] “Report of the Woman’s Rights Convention Held at Seneca Falls, N.Y.,” in ibid., 163.

“Pastoral Letter: The General Association of Massachusetts Churches Under Their Care,” in ibid., 120.

[34] Beecher, “Duty,” 109.

[35] Angelina Grimké, “An Appeal to the Women of the Nominally Free States,” in ibid., 103. See also Angelina Grimké, “Letter to Theodore Dwight Weld and John Greenleaf Whittier” in ibid., 132.

[36] Kathleen Kennedy (lecture, Missouri State University, April 12, 2022).

How the Women’s Rights Movement Grew Out of the Abolitionist Struggle

The women’s rights movement of mid-nineteenth century America grew out of the preceding and concurrent abolitionist movement because anti-slavery women recognized greater political power could help end the nation’s “peculiar institution.” The emancipation of women, in other words, could lead to the emancipation of black slaves. This is seen, for example, in the writings of abolitionist activist Angelina Grimké. “Slavery is a political subject,” she wrote in a letter to a friend on February 4, 1837, summarizing the words of her conservative critics, “therefore women should not intermeddle. I admitted it was, but endeavored to show that women were citizens & had duties to perform to their country as well as men.”[1] If women possessed full citizen rights, Grimké implied, they could fully engage in political issues like slavery and influence outcomes as men did. The political project of abolishing slavery necessitated political rights for the women involved in and leading it.

Other documents of the era suggest this prerequisite for abolition in similar ways. Borrowing the ideas of the Enlightenment and the national founding, abolitionists positioned the end of slavery as the acknowledgement of the inalienable rights of enslaved persons — to achieve this end, women’s rights would need to be recognized as well. In 1834, the American Anti-Slavery Society created a petition for women to sign that urged the District of Columbia to abolish slavery, calling for “the restoration of rights unjustly wrested from the innocent and defenseless.”[2] The document offered justification for an act as bold and startling as women petitioning government (“suffer us,” “bear with us,” the authors urge), for instance the fact that the suffering of slaves meant the suffering of fellow women.[3] Indeed, many Americans believed as teacher and writer Catherine Beecher did, that “in this country, petitions to congress, in reference to the official duties of legislators, seem, IN ALL CASES, to fall entirely without [outside] the sphere of female duty. Men are the proper persons to make appeals to rulers whom they appoint…”[4] It would not do for women to petition male legislators to act. In drafting, circulating, and signing this petition, women asserted a political right (an inalienable right of the First Amendment) for themselves, a deed viewed as necessary in the great struggle to free millions of blacks. (Many other bold deeds were witnessed in this struggle, such as women speaking before audiences.[5])

Beecher’s writings reveal that opponents of women’s political activism understood abolitionists’ sentiments that moves toward gender equality were preconditions for slavery’s eradication. She condemned the “thirst for power” of abolitionists; women’s influence was to be exerted through the encouragement of love, peace, and moral rightness, as well as by professional teaching, in the “domestic and social circle.”[6] The male sex, being “superior,” was the one to go about “exercising power” of a political nature.[7] Here gender roles were clearly defined, to be adhered to despite noble aims. The pursuit of rights like petitioning was, to Beecher, the wrong way to end the “sin of slavery.”[8] Yet this castigation of the pursuit of public power to free the enslaved supports the claim that such a pursuit, with such a purpose, indeed took place.

Overall, reformist women saw all public policy, all immoral laws, within their grasp if political rights were won (a troubling thought to Beecher[9]). In September 1848, one Mrs. Sanford, a women’s rights speaker at a Cleveland gathering of the National Negro Convention Movement, summarized the goals of her fellow female activists: they wanted “to co-operate in making the laws we obey.”[10] The same was expressed a month before at the historic Seneca Falls convention.[11] This paralleled the words of Grimké above, as well as her 1837 demand that women have the “right to be consulted in all the laws and regulations by which she is to be governed…”[12] Women saw themselves as under the heel of immoral laws. But as moral beings, Grimké frequently stressed, they had the responsibility to confront wrongs just as men did, and from that responsibility came the inherent political rights needed for such confrontations.[13] If a law such as the right to own human beings was unjust, women would need power over lawmaking, from petitioning to the vote, to correct it.

For more from the author, subscribe and follow or read his books.


[1] Angelina Grimké, “Letter to Jane Smith,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 93.

[2] The American Anti-Slavery Society, “Petition Form for Women,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 85.

[3] Ibid.

[4] Catherine Beecher, “Essay on Slavery and Abolitionism, with Reference to the Duty of American Females,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 110.

[5] “Report of the Woman’s Rights Convention Held at Seneca Falls, N.Y.,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 163.

[6] Beecher, “Duty,” 109.

[7] Ibid.

[8] Ibid., 111.

[9] Ibid., 110.

[10] “Proceedings of the Colored Convention,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 168.

[11] “Seneca Falls,” 165.

[12] Angelina Grimké, “Human Rights Not Founded on Sex,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 135.

[13] Angelina Grimké, “An Appeal to the Women of the Nominally Free States,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 103. See also Angelina Grimké, “Letter to Theodore Dwight Weld and John Greenleaf Whittier” in ibid., 132.

The Gender Order in Colonial America

Early New England history cannot be properly understood without thorough examination of the ways in which women, or the representations of women, threatened or maintained the gender hierarchy of English society. This is a complex task. Documents written by women and men alike could weaken or strengthen the ideology and practice of male dominance, just as the acts of women, whether accurately preserved in the historical record, distorted in their representation, or lost to humankind forever, could engage with the hierarchy in different ways. (The deeds of men could as well, but that falls beyond the scope of this paper.) This is not to say that every act or writing represented a conscious decision to threaten or shore up the gender order — some likely were, others likely not — but for the historian the outcome or impact grows clear with careful study. Of course, this paper does not posit that every source only works toward one end or the other. In some ways a text might undermine a social system, in other ways bolster it. Yet typically there will be a general trend. Uncovering such an overall impact upon the hierarchy allows for a fuller understanding of any given event in the English colonies during the sixteenth through eighteenth centuries. (That is, from the English perspective. This paper considers how the English saw themselves and others, yet the same analysis could be used for other societies, a point we will revisit in the conclusion.)

Let us begin with a source that works to maintain the gender order, Mary Rowlandson’s 1682 The Sovereignty and Goodness of God. Rowlandson was an Englishwoman from Massachusetts held captive for three months by the Narragansett, Nipmuc, and Wompanoag during King Philip’s War (1675-1676). Her text, which became popular in the colonies, carefully downplays the power of Weetamoo, the female Pocassett Wompanoag chief, whose community leadership, possession of vast land and servants, and engagement in diplomacy and war violated Rowlandson’s Puritan understanding of a woman’s proper place in society.[1] As historian Lisa Brooks writes, “Throughout her narrative, Rowlandson never acknowledged that Weetamoo was a leader equal to a sachem [chief], although this was common knowledge in the colonies. Rather, she labored to represent Weetamoo’s authority as a pretension.”[2] In contrast, Rowlandson had no issue writing of “Quanopin, who was a Saggamore​ [chief]” of the Narragansetts, nor of “King Philip,” Metacom, Wompanoag chief.[3] It was appropriate for men to hold power.

That Rowlandson presented Weetamoo’s authority as an act is a plausible interpretation of the former’s lengthy depiction of Weetamoo dressing herself — this “proud Dame” who took “as much time as any of the Gentry of the land.”[4] She was playing dress-up, playing a part, Rowlandson perhaps implied, an idea that grows stronger with the next line: “When she had dressed her self, her work was to make Girdles of Wampom…”[5] The gentry do not work; the powerful do not labor. How can a working woman have authority? Further, Rowlandson ignored the fact that wampum “work” was a key part of tribal diplomacy, attempted to portray her servitude as unto Quinnapin rather than Weetamoo (giving possessions first to him), and later labeled the chief an arrogant, “proud gossip” — meaning, Brooks notes, “in English colonial idiom, a woman who does not adhere to her position as a wife.”[6] Rowlandson likely felt the need, whether consciously or not, to silence discomforting realities of Native American nations. Weetamoo’s power, and a more egalitarian society, threatened the English gender order, and it would thus not do to present dangerous ideas to a wider Puritan audience.

“Likely” is used as a qualifier because it must be remembered that publication of The Sovereignty and Goodness of God had to go through male Puritan authorities like clergyman Increase Mather, who wrote the preface.[7] It remains an open question how much of this defense of the gender hierarchy comes from Rowlandson and how much from the constraints of that hierarchy upon her: under the eyes of Mather and others, a narrative that did not toe the Puritan line would simply go unpublished. But the overall impact is clear. Rowlandson, as presented in the text, held true to the proper role of women — and thus so should readers.

Conflict with native tribes and captivity narratives held a central place in the colonial English psyche. One narrative that did more to threaten the gender order was that of Hannah Dustin’s captivity, as told by religious leader Cotton Mather, first from the pulpit and then in his 1699 Decennium Luctuosum. Unlike his father Increase, Cotton Mather was in a bit of a bind. Dangerous ideas were already on the loose; his sermon and writings would attempt to contain them.[8] Hannah Dustin of Massachusetts was captured by the Abenaki in 1697, during King William’s War. She was placed in servitude to an indigenous family of two men, three women, and seven children.[9] Finding themselves traveling with so few captors, Dustin and two other servants seized hatchets one night and killed the men and most of the women and children.[10] Dustin and the others scalped the ten dead and carried the flesh to Boston, earning fifty pounds, various gifts, and much acclaim. Mather’s representation of Dustin would have to confront and contextualize a seismic disturbance in the social order: women behaving like men, their use of extreme violence.

Mather first turned to the Bible for rationalization, writing that Dustin “took up a Resolution, to imitate the Action of Jael upon Sisera…”[11] In Judges 4, a Kenite woman named Jael hammered a tent peg through the skull of the (male) Canaanite commander Sisera, helping the Israelites defeat the Canaanite army. Mather’s audiences would understand the meaning. Puritan women’s subservient and submissive status was rooted in the Bible, yet there were extreme circumstances where female violence was justified; because it was likewise aimed at the enemies of God, Dustin’s gruesome act could be tolerated. Mather then used place as justification: “[B]eing where she had not her own Life secured by any Law unto her, she thought she was not Forbidden by any Law to take away the Life of the Murderers…”[12] In other words, Dustin was in Native American territory, a lawless space. This follows the long-established colonizer mindset of our civilization versus their wilderness and savagery, but here, interestingly, that condemned wilderness was used as justification for a Puritan’s act.[13] Being unprotected by Puritan laws in enemy lands, Mather wrote, Dustin saw herself as also being free from such laws, making murder permissible. However, the clergyman’s use of “she thought” suggests a hesitation to fully approve of her deed.[14] He nowhere claims what she did was right.

Clearly, Mather attempted to prevent erosion of the gender order through various provisos: a woman murdering others could be agreeable before God only in rare situations; she was outside Puritan civilization and law; and this was only her own view of acceptable behavior. He was also sure to present her husband as a “Courageous” hero who “manfully” saved their children from capture at risk of his own life, as if a reminder of who could normally and properly use violence.[15] Yet Mather could not shield from the public what was already known, acts that threatened the ideology of male superiority and social dominance. The facts remained: a woman got the best of and murdered two “Stout” men.[16] She killed women and children, typically victims of men. She then took their scalps and received a bounty, as a soldier might do. Further, she was praised by men of such status as Colonel Francis Nicholson, governor of Maryland.[17] Mather could not fully approve of Dustin’s actions, but given the acclaim she had garnered, neither could he condemn them. Both his relaying of Dustin’s deed and his tacit acceptance presented a significant deviation from social norms to Puritan communities.

Finally, let us consider the diary of Martha Ballard, written 1785-1812. Ballard, a midwife who delivered over eight hundred infants in Hallowell, Maine, left a daily record of her work, home, and social life. This document subverts the gender order by countering contemporaneous texts that positioned men as the only significant actors in the medical and economic spheres.[18] It is true that this diary was never meant for public consumption, unlike the other texts considered here. However, small acts by ordinary people, witting or not, can undermine social systems even when they are never known to others; if that is true, surely texts, which by their nature can be read, can do the same. Either way, the diary did not remain private: it was read by Ballard’s descendants, historians, and possibly librarians, meaning its impact trickled beyond its creator and into the wider society of nineteenth-century New England.[19]

Written by men, doctors’ papers and merchant ledgers of this period were silent, respectively, on midwives and women’s economic functions in towns like Hallowell, implying absence or non-involvement, whereas Ballard’s diary illuminated their importance.[20] She wrote, for example, on November 18, 1793: “At Capt Meloys. His Lady in Labour. Her women Calld… My patient deliverd at 8 hour 5 minute Evening of a fine daughter. Her attendants Mrss Cleark, Duttum, Sewall, & myself.”[21] This passage, and the diary as a whole, emphasized that it was common for midwives and women to deliver infants safely and skillfully, with no man or doctor present.[22] Further, entries such as “I have been pulling flax,” “Dolly warpt a piece for Mrs Pollard of 39 yards,” and “Dolly warpt & drawd a piece for Check. Laid 45 yds” made clear that women had economic responsibilities that went beyond their own homes, turning flax into cloth (warping is a key step) that could be traded or sold.[23] Women controlled their labor, earning wages: “received 6/ as a reward.”[24] Though Ballard’s text records the everyday tasks of New England women of her social class, and had a limited readership compared to Rowlandson’s or Mather’s writings, it too presented dangerous ideas that might bother a reader wedded to the gender hierarchy: that women could be just as effective as male doctors, and that the agency and labor of women hinted at possibilities of self-sufficiency.

The events in this essay, the captivity of English women during war and the daily activities of many English women during peace, would look different without gender analysis, without considering how the acts of women and representations of events related to the gender order. Rowlandson would simply be ignorant, failing to understand who her actual master was, Weetamoo’s position, and so on. Dustin’s violence would be business as usual, a captive killing to escape, with all of Mather’s rationalizations odd and unnecessary. Ballard’s daily entries would just be minutiae, with no connection to or commentary on the larger society from whence they came. Indeed, that is the necessary project. Examining how the gender hierarchy was defended or confronted provides the proper context for a fuller understanding of events — from an English perspective. A future paper might examine other societies, such as Native American nations, in the same way. Clearly, the acts of indigenous women and the (English) representations of those acts influenced English minds, typically threatening their hierarchy. But how did the acts of indigenous women and men, those of English women and men, and indigenous representations of such things engage with Native American tribes’ unique gender systems? We can find hints in English representations (Weetamoo may have been dismayed Rowlandson violated indigenous gender norms[25]), but for an earnest endeavor, primary sources by native peoples will be necessary, just as English sources enabled this writing.

For more from the author, subscribe and follow or read his books.


[1] Lisa Brooks, Our Beloved Kin: A New History of King Philip’s War (New Haven: Yale University Press, 2018), chapter one.

[2] Ibid., 264.

[3] Mary Rowlandson, The Sovereignty and Goodness of God by Mary Rowlandson with Related Documents, ed. Neal Salisbury (Boston: Bedford Books, 2018), 81.

[4] Ibid., 103.

[5] Ibid.

[6] Brooks, Our Beloved Kin, 264, 270.

[7] Rowlandson, Sovereignty, 28.

  Brooks, Our Beloved Kin, 264.

[8] “The Captivity of Hannah Dustin,” in Mary Rowlandson, The Sovereignty and Goodness of God by Mary Rowlandson with Related Documents, ed. Neal Salisbury (Boston: Bedford Books, 2018), 170-173.

[9] Ibid., 172.

[10] Ibid., 173.

[11] Ibid.

[12] Ibid.

[13] Kirsten Fischer, “The Imperial Gaze: Native American, African American, and Colonial Women in European Eyes,” in A Companion to American Women’s History, ed. Nancy A. Hewitt (Malden, MA: Blackwell Publishing, 2002), 3-19.

[14] “Hannah Dustin,” 173.

[15] Ibid., 171-172.

[16] Ibid., 172.

[17] Ibid., 173.

[18] Laurel Thatcher Ulrich, A Midwife’s Tale: The Life of Martha Ballard, Based on Her Diary, 1785-1812 (New York: Vintage Books, 1999), 28-30.

[19] Ibid., 8-9, 346-352.

[20] Ibid., 28-30.

[21] Ibid., 162-163.

[22] See Ibid., 170-172 for infant mortality data.

[23] Ibid., 36, 73, 29.

[24] Ibid., 162. See also page 168.

[25] Brooks, Our Beloved Kin, 265.

The First American Bestseller: Mary Rowlandson’s 1682 ‘The Sovereignty and Goodness of God’

Historian John R. Gramm characterized Mary Rowlandson, an Englishwoman captured by allied Narragansett, Nipmuc, and Wampanoag warriors during King Philip’s War (1675-1676), as “both a victim and colonizer.”[1] This is correct, and can be observed in what is often labeled the first American bestseller. Rowlandson’s narrative of her experience, The Sovereignty and Goodness of God, is written through these inseparable lenses, a union inherent to the psychology of settler colonialism (to be a colonizer is to be a “victim”) and other power systems. Reading the narrative through both lenses, rather than one, avoids both dehumanization and a colonizer mindset, allowing for a more nuanced study.

Rowlandson as victim appears on the first page, with her town of Lancaster, Massachusetts, attacked by the aforementioned tribes: “Houses were burning,” women and children clubbed to death, a man dying from “split open…Bowels.”[2] On the final page, after she was held against her will for three months, forced to work, and ransomed for twenty pounds, she was still elaborating on the “affliction I had, full measure (I thought) pressed down and running over” — that cup of divinely ordained hardships.[3] Between war, bondage, the loss of her infant, and deprivations such as hunger and cold, Rowlandson was a woman experiencing trauma, swept up in events and horrors beyond her control. “My heart began to fail,” she wrote, signifying her pain, “and I fell a weeping…”[4]

Rowlandson knew she was a victim. She did not know she was a colonizer, at least not in any negatively connoted fashion. Also from opening to close are expressions of racial and moral superiority. Native peoples are “dogs,” “beasts,” “merciless and cruel,” marked by “savageness and brutishness.”[5] She saw “a vast difference between the lovely faces of Christians, and the foul looks of these Heathens,” whose land was unadulterated “wilderness.”[6] Puritan society was civilization; native society was animalistic. That Rowlandson’s views persisted despite her deeper understanding of and integration with Wampanoag society could be read as evidence of especially strong prejudices (though publication of her work may have required toeing the Puritan line). Regardless, her consciousness was thoroughly defined by religion and what historian Kirsten Fischer called the “imperial gaze.”[7] Rowlandson’s town of Lancaster was in the borderlands, meaning more conflict with Native Americans; she was a prosperous minister’s wife, making religion an even more central part of her life than it was for the average Puritan woman. (Compare this to someone like midwife Martha Ballard, whose distance from Native Americans and lower social class built a consciousness around her labor and relationships with other working women.[8]) Not only is the distinction between herself (civilized) and them (beastly) clear in Rowlandson’s mind, so too is the religious difference — though for many European Americans Christianity and civilization were one and the same. The English victims are always described as “Christians,” which positions the native warriors as heathen Others (she of course makes this explicit as well, as noted).

These perspectives, of victim and colonizer, cannot easily be pulled apart. Setting aside Rowlandson’s kidnapping for a moment, settler colonization in some contexts requires a general attitude of victimhood. If “savages” are occupying land you believe God granted to you, as Increase Mather, who wrote Rowlandson’s preface, stated plainly, that is a wrong that can be addressed with violence.[9] Rowlandson is then a victim twofold. First, her Puritan promised land was being occupied by native peoples. Second, she was violently captured and held. To be a colonizer is to be a victim, by having “your” land violated by societies there before you, and by experiencing the counter-violence wrought by your colonization.

To only read Rowlandson’s captivity as victimhood is to simply adopt Rowlandson’s viewpoint, ignoring the fact that she is a foreigner with attitudes of racial and religious superiority who has encroached on land belonging to native societies. To read the captivity only through a colonizer lens, focusing on her troubling presence and views, is to dehumanize Rowlandson and ignore her emotional and physical suffering. When Chief Weetamoo’s infant died, Rowlandson “could not much condole” with the Wampanoags, due to so many “sorrowfull dayes” of her own, including losing her own baby. She saw only the “benefit…more room.”[10] This callousness could be interpreted as a belief that Native Americans did not suffer like full human beings, mental resistance to an acknowledgement that might throw colonialism into question.[11] That is the colonizer lens. Yet from a victim-centered reading, it is difficult to imagine many contexts wherein a kidnapped person would feel much sympathy for those responsible for her captivity and servitude, the deaths of her infant and neighbors, and so on. Victim and colonizer indeed.

For more from the author, subscribe and follow or read his books.


[1] John R. Gramm (lecture, Missouri State University, February 15, 2022).

[2] Mary Rowlandson, The Sovereignty and Goodness of God by Mary Rowlandson with Related Documents, ed. Neal Salisbury (Boston: Bedford Books, 2018), 74.

[3] Ibid., 118.

[4] Ibid., 88.

[5] Ibid., 76-77, 113-114.

[6] Ibid., 100, 76.

[7] Kirsten Fischer, “The Imperial Gaze: Native American, African American, and Colonial Women in European Eyes,” in A Companion to American Women’s History, ed. Nancy A. Hewitt (Malden, MA: Blackwell Publishing, 2002), 3-19.

[8] Laurel Thatcher Ulrich, A Midwife’s Tale: The Life of Martha Ballard, Based on Her Diary, 1785-1812 (New York: Vintage Books, 1999). Ballard and her husband, a working middle-class woman and a tax collector, faced financial hardship and ended up living in “semi-dependence on their son’s land”; see page 265. Compare this to Rowlandson, Sovereignty, 15-16: coming from and marrying into large landowning families, Rowlandson did not need to work to survive. Given her background, her consciousness goes beyond women and work, to larger collective concerns of community, civilization, and faith.

[9] Rowlandson, Sovereignty, 28.

  Lisa Brooks, Our Beloved Kin: A New History of King Philip’s War (New Haven: Yale University Press, 2018), 11.

[10] Rowlandson, Sovereignty, 97.

[11] Brooks, Our Beloved Kin, 282.

Three Thoughts on Democracy

The following are three musings on what might undermine and end American democracy, in the hopes such things can be countered.

Did the Electoral College prime Americans to reject democracy? The current declining trust in democracy and rising support for authoritarianism could perhaps be partly explained by preexisting anti-democratic norms. Supporters of the Electoral College, or those apathetic, were already comfortable with something disturbing: the candidate with fewer votes winning an election. How great a leap is it from there to tolerating (or celebrating) a candidate with fewer votes taking the White House due to some other reason? Trump and his supporters’ attempts to overturn a fair election may not be the best example here, as many of them believed Trump in fact won the most votes and was the proper victor, but one can fill in the blank with a clearer hypothetical. Imagine a violent coup takes place without anyone bothering to pretend an election was stolen; the loser simply uses force to seize power. Would a citizenry long agreeable to someone with fewer votes taking power be more complacent when a coup allows for the same? (Now imagine half the country wanted the coup leader to win the election — and this same half historically favored the Electoral College! Fertile soil for complacency.)

Does a two-party system make authoritarianism inevitable? No matter how terrible a presidential candidate is, he or she is better than the other party’s nominee. That is the mindset, and it helped secure Trump’s 2016 victory — the 62.9 million who voted for him were not all cultish true believers; many just regarded Democrats as the true enemy. Same for the 74.2 million who voted for him in 2020. Trump was a duncical demagogue with authoritarian tendencies who tried to deal a fatal blow to our democracy to stay in power. Future candidates will act in similar fashion. None of that matters in a nation with extreme political polarization. Authoritarians will earn votes, and possibly win, simply because they are not with the other party. The two-party trap could exterminate democracy.

We forget that authoritarians are popular. The Netflix docuseries How to Become a Tyrant offers many important warnings to those who care about preserving democracy. Perhaps its most crucial reminder is that authoritarians are popular. (Another: democracy is usually ended slowly, chipped away at.) Many are elected by majorities; even long after coming to power — with democracy replaced by reigns of terror — strongmen can have broad support, even devotion. This should not be so surprising. As noted above, authoritarianism as an ideology can gain favor, as can candidates and politicians with authoritarian sentiments. (Research suggests the strongest predictor of whether someone is a Trump supporter is whether he or she has authoritarian views. Trump likely understood and used this.) Yet for those raised in free societies, this can be confounding. Could Americans really vote away democracy, could they be so blind? I would never do that. The answer is yes, and the question is: are you sure?

For more from the author, subscribe and follow or read his books.

Two Thoughts on Salem

Christians Blamed Native Americans for Witchcraft

Boston clergyman Cotton Mather saw New Englanders like Mercy Short as particularly vulnerable to attacks by the Devil in the late seventeenth century due to the presence of Christianity on Native American land (or, more in his parlance, land formerly occupied only by the indigenous). Mather’s A Brand Pluck’d Out of the Burning of 1693 opens with two sentences outlining how Mercy Short was captured by “cruel and Bloody Indians” in her youth.[1] They killed her family and held her for ransom, which was eventually paid. This first paragraph may seem out of place, its only apparent purpose being to evoke sympathy: see how much this young woman has suffered. “[S]he had then already Born the Yoke in her youth, Yett God Almighty saw it Good for her to Bear more…”[2]

However, the paragraph serves to establish a tacit connection between indigenous people and the witchcraft plaguing Salem. This is made more explicit later in the text, when Mather writes that someone executed at Salem testified “Indian sagamores” had been present at witch meetings to organize “the methods of ruining New England,” and that Mercy Short, in a possessed state, revealed the same, adding Native Americans at such meetings held a book of “Idolatrous Devotions.”[3] Mather, and others, believed Indigenous peoples were involved in the Devil’s work, so torturous to New Englanders. This was perceived to be a reaction to the Puritan presence. “It was a rousing alarm to the Devil,” Mather wrote in The Wonders of the Invisible World (1692), “when a great company of English Protestants and Puritans came to erect evangelical churches in a corner of the world where he had reigned…”[4] The Devil, displeased that Christianity was now “preached in this howling wilderness,” used native peoples to try to drive the Puritans out, including the sorcery of “Indian Powwows,” religious figures.[5] Because of Christianity’s presence in the “New World,” people like Mercy Short were far more at risk of diabolical terror — Mather thought “there never was a poor plantation more pursued by the wrath of the Devil…”[6]

The Accusers Parroted Each Other and No One Noticed

During the Salem witch trials of 1692, hysteria spread and convictions were secured due in part to near-verbatim repetition among the accusers. It seems likely that, rather than arousing suspicion, the fact that New Englanders accusing their neighbors of witchcraft used precisely the same phrasing was viewed as evidence of truth-telling. Elizabeth Hubbard, testifying against a native woman named Tituba, reported: “I saw the apparition of Tituba Indian, which did immediately most grievously torment me…”[7] This occurred until “the day of her examination, being March 1, and then also at the beginning of her examination, but as soon as she began to confess she left off hurting me and has hurt me but little since.”[8] This is nearly identical to testimony given the same day by Ann Putnam, Jr. She said, “I saw the apparition of Tituba, Mr. Parris’s Indian woman, which did torture me most grievously…till March 1, being the day of her examination, and then also most grievously also at the beginning of her examination, but since she confessed she has hurt me but little.”[9] Though premeditation is in the realm of the possible (in other words, Putnam and Hubbard aligning their stories beforehand), this could be the result of spontaneous mimicking, whether conscious or subconscious, in a courtroom that was rather open (the second testifier copied the first because she was present to hear it).

This was a pattern in the trials that strengthened the believability of witchcraft tales. At the trial of Dorcas Hoar, accusers testified that “I verily believe in my heart that Dorcas Hoar is a witch” (Sarah Bibber), “I verily believe that Dorcas Hoar, the prisoner at the bar, is a witch” (Elizabeth Hubbard), “I verily believe in my heart that Dorcas Hoar is a witch” (Ann Putnam, Jr.), and “I verily believe in my heart that Dorcas Hoar is a most dreadful witch” (Mary Walcott).[10] Like the statements on Tituba, these occurred on the same day — a self-generating script that spelled destruction for the accused.

For more from the author, subscribe and follow or read his books.


[1] Cotton Mather, A Brand Pluck’d Out of the Burning, in George Lincoln Burr, Narratives of the New England Witch Trials (Mineola, New York: Dover Publications, 2012), 259.

[2] Ibid.

[3] Ibid., 281-282.

[4] Cotton Mather, The Wonders of the Invisible World, in Richard Godbeer, The Salem Witch Hunt: A Brief History with Documents (New York: Bedford/St. Martin’s, 2018), 49.

[5] Ibid.

[6] Ibid.

[7] “Elizabeth Hubbard against Tituba,” in Richard Godbeer, The Salem Witch Hunt: A Brief History with Documents (New York: Bedford/St. Martin’s, 2018), 92.

[8] Ibid.

[9] Ibid., 93.

[10] “Sarah Bibber against Dorcas Hoar,” “Elizabeth Hubbard against Dorcas Hoar,” “Ann Putnam Jr. against Dorcas Hoar,” and “Mary Walcott against Dorcas Hoar,” in Richard Godbeer, The Salem Witch Hunt: A Brief History with Documents (New York: Bedford/St. Martin’s, 2018), 121-122.

Wars Must Be Declared and Led by the World, Not Single Nations Like the U.S.

The psychologist Steven Pinker, in Rationality, writes that “none of us, thinking alone, is rational enough to consistently come to sound conclusions: rationality emerges from a community of reasoners who spot each other’s fallacies.” This could be applied to governments contemplating war. Americans increasingly understand that the United States often engages in violence not for noble purposes like protecting innocents, democracy, and freedom, but rather to protect and grow its economic and global power. Other countries have similar histories. In sum this has cost scores of millions of lives. An important step to ending war (and indeed nations) is to lift its declaration and execution from the national to the international level. With war exclusively in the hands of the international community, the wrongful motives of individual States can be mitigated. It is a little-known fact that the U.S. has already agreed to this.

We can pause here for a few caveats. First, war must be the absolute last resort in any crisis, due to its horrific predictable and unpredictable consequences, its unavoidable traps. It often is not the last resort for individual governments — nor will it always be so for the international community, but the collective reasoning and clash of skepticism and enthusiasm from multiple parties may reduce the foolhardy rush to violence so common in human political history. Diplomacy and nonviolent punitive actions can be more fully explored. Second, this idea relates both to reactions to wars of aggression launched by single States and to observed atrocities within them. If one nation invades another, the decision to repulse the invader must be made by a vote of all the nations in the world, with all those in favor committing forces to an international army. The same goes if genocide is proven, among other scenarios. Third, none of this prohibits the last legitimate instance of unilateral violence: national defense against an invading power.

The argument is that the era of the United States as the world’s policeman must end — the world can be the world’s policeman. This writer has long voiced opposition to war and to nations, advocating for a united, one-country Earth (and is in good company: as documented in Why America Needs Socialism, Gandhi, Einstein, Orwell, Dr. King, and many other giants of history supported this idea). Talk of just war and how nations must approach it should not be misconstrued as enthusiastic support for these things; rather, as stated, vesting the power to wage war solely in the international community is a move down the long road to global peace and unity (with one day an equilibrium perhaps being reached wherein no actor risks facing the wrath of the rest of the world). It is far preferable to a rogue superpower invading and bombing whoever it pleases.

The United Nations, of course, needs structural changes to make this possible. The small Security Council can authorize use of force, but its five permanent members (the U.S., the U.K., France, China, and Russia) have veto power, meaning a single country can forbid military action. The decision to use violence must pass to the General Assembly, where a majority vote can decide, similar to how resolutions are passed now. A united army already exists, with 70,000-100,000 UN troops currently serving, gathered from national forces and commanded by generals from all over the globe. Like those of any individual country, such as the U.S., UN military ventures have seen defeats alongside great successes. UN forces must be strengthened as their role broadens. Finally, UN member countries must actually abide by the treaty they signed to no longer engage in “the threat or use of force against the territorial integrity or political independence of any state” (UN Charter Article 2). This was the entire point of founding the United Nations after World War II. The U.S. signs binding treaties (the U.S. Constitution, in Article 6, makes any treaty we sign with foreign powers the “supreme law of the land”), promising to forsake unilateral action (such as the UN Charter) or torture (such as the UN Convention Against Torture), then ignores them. That is why U.S. actions such as the invasion of Iraq, whether looked at from the viewpoint of U.S. or international law, are accurately labeled illegal. Under a new paradigm, the U.S. and all member States would have to accept that should the General Assembly vote against war, there will be no war — and accept consequences for illegal actions that undermine this vote.

As with a one-nation world, there will be much screaming about this now, but in the future, whether in a hundred years or 1,000, it could easily be taken for granted. The nationalist American mindset says, “If we see evil in the world we’re going in! We won’t get anyone’s permission. We won’t sacrifice our sovereignty or decision-making. America, fuck yeah!” Cooler heads may one day recognize that their own nation can commit evils, from unjust wars to crimes against humanity, making a community of reasoners an important check and balance. If violence is truly right and justified, most of the world will recognize that. New voices may also question why one country should carry (in patriotic theory at least) the brunt of the cost in blood and treasure to make the world safe for democracy and freedom, as is occasionally the case with U.S. military action. Why not have the world collectively bear that burden, if the world is to benefit?

For more from the author, subscribe and follow or read his books.

The 1939 Map That Redlined Kansas City — Do You Want to See It?

In 1933, the Home Owners’ Loan Corporation was created as part of the New Deal to help rescue lenders and homeowners from the Great Depression. Homeowners were out of work, facing foreclosure and eviction; banks were receiving no mortgage payments and in crisis. The HOLC offered relief by buying loans, with government funds, from the latter and refinancing them for the former. It also set about creating maps of 200 U.S. cities that lenders could use to make “safe” loans rather than risky ones.

Risky areas, marked in yellow or red, were those of both lower-value homes and darker-skinned residents, the “undesirables” and “subversives” and “lower-grade” people. This entrenched segregation and the racial wealth disparity, with blacks and other minorities having a difficult time getting home loans, ownership being a key to intergenerational wealth. The Federal Housing Administration also used the HOLC map when it backed mortgages to encourage lending (if a resident couldn’t make the payments, the FHA would step in and help — as long as you were the right sort of person in the right part of town; see Racism in Kansas City: A Short History).

Kansas City’s map was completed April 1, 1939. You can see that the areas along Troost (easiest to find by looking at the left edge of the grey Forest Hill Cemetery) are yellow, with red portions east and north of that, where blacks at this time were most heavily concentrated. The yellow shade actually extends, in some places, west of Troost to streets like Rockhill. Each section can be clicked on for a description (D24: “Negro encroachment threatened from north”; D21: “It is occupied by a low grade of low income laborers, chiefly Mexicans, some negroes”). The use of this map by lenders, real estate agents, developers, governments, and more would solidify the Troost wall and Jim Crow repression, and impact Kansas City into the next century.

For more from the author, subscribe and follow or read his books.

Five Ways to Raise MSU’s Profile by 2025

We have three years. In 2025, Missouri State University will celebrate twenty years since our name change. We’ve bolstered attendance, built and renovated campus-wide, and grown more competitive in sports, resulting in a fast-climbing reputation and wider brand awareness.

Let’s keep it going. Here are five strategies to go from fast-climbing to skyrocketing before the historic celebration.

1) Sponsor “Matt & Abby” on social media. Matt and Abby Howard, MSU grads, have over 3 million followers on TikTok, over 1 million subscribers on YouTube, and nearly 800,000 followers on Instagram. Their fun videos occasionally provide free advertising, as they wear MO State shirts and hoodies, but a sponsorship to increase and focus this (imagine them doing BearWear Fridays) would be beneficial. Their views are now collectively in the billions.

2) Offer Terrell Owens a role at a football game. Legendary NFL receiver Terrell Owens (who has a sizable social media presence of his own) appeared on the MSU sideline during the 2021 season, as his son Terique is a Bears wide receiver. Invite Terrell Owens to join the cheer squad and lead the chants at a game. Or ask him to speak at halftime. Advertise it widely to boost attendance and get the story picked up by the national press.

3) Convince John Goodman to get on social media. Beloved actor and MSU alumnus John Goodman is now involved in university fundraising and related media — that’s huge. (Say, get him a role at a game, too.) The only thing that could make this better is if he would get on socials. Goodman would have millions of followers in a day, and with that comes exposure for MO State. Who knows what it would take to convince him after all these years avoiding it, but someone at this university has his ear…and should try.

4) Keep going after that Mizzou game. Mizzou men’s basketball coach Cuonzo Martin, as the former coach at MSU, is our best bet in the foreseeable future for the first MSU-Mizzou showdown since the Bears’ 1998 victory. In fact, a deal was in the works in summer 2020, but quickly fell apart. Martin’s contract ends in 2024 — if it is not renewed, scheduling a game will become much more difficult. Today MO State plays Mizzou in nearly all sports, even if football is irregular (last in 2017, next in 2033). We should keep fighting for a men’s basketball game. Then, of course, win it.

5) Build and beautify. From the John Goodman Amphitheatre to the renovation of Temple Hall, the campus is growing, dazzling. This should continue, for instance with the proposed facility on the south side of Plaster Stadium. Improving football facilities ups the odds of a future invite to an FBS conference. And one cannot forget more trees, possibly the most inexpensive way to radically beautify a university. Filling campus with more greenery, with more new and restored buildings, will position Missouri State as a destination campus for the next 20 years and beyond.

This article first appeared on Yahoo! and the Springfield News-Leader.

For more from the author, subscribe and follow or read his books.

Slowly Abandoning Online Communication and Texting

I grow increasingly suspicious of speaking to others digitally, at least in written form — comments, DMs, texts. It has in fact been 1.5 years since I last replied to a comment on socials, and in that time I have attempted to reduce texting and similar private exchanges. Imagine that, a writer who doesn’t believe in written communication.

The motives for these life changes were largely outlined in Designing a New Social Media Platform:

As everyone has likely noticed, we don’t speak to each other online the way we do in person. We’re generally nastier due to the Online Disinhibition Effect; the normal inhibitions, social cues, and consequences that keep us civil and empathetic in person largely don’t exist. We don’t see each other the same way, because we cannot see each other. Studies show that, compared to verbal communication, we tend to denigrate and dehumanize other people when reading their written disagreements, seeing them as less capable of feeling and reason, which can increase political polarization. We can’t hear tone or see facial expressions, the eyes most important of all, creating fertile ground for both unkindness and misunderstandings. In public discussions, we also tend to put on a show for spectators, perhaps sacrificing kindness for a dunk that will garner likes. So let’s get rid of all that, and force people to talk face-to-face.

Circling back to these points is important because they obviously apply not only to social media but to texting, email, dating apps, and many other features of modern civilization. We all know how easy it is for a light disagreement to somehow turn into something terribly ugly when texting a friend, partner, or family member. It happens so fast we’re bewildered, or angered that things spiraled out of control, that we were so inexplicably unpleasant. It needn’t be this way. Some modes of communication are difficult to curb — if your job involves email, for instance — but it’s helpful to seek balance. You don’t have to forsake a tool completely if you don’t want to, just use it differently, adopt principles. A good rule: at the first hint of disagreement or conflict, stop. (Sometimes we even know it’s coming, and can act preemptively.) Stop texting or emailing about whatever it is. Ask to Facetime or Zoom, or meet in person, or call (at least you can hear them). Look into their eyes, listen to their voice. There are things that are said via text and on socials that would simply never be said in person or using more intimate technologies.

Progress will be different for each person. Some would rather talk than text anyway, and excising the latter from their lives would be simple. Others may actually be able to email less and cover more during meetings. Some enviable souls have detached themselves from social media altogether — which I hope to do at some point, but have found a balance or middle ground for now, since it’s important to me to share my writings, change the way people think, draw attention to political news and actions, and keep track of what local organizations and activists are up to (plus, my job requires social media use).

Changing these behaviors is key to protecting and saving human relationships, and maybe even society itself. First, if there’s an obvious way to avoid firestorms with friends and loved ones, keeping our bonds strong rather than frayed, we should take it. Second, the contribution of social media to political polarization, hatred, and misinformation since 2005 (maybe of the internet since the 1990s) is immeasurable, with tangible impacts on violence and threats to democracy. Society tearing itself apart due at least partially to this new technology sounds less hyperbolic by the day.

And it’s troubling to think that I, with all good intentions, am still contributing to that by posting, online advocacy perhaps having a negative impact on the world alongside an important positive one. What difference does it really make, after all, to share an opinion but not speak to anyone about it? Wouldn’t a social media platform where everyone shared their opinions but did not converse with others, ignored the comments, be just as harmful to society as a platform where we posted opinions and also went to war in the comments section? Perhaps so. The difference may be negligible. But in a year and a half, I have not engaged in any online debate or squabble, avoiding heated emotions toward individuals and bringing about a degree of personal peace (I have instead had political discussions in person, where it’s all more pleasant and productive). If I could advocate for progressivism or secularism while avoiding heightened emotions toward individual pious conservatives, whether friends or random strangers, they could do the same, posting and opining while sidestepping heightened emotions toward me. This doesn’t solve the divisiveness of social media — the awful beliefs and posts from the other side (whichever that is for you) are still there. Plenty of harmful aspects still exist beside the positive ones that keep us on. But perhaps it lowers the temperature a little.

For more from the author, subscribe and follow or read his books.

Free Speech on Campus Under Socialism

Socialism seeks to make power social, to enrich the lives of ordinary people with democracy and ownership. Just as the workers should own their workplaces and citizens should have decision-making power over law and policy, universities under socialism would operate a bit differently. The states will not own public universities, nor individuals and investors private ones. Such institutions will be owned and managed by the professors, groundskeepers, and other workers. There is a compelling case for at least some student control as well, especially when it comes to free speech controversies.

Broadening student power in university decision-making more closely resembles a consumer cooperative than a worker cooperative, described above and analyzed elsewhere. A consumer cooperative is owned and controlled by those who use it, patrons, rather than workers. This writer’s vision of socialism, laid bare in articles and books, has always centered the worker, and it is not a fully comfortable thought to allow students, merely passing through a college for two, four, or six years and greatly outnumbering the workers, free rein over policy. There is a disconnect here between workers and majority rule, quite unlike in worker cooperatives (I have always been a bit suspicious of consumer co-ops for this reason). However, it is likely that a system of checks and balances (so important in a socialist direct democracy) could be devised. Giving students more power over their place of higher learning is a positive thing (think of the crucial student movements against college investments in fossil fuels today), as this sacred place is for them, but this would have to be balanced with the power of the faculty and staff, who like any other workers deserve control over their workplace. Such a system, or specialized areas of authority granted to students, may be a sensible compromise. This to an extent already exists, with college students voting to raise their fees to fund desired facilities, and so on.

One specialized area could be free speech policy. Socialism may be a delightful solution to ideological clashes and crises. I have written on the free speech battles on campuses, such as in Woke Cancel Culture Through the Lens of Reason. There I opined only in the context of modern society (“Here’s what I think we should do while stuck in the capitalist system”). The remarks in full read:

One hardly envies the position college administrators find themselves in, pulled between the idea that a true place of learning should include diverse and dissenting opinions, the desire to punish and prevent hate speech or awful behaviors, the interest in responding to student demands, and the knowledge that the loudest, best organized demands are at times themselves minority opinions, not representative.

Private universities are like private businesses, in that there’s no real argument against them cancelling as they please.

But public universities, owned by the states, have a special responsibility to protect a wide range of opinion, from faculty, students, guest speakers, and more, as I’ve written elsewhere. As much as this writer loves seeing the power of student organizing and protest, and the capitulation to that power by decision-makers at the top, public colleges should take a harder line in many cases to defend views or actions that are deemed offensive, in order to keep these spaces open to ideological diversity and not drive away students who could very much benefit from being in an environment with people of different classes, ethnicities, genders, sexual orientations, religions, and politics. Similar to the above, that is a sensible general principle. There will of course be circumstances where words and deeds should be crushed, cancellation swift and terrible. Where that line is, again, is a matter of disagreement. But the principle is simply that public colleges should save firings, censorship, cancellation, suspension, and expulsion for more extreme cases than is current practice. The same for other public entities and public workplaces. Such spaces are linked to the government, which actually does bring the First Amendment and other free speech rights into the conversation, and therefore there exists a special onus to allow broader ranges of views.

But under socialism, the conversation changes. Imagine for a moment that college worker-owners gave students the power to determine the fate of free speech controversies, student bodies voting on whether to allow a speaker, fire a professor, kick out a student, and so forth. This doesn’t solve every dilemma and complexity involved in such decisions, but it has a couple of benefits. First, you don’t have a small power body making decisions for everyone else, an administration enraging one faction (“They caved to the woke Leftist mob”; “They’re tolerating dangerous bigots”). Second, the decision has majority support from the student body; the power of the extremes, the perhaps non-representative voices, is diminished. Two forms of minority rule are done away with (this is what socialism aims to do, after all), and the decision has more legitimacy, with inherent popular support. More conservative student bases will make different decisions than more liberal ones, but that is comparable to today’s different-leaning administrations in thousands of colleges across the United States.

Unlike in the excerpt above, which refers to the current societal setup, private and public colleges alike will operate like this — these classifications in fact lose their meanings, as both are owned by the workers and become the same kind of entity. A university’s relationship to free speech laws, which aren’t going anywhere in a socialist society, then needs to be determined. Divorced from ownership by states, institutions of higher learning could fall outside free speech laws, like other cooperatives (where private employers and colleges largely fall today). But, to better defend diverse views, worthwhile interactions, and a deeper education, let’s envision a socialist nation that applies First Amendment protections to all universities (whether that preserved onus should be extended to all cooperatives can be debated another time).

When a university fires a professor today for some controversial comment, it might land in legal trouble, sued for violating First Amendment rights and perhaps forced to pay damages. Legal protection of rights is a given in a decent society. Under socialism, can you sue a student body (or former student body, as these things take a while)? Or just those who voted to kick you out? Surely not, as ballots are secret and you cannot punish those who were for you alongside those against you. Instead, would this important check still be directed against the university? This would place worker-owners in a terrible position: how can decision-making over free speech cases be given to the student body if it’s the worker-owners who will face the lawsuits later? One mustn’t punish the innocent and let the guilty walk. These issues may speak to the importance of worker-owners reserving full power, minority power, to decide free speech cases on campus. Yet if punishment in the future moves beyond money, there may be hope yet for the idea of student power. It may not be fair for a university to pay damages because of what a student body ruled, but worker-owners could perhaps stomach a court-ordered public apology on behalf of student voters, mandated reinstatement of a professor or student or speaker, etc.

With free speech battles, someone has to make the final call. Will X be tolerated? As socialism is built, as punishment changes, it may be worth asking: “Why not the students?”

For more from the author, subscribe and follow or read his books.

The Future of American Politics

The following are five predictions about the future of U.S. politics. Some are short-term, others long-term; some are possible, others probable.

One-term presidents. In a time of extreme political polarization and razor-thin electoral victories, we may have to get used to the White House changing hands every four years rather than eight. In 2016, Trump won Michigan by 13,000 votes, Wisconsin by 27,000, Pennsylvania by 68,000, Arizona by 91,000. Biden won those same states in 2020 by 154,000, 21,000, 82,000, and 10,000, respectively. Other states were close as well, such as Biden’s +13,000 in Georgia or Clinton’s +2,700 in New Hampshire. Competitive races are nothing new in election history, and 13 presidents (including Trump) have failed to reach a second term directly after their first, but Trump’s defeat was the first incumbent loss in nearly 30 years. The bitter divisions and conspiratorial hysteria of modern times may make swing state races closer than ever, resulting in fewer two-term presidents — at least consecutive ones — in the near-term.

Mail privacy under rightwing attack. When abortion was illegal in the United States, there were many abortions. If Roe falls and states outlaw the procedure, or if the Supreme Court continues to allow restrictions that essentially do the same, we will again see many illegal terminations — only they will be far safer and easier this time, with abortion pills via mail. Even if your state bans the purchase, sale, or use of the pill, mail forwarding services or help from out-of-town friends (shipping the pills to a pro-choice state and then having them mailed to you) will easily get the pills to your home. Is mail privacy a future rightwing target? The U.S. has a history of banning the mailing of contraceptives, information on abortion, pornography, lottery tickets, and more, enforced through surveillance, requiring the Supreme Court to declare our mail cannot be opened without a warrant. It is possible the Right will attempt to categorize abortion pills as items illegal to ship and even push for the return of warrantless searches.

Further demagoguery, authoritarianism, and lunacy. Trump’s success is already inspiring others, some worse than he is, to run for elected office. His party looks the other way or enthusiastically embraces his deceitful attempts to overturn fair elections because it is most interested in power, reason and democracy be damned. Same for Trump’s demagoguery, his other lies and authoritarian tendencies, his extreme policies, his awful personal behavior — his base loves it all and it’s all terribly useful to the GOP. While Trump’s loss at the polls in 2020 may cause some to second-guess the wisdom of supporting such a lunatic, at least those not among the 40% of citizens who still believe the election was stolen, at present it seems the conservative base and the Republican Party are largely ready for Round 2. What the people want and the party tolerates they will get; what’s favored and encouraged will be perpetuated and created anew. It’s now difficult to imagine a normal human being, a classic Republican, a decent person like Mitt Romney, Liz Cheney, Jon Huntsman, John Kasich, or even Marco Rubio beating an extremist fool at the primary polls. The madness will likely continue for some time, both with Trump and others who come later, with only temporary respites of normalcy between monsters. Meanwhile, weaknesses in the political and legal system Trump exploited will no doubt remain unfixed for an exceptionally long time.

Republicans fight for their lives / A downward spiral against democracy. In a perverse sort of way, Republican cheating may be a good sign. Gerrymandering, voter suppression in all its forms, support for overturning a fair election, desperation to hold on to the Electoral College, and ignoring ballot initiatives passed by voters are the acts and sentiments of the fearful, those who no longer believe they can win honestly. And given the demographic changes already occurring in the U.S. that will transform the nation in the next 50-60 years (see next section), they’re increasingly correct. Republicans have an ever-growing incentive to cheat. Unfortunately, this means the Democrats do as well. Democrats may be better at putting democracy and fairness ahead of power interests, but this wall already has severe cracks, and one wonders how long it will hold. For example, the GOP refused to allow Obama to place a justice on the Supreme Court, and many Democrats dreamed of doing the same to Trump, plus expanding the Court during the Biden era. Democrats of course also gerrymander U.S. House and state legislature districts to their own advantage (the Princeton Gerrymandering Project is a good resource), even if Republican gerrymandering is worse (four times worse) — therefore reaping bigger advantages. It’s sometimes challenging to parse out which Democratic moves are reactions to Republican tactics and which they would do anyway to protect their seats, but it’s obvious that any step away from impartiality and true democracy encourages the other party to do the same, creating a downward anti-democratic spiral, a race to the bottom.

(One argument might be addressed before moving on. Democrats generally make it easier for people to vote and support the elimination of the Electoral College, though again liberals are not angels and there are exceptions to both these statements. Aren’t those dirty tactics that serve their interests? As I wrote in The Enduring Stupidity of the Electoral College, which shows that this old anti-democratic system is unfair to each individual voter, “True, the popular vote may serve Democratic interests. Fairness serves Democratic interests. But, unlike unfairness, which Republicans seek to preserve, fairness is what’s right. Giving the candidate with the most votes the presidency is what’s right.” Same for not making it difficult for people who usually vote the “wrong” way to cast their ballots! You do what is right and fair, regardless of who it helps.)

Democratic dominance. In the long-term, Democrats will become the dominant party through demographics alone. Voters under 30 favored the Democratic presidential candidate by large margins in 2004, 2008, 2012, 2016, and 2020 — voters under 40 also went blue by a comfortable margin. Given that individual political views mostly remain stable over time (the idea that most or even many young people will grow more conservative as they age is unsupported by research), in 50 or 60 years this will be a rather different country. Today we still have voters (and politicians) in their 80s and 90s who were segregationists during Jim Crow. In five or six decades, those over 40 today (who lean Republican) will be gone, leaving a bloc of older voters who have leaned blue their entire lives, plus a new generation of younger and middle-aged voters likely more liberal than any of us today. This is on top of an increasingly diverse country, with people of color likely the majority in the 2040s — with the white population already declining by total numbers and as a share of the overall population, Republican strength will weaken further (the majority of whites have long voted Republican; the majority of people of color vote blue). A final point: the percentage of Americans who identify as liberal is steadily increasing, as opposed to those who identify as conservative, and Democrats have already won the popular vote in seven of the last eight presidential elections. Republican life rafts such as the Electoral College (whose swing states will experience these same changes) and other anti-democratic practices will grow hopelessly ineffective under the crushing weight of demographic metamorphosis. Assuming our democracy survives, the GOP will be forced to moderate to have a chance at competing.

For more from the author, subscribe and follow or read his books.

Actually, “Seeing Is Believing”

Don’t try to find “seeing isn’t believing, believing is seeing” in the bible, for though Christians at times use these precise words to encourage devotion, they come from an elf in the 1994 film The Santa Clause, an instructive fact. It is a biblical theme, however, with Christ telling the doubting Thomas, “Because you have seen me, you have believed; blessed are those who have not seen and yet have believed” (John 20:29), 2 Corinthians 5:7 proclaiming “We walk by faith, not by sight,” and more.

The theme falls under the first of two contradictory definitions of faith used by the religious. Faith 1 is essentially “I cannot prove this, I don’t have evidence for it, but I believe nonetheless.” Many believers profess this with pride — that’s true faith, pure faith, believing what cannot be verified. This is just the abandonment of critical thinking, turning off the lights. Other believers see the problem with it. A belief can’t be justified under Faith 1. Without proof, evidence, and reason, they realize, their faith is on the baseless, ridiculous level of every other wild human idea — believing in Zeus without verification, Allah without verification, Santa without verification. Faith 2 is the corrective: “I believe because of this evidence, let me show you.” The “evidence,” “proof,” and “logic” then offered are terrible and fall apart at once, but that has been discussed elsewhere. “Seeing isn’t believing, believing is seeing” aligns with the first definition, while Faith 2 would more agree with the title of this article (though room is always left for revelation as well).

I was once asked what would make me believe in God again, and I think about this from time to time. I attempt to stay both intellectually fair and deeply curious. Being a six on the Dawkins scale, I have long maintained that deities remain in the realm of the possible, in the same way our being in a computer simulation is possible, yet given the lack of evidence there is little reason to take it seriously at this time, as with a simulation. For me, the last, singular reason to wonder whether God or gods are real is the fact existence exists — but supposing higher powers were responsible for existence brings obvious problems of its own that are so large they preclude religious belief. Grounds for believing in God again would have to come from elsewhere.

“Believing is seeing” won’t do. It’s just a hearty cry for confirmation bias and self-delusion (plus, as a former Christian, I have already tried it). Feeling God working in your life, hearing his whispers, the tugs on your heart, dreams and visions, your answered prayers, miracles…these things, experienced by followers of all religions and insane cults, even by myself long ago, could easily be imagined fictions, no matter how much you “know” they’re not, no matter how amazing the coincidences, dramatic the life changes, vivid the dreams, unexplainable the events (of current experience anyway; see below).

In contrast, “seeing is believing” is rational, but one must be careful here, too. It’s a trillion times more sensible to withhold belief in extraordinary claims until you see extraordinary evidence than to believe wild things before verifying, maybe just hoping some proof, revelation, comes along later. The latter is just gullibility, taking off the thinking cap, believing in Allah, Jesus, or Santa because someone told you to. However, for me, “seeing is believing” can’t just mean believing the dreadful “evidence” of apologetics referenced above, nor could it mean the god of a religion foreign to me appearing in a vision, confounding or suggestive coincidences and “miracles,” or other personal experiences that do not in any way require supernatural explanations. That’s not adequate seeing.

It would have to be a personal experience of greater magnitude. Experiencing the events of Revelation might do it — as interpreted by Tim LaHaye and Jerry B. Jenkins in their popular (and enjoyable, peaking with Assassins) book series of the late 90s and early 2000s, billions of Christians vanish, the seas turn to blood, people survive a nuclear bombing unscathed, Jesus and an army of angels arrive on the clouds, and so forth. These kinds of personal experiences would seem less likely to be delusions (though they still could be, if one is living in a simulation, insane, etc.), and would be a better basis for faith than things that have obvious or possible natural explanations, especially if they were accurately prophesied. In other words, at some stage personal experience does become a rational basis for belief; human beings simply tend to adopt a threshold that is outrageously low, far below anything that would actually necessitate supernatural involvement. (It’s remarkable where life takes you: from “I’m glad I won’t have to go through the tribulation, as a believer” to “The tribulation would be reasonable grounds to become a believer again.”) Of course, I suspect this is all mythological and have no worry it will occur. How concerned is the Christian over Kalki punishing evildoers before the world expires and restarts (Hinduism) or the Spider Woman covering the land with her webs before the end (Hopi)? I will convert to one of these faiths if their apocalyptic prophecies come to pass.

The reaction of the pious is to say, “But others saw huge signs like that, Jesus walked on water and rose from the dead and it was all prophesied and –” No. That’s the challenge of religion. Stories of what other people saw can easily be made up, often to match prophecy. Even a loved one relating a tale could have been tricked, hallucinating, delusional, lying. You can only trust the experiences you have, and even those you can’t fully trust! This is because you could be suffering from something similar — human senses and perceptions are known to miserably fail and mislead. The only (possible) solution is to go big. Really big. Years of predicted, apocalyptic disasters that you personally survive. You still might not be seeing clearly. But belief in a faith might finally be justified on rational, evidentiary grounds, in alignment with your perceptions. “Seeing is believing,” with proper parameters.

Anything short of this is merely “believing is seeing” — elf babble.

For more from the author, subscribe and follow or read his books.

History, Theory, and Ethics

The writing of history and the theories that guide it, argues historian Lynn Hunt in Writing History in the Global Era, urgently need “reinvigoration.”[1] The old meta-narratives used to explain historical change looked progressively weaker and fell under heavier criticism as the twentieth century reached its conclusion and gave way to the twenty-first.[2] Globalization, Hunt writes, can serve as a new paradigm. Her work offers a valuable overview of historical theories and develops an important new one, but this paper will argue Hunt implicitly undervalues older paradigms and fails to offer a comprehensive purpose for history under her theory. This essay then proposes some guardrails for history’s continuing development, not offering a new paradigm but rather a framing that gives older theories their due and a purpose that can power many different theories going forward.

We begin by reviewing Hunt’s main ideas. Hunt argues for “bottom-up” globalization as a meta-narrative for historical study, and contributes to this paradigm by offering a rationale for causality and change that places the concepts of “self” and “society” at its center. One of the most important points that Writing History in the Global Era makes is that globalization has varying meanings, with top-down and bottom-up definitions. Top-down globalization is “a process that transforms every part of the globe, creating a world system,” whereas the bottom-up view is myriad processes wherein “diverse places become connected and interdependent.”[3] In other words, while globalization is often considered synonymous with Europe’s encroachment on the rest of the world, from a broader and, as Hunt sees it, better perspective, globalization would in fact be exemplified by increased interactions and interdependence between India and China, for example.[4] The exploration and subjugation of the Americas was globalization, but so was the spread of Islam from the Middle East across North Africa to Spain. It is not simply the spread of more advanced technology or capitalism or what is considered to be, in eurocentrism, the most enlightened culture and value system, either: it is a reciprocal, “two-way relationship” that can be found anywhere as human populations move, meet, and start to rely on each other, through trade for example.[5] Hunt seeks to overcome two problems here. First, the eurocentric top-down approach and its “defects”; second, the lack of a “coherent alternative,” which her work seeks to provide.[6]

Hunt rightly and persuasively makes the case for a bottom-up perspective of globalization as opposed to top-down, then turns to the question of why this paradigm has explanatory power. What is it about bottom-up globalization, the increasing interactions and interdependence of human beings, that brings about historical change? Here Hunt situates her historical lens alongside, and as a successor to, previous ones, explored early in the work. Marxism, modernization, and the Annales School offered theories of causality. Cultural and political change was brought about by new modes of economic production, the growth of technology and the State, or by geography and climate, respectively.[7] The paradigm of identity politics, Hunt notes, at times lacked such a clear “overarching narrative,” but implied that inclusion of The Other, minority or oppressed groups, in the national narrative was key to achieving genuine democracy (which falls more under purpose, to be explored later).[8] Cultural theories rejected the idea, inherent in older paradigms, that culture was produced by economic or social relations; culture was a force unto itself, composed of language, semiotics, and discourse, which determined what an individual thought to be true and how one behaved.[9] “Culture shaped class and politics rather than the other way around” — meaning culture brought about historical change (though many cultural theorists preferred not to focus on causation, perhaps similar to those engaged in identity politics).[10] Bottom-up globalization, Hunt posits, is useful as a modern explanatory schema for the historical field. It brings about changes in the self (in fact, in the brain) and in society, which spur cultural and political transformations.[11] There is explanatory power in increased connections between societies. For instance, she suggests that drugs and stimulants like coffee, brought into Europe through globalization, produced selves that sought pleasure and thrill (i.e. altered the neurochemistry of the brain) and changed society by creating central gathering places, coffeehouses, where political issues could be intensely discussed. These developments may have pushed places like France toward democratic and revolutionary action.[12] For Hunt, it is not enough to say culture alone directs the thinkable and human action, nor is the mind simply a social construction — the biology of the brain and how it reacts and operates must be taken into account.[13] The field must move on from cultural theories.

Globalization, a useful lens through which to view history, joins a long list, only partially outlined above. Beyond economics, advancing technology and government bureaucracy, geography and environment, subjugated groups, and culture, there is political, elite, or even “Great Men” history; social history, the story of ordinary people; the history of ideas, things, and diseases and non-human species; microhistory, biography, a close look at events and individuals; and more.[14] Various ways of looking at history, some of which are true theories that include causes of change, together construct a more complete view of the past. They are all valuable. As historian Sarah Maza writes, “History writing does not get better and better but shifts and changes in response to the needs and curiosities of the present day. Innovations and new perspectives keep the study of the past fresh and interesting, but that does not mean we should jettison certain areas or approaches as old-fashioned or irrelevant.”[15] This is a crucial reminder. New paradigms can reinvigorate, but historians must be cautious of seeing them as signals that preceding paradigms are dead and buried.

Hunt’s work flirts with this mistake, though perhaps unintentionally. Obviously, some paradigms grow less popular, while others, particularly new ones, see surges in adherents. Writing History in the Global Era outlines the “rise and fall” of theories over time, the changing popularities and new ways of thinking that brought them about.[16] One implication in Hunt’s language, though such phrasing is utilized from the viewpoint of historical time or those critical of older theories, is that certain paradigms are indeed dead or of little use — “validity” and “credibility” are “questioned” or “lost,” “limitations” and “disappointments” discovered, theories “undermined” and “weakened” by “gravediggers” before they “fall,” and so forth.[17] Again, these are not necessarily Hunt’s views, rather descriptors of changing trends and critiques, but Hunt’s work offers no nod to how older paradigms are still useful today, itself implying that different ways of writing history are now irrelevant. With prior theories worth less, a new one, globalization, is needed. Hunt’s work could have benefited from more resistance to this implication, with a serious look at how geography and climate, or changing modes of economic production, remain valuable lenses historians use to chart change and find truth — an openness to the full spectrum of approaches, for they all work cooperatively to reveal the past, despite their unique limitations. Above, Maza mentioned “certain areas” of history in addition to “approaches,” and continued: “As Lynn Hunt has pointed out, no field of history [such as ancient Rome] should be cast aside just because it is no longer ‘hot’…”[18] Hunt should have acknowledged and demonstrated that the precise same is true of approaches to history.

Another area that deserves more attention is purpose. In the same way that not all historical approaches emphasize causality and change, not all emphasize purpose. Identity politics had a clear use: the inclusion of subjugated groups in history helped move nations toward political equality.[19] With other approaches, however, “What is it good for?” is more difficult to answer. This is to ask what utility a theory had for contemporary individuals and societies (and has for modern ones), beyond a more complete understanding of yesteryear or fostering new research. It may be more challenging to see a clear purpose in the study of how the elements of the Annales School’s longue durée, such as geography and climate, change human development. How was such a lens utilized as a tool, if in fact it was, in the heyday of the Annales School? How could it be utilized today? (Perhaps it could be useful in mobilizing action against climate change.) The purpose of history — of each historical paradigm — is not always obvious.

Indeed, Hunt’s paradigm “offers a new purpose for history: understanding our place in an increasingly interconnected world,” a rather vague suggestion that sees little elaboration.[20] What does it mean to understand our place? Is this a recycling of “one cannot understand the present without understanding the past,” a mere truism? Or is it to say that a bottom-up globalization paradigm can be utilized to demonstrate the connection between all human beings, breaking down nationalism or even national borders? After all, the theory moves away from eurocentrism and the focus on single nations. Perhaps it is something else; one cannot know for certain. Of course, Hunt may have wanted to leave this question to others, developing the tool and letting others determine how to wield it. However, hesitation on Hunt’s part to more deeply and explicitly explore purpose, to adequately show how her theory is useful to the present, may stem from a simple desire to avoid the controversy of politics. This would be disappointing to those who believe history is inherently political or anchored to ethics, but either reason is out of step with Hunt’s introduction. History, Hunt writes on her opening page, is “in crisis” due to the “nagging question that has proved so hard to answer…‘What is it good for?’”[21] In the nineteenth and twentieth centuries, she writes, the answer shifted from developing strong male leaders to building national identity and patriotism to contributing to the social movements of subjugated groups by unburying histories of oppression.[22] All of these purposes are political. Hunt deserves credit for constructing a new paradigm, with factors of causality and much fodder for future research, but to open the work by declaring a crisis of purposelessness, framing purposes as political, and then not offering a fully developed purpose through a political lens (or through another lens, explaining why purpose need not be political) is an oversight.

Based on these criticisms, we have a clear direction for the field of history. First, historians should reject any implication of a linear progression of historical meta-narratives, which this paper argues Hunt failed to do. “Old-fashioned” paradigms in fact have great value today, which must be noted and explored. A future work on the state of history might entirely reframe, or at least dramatically add to, the discussion of theory. Hunt tracked the historical development of theories and their critics, with all the ups and downs of popularity. This is important epistemologically, but emphasizes the failures of theories rather than their contributions, and presents them as stepping stones to be left behind on the journey to find something better. Marxism had a “blindness to culture” and had to be left by the wayside; its replacement had this or that limitation and was itself replaced; and so on.[23] Hunt writes globalization will not “hold forever” either.[24] A future work might instead, even if it included a brief, similar tracking, focus on how each paradigm added to our understanding of history, continued to do so, and how it does so today. As an example of the second task (showing how a paradigm continued to contribute), Anthony Reid’s 1988 Southeast Asia in the Age of Commerce, 1450-1680 was written very much in the tradition of the Annales School, with a focus on geography, resources, climate, and demography, but it would be lost in a structure like Hunt’s, crowded out by the popularity of cultural studies in the last decades of the twentieth century.[25] Simply put, the historian must break away from the idea that paradigms are replaced. They are replaced in popularity, but not in importance to the mission of more fully understanding the past. As Hunt writes, “Paradigms are problematic because by their nature they focus on only part of the picture,” which highlights the necessity of the entire paradigmatic spectrum, as does her putting globalization theory into practice, suggesting that coffee from abroad spurred revolutionary movements in eighteenth-century Europe, sidelining countless other factors.[26] Every paradigm helps us see more of the picture. It would be a shame if globalization were downplayed as implicitly irrelevant only a couple decades from now, if still a useful analytical lens. Paradigms are not stepping stones; they are columns holding up the house of history — more can be added as we go.

The aforementioned theoretical book on the field would also explore purpose, hypothesizing that history cannot be separated from ethics, and therefore from politics. Sarah Maza wrote in the final pages of Thinking About History:

Why study history? The simplest response is that history answers questions that other disciplines cannot. Why, for instance, are African-Americans in the United States today so shockingly disadvantaged in every possible respect, from income to education, health, life expectancy, and rates of incarceration, when the last vestiges of formal discrimination were done away with half a century ago? Unless one subscribes to racist beliefs, the only way to answer that question is historically, via the long and painful narrative that goes from transportation and slavery to today via Reconstruction, Jim Crow laws, and an accumulation, over decades, of inequities in urban policies, electoral access, and the judicial system.[27]

This is correct, and goes far beyond the purpose of answering questions. History is framed as the counter, even the antidote, to racist beliefs. If one is not looking to history for such answers, there is nowhere left to go but biology, racial inferiority, to beliefs deemed awful. History therefore informs ethical thinking; its utility is to help us become more ethical creatures, as (subjectively) defined by our society — and the self. This purpose is usually implied but rarely explicitly stated, and a discussion on the future of history should explore it. Now, one could argue that Maza’s dichotomy is simply steering us toward truth, away from incorrect ideas rather than unethical ones. But that does not work in all contexts. When we read Michel Foucault’s Discipline and Punish, he is not demonstrating that modes of discipline are incorrect — and one is hardly confused as to whether he sees them as bad things, these “formulas of domination” and “constant coercion.”[28] J.R. McNeill, at the end of Mosquito Empires: Ecology and War in the Greater Caribbean, 1620-1914, writes that yellow fever’s “career as a governing factor in human history, mercifully, has come to a close” while warning of a lapse in vaccination and mosquito control programs that could aid viruses that “still lurk in the biosphere.”[29] The English working class, wrote E.P. Thompson, faced “harsher and less personal” workplaces, “exploitation,” “unfreedom.”[30] The implications are clear: societies without disciplines, without exploitation, with careful mosquito control would be better societies. For human beings, unearthing and reading history cannot help but create value judgements, and it is a small step from the determination of what is right to the decision to pursue it, political action. It would be difficult, after all, to justify ignoring that which was deemed ethically right.

Indeed, not only do historians implicitly suggest better paths and condemn immoral ones, the notion that history helps human beings make more ethical choices is already fundamental to how many lay people read history — what is the cliché of being doomed to repeat the unlearned past about if not avoiding tragedies and terrors deemed wrong by present individuals and society collectively? As tired and disputed as the expression is, there is truth to it. Studying how would-be authoritarians often use minority groups as scapegoats for serious economic and social problems to reach elected office in democratic systems creates pathways for modern resistance, making the unthinkable thinkable, changing characterizations of what is right or wrong, changing behavior. Globalization may alter the self and society, but the field of history itself, to a degree, does the same. This could be grounds for a new, rather self-congratulatory paradigm, but the purpose, informing ethical and thus political decision-making, can guide many different theories, from Marxism to globalization. As noted, prior purposes of history were political: forming strong leaders, creating a national narrative, challenging a national narrative. A new political purpose would be standard practice. One might argue moving away from political purposes is a positive step, but it must be noted that the field seems to move away from purpose altogether when it does so. Is purpose inherently political? This future text would make the case that it is. A purpose cannot be posited without a self-evident perceived good. Strong leaders are good, for instance — and therefore should be part of the social and political landscape.

In conclusion, Hunt’s implicit dismissal of older theories and her incomplete purpose for history deserve correction, and correcting them pushes the field forward in significant ways. For example, using the full spectrum of paradigms helps us work on (never solve) history’s causes-of-causes ad infinitum problem. Changing modes of production may have caused change x, but what caused the changing modes of production? What causes globalization in the first place? Paradigms can interrelate, helping answer the thorny questions of other paradigms (perhaps modernization or globalization theory could help explain changing modes of production, before requiring their own explanations). How giving history a full purpose advances the field is obvious: it sparks new interest, new ways of thinking, new conversations, new utilizations, new theories, while, like the sciences, offering the potential — but not the guarantee — of improving the human condition.

For more from the author, subscribe and follow or read his books.


[1] Lynn Hunt, Writing History in the Global Era (New York: W.W. Norton & Company, 2014), 1.

[2] Ibid, 26, 35-43.

[3] Ibid, 59. See also 60-71.

[4] Ibid, 70.

[5] Ibid.

[6] Ibid, 77.

[7] Ibid, 14-17.

[8] Ibid, 18.

[9] Ibid, 18-27.

[10] Ibid, 27, 77.

[11] Ibid, chapters 3 and 4.

[12] Ibid, 135-141.

[13] Ibid, 101-118.

[14] Sarah Maza, Thinking About History (Chicago: University of Chicago Press, 2017).

[15] Maza, Thinking, 236.

[16] Hunt, Writing History, chapter 1.

[17] Ibid, 8-9, 18, 26-27, chapter 1.

[18] Maza, Thinking, 236.

[19] Hunt, Writing History, 18.

[20] Ibid, 10.

[21] Ibid, 1.

[22] Ibid, 1-7.

[23] Ibid, 8.

[24] Ibid, 40.

[25] Anthony Reid, Southeast Asia in the Age of Commerce, 1450-1680, vol. 1, The Lands Below the Winds (New Haven: Yale University Press, 1988).

[26] Hunt, Writing History, 121, 135-140.

[27] Maza, Thinking, 237.

[28] Michel Foucault, Discipline and Punish (New York: Vintage Books, 1995), 137.

[29] J.R. McNeill, Mosquito Empires: Ecology and War in the Greater Caribbean, 1620-1914 (New York: Cambridge University Press, 2010), 314.

[30] E.P. Thompson, The Essential E.P. Thompson (New York: The New Press, 2001), 17. 

Is It Possible For Missouri State to Grow Larger Than Mizzou?

Students and alumni of Missouri State (and perhaps some of the University of Missouri) at times wonder if MSU will ever become the largest university in the state. While past trends are never a perfect predictor of the future, looking at the enrollment patterns of each institution can help offer an answer. Here are the total Fall enrollment figures since 2005.

Mizzou
Via its Student Body Profile reports and enrollment summary (Columbia campus):

2005 – 27,985
2006 – 28,253
2007 – 28,477
2008 – 30,200
2009 – 31,314
2010 – 32,415
2011 – 33,805
2012 – 34,748
2013 – 34,658
2014 – 35,441
2015 – 35,448
2016 – 33,266
2017 – 30,870
2018 – 29,866
2019 – 30,046
2020 – 31,103
2021 – 31,412

Missouri State
Via its enrollment history report (Springfield campus):

2005 – 19,165
2006 – 19,464
2007 – 19,705
2008 – 19,925
2009 – 20,842
2010 – 20,949
2011 – 20,802
2012 – 21,059
2013 – 21,798
2014 – 22,385
2015 – 22,834
2016 – 24,116
2017 – 24,350
2018 – 24,390
2019 – 24,126
2020 – 24,163
2021 – 23,618

In the past 16 years, MSU gained on average 278.3 new students each Fall. Mizzou gained 214.2 new students per year, an average tanked by the September 2015 racism controversy. Before the controversy (2005-2015 data), Mizzou gained 746.3 new students per year (MSU, over the same ten years, +366.9). From a low point in 2018, Mizzou has since, over a three-year period, gained on average 515.3 new students per year (over the same time, MSU lost an average of 257.3 students per year — one school’s gain is often the other’s loss). This is too short a timeframe to draw unquestionable conclusions, but with Mizzou back on its feet it seems likely to continue to acquire more students on average each year, making MSU’s ascension to the top unlikely.
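For readers who want to check the math, here is a minimal Python sketch, using only the Fall headcounts listed above, that reproduces each of these averages as total change divided by the number of year-to-year steps:

# Fall headcounts, 2005-2021, from the enrollment lists above
mizzou = [27985, 28253, 28477, 30200, 31314, 32415, 33805, 34748, 34658,
          35441, 35448, 33266, 30870, 29866, 30046, 31103, 31412]
msu = [19165, 19464, 19705, 19925, 20842, 20949, 20802, 21059, 21798,
       22385, 22834, 24116, 24350, 24390, 24126, 24163, 23618]

def avg_growth(series):
    # Average change per year: total change divided by the number of year-to-year steps
    return (series[-1] - series[0]) / (len(series) - 1)

print(round(avg_growth(mizzou), 1))       # 214.2 (2005-2021)
print(round(avg_growth(msu), 1))          # 278.3 (2005-2021)
print(round(avg_growth(mizzou[:11]), 1))  # 746.3 (2005-2015, pre-controversy)
print(round(avg_growth(msu[:11]), 1))     # 366.9 (2005-2015)
print(round(avg_growth(mizzou[13:]), 1))  # 515.3 (2018-2021)
print(round(avg_growth(msu[13:]), 1))     # -257.3 (2018-2021)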

Predicting future enrollment patterns is rather difficult, of course. Over the past decade, fewer Americans have attended university, including fewer Missourians — and that was before COVID. Like a pandemic or a controversy, some disruptors cannot be predicted, nor can boosts to student populations. But most challenges will be faced by both schools: fewer young people, better economic times (which draw folks to the working world), pandemics, etc. The rising cost of college may give a university that is slightly more affordable an edge, as has been Missouri State’s long-time strategy. An increased profile through growing name recognition (it’s only been 16 years since Missouri State’s name change), success in sports, clever marketing schemes (alumnus John Goodman is now involved with MSU), ending Mizzou’s near-monopoly on doctoral degrees, and so on could make a difference, but there remains a huge advantage to simply being an older school, with a head-start in enrollment and brand recognition.

For more from the author, subscribe and follow or read his books.

COVID Showed Americans Don’t Leech Off Unemployment Checks

In most states, during normal times, you can use unemployment insurance for at most 26 weeks, half the year, and will receive 30-50% of the wages from your previous job, up to a certain income. This means $200-400 a week on average. One must meet a list of requirements to qualify, for instance having been laid off due to cutbacks rather than fired through any fault of one’s own. Only 35-40% of unemployed persons receive UI.
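As a rough, hypothetical illustration of how those parameters combine (the wage and the state cap below are invented, chosen only to land inside the ranges just described; real state formulas vary), here is a minimal Python sketch:

# Hypothetical UI benefit calculation; actual state formulas and caps differ
previous_weekly_wage = 800      # assumed pre-layoff wage (hypothetical)
replacement_rate = 0.40         # within the typical 30-50% range
state_weekly_cap = 450          # invented cap; each state sets its own maximum
max_weeks = 26                  # the usual limit in most states

weekly_benefit = min(previous_weekly_wage * replacement_rate, state_weekly_cap)
print(weekly_benefit)              # 320.0, inside the $200-400/week average noted above
print(weekly_benefit * max_weeks)  # 8320.0, the most this hypothetical claimant could draw in a year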

Those figures mean that at any given time, about 2 million Americans are receiving UI; in April/May 2020, with COVID-19 and State measures to prevent its spread causing mass layoffs, that number skyrocketed to 22 million. Put another way, just 1-3% of the workforce is usually using UI, and during the pandemic spike it was about 16%. Just before that rise, it was at 1.5% — and it returned to that rate in November 2021, just a year and a half later. Indeed, the number of recipients fell as fast as it shot up, from 16% to under 8% in just four months (September 2020), down to 4% in six months (November 2020). As much pearl-clutching as there was among conservatives (at least those who did not use UI) over increased dependency, especially with the temporary $600 federal boost to UI payments, tens of millions of Americans did not leech off the system. They got off early, even though emergency measures allowed them to stay on the entire year of 2020 and into the first three months of 2021! (The trend was straight down, by the way, even before the $600 boost ended.)

This in fact reflects what we’ve always known about unemployment insurance. It’s used as intended, as a temporary aid to those in financial trouble (though many low-wage workers don’t have access to it, which must be corrected). Look at the past 10 years of UI use. The average stay in the program (“duration”) each year was 17 or 18 weeks in times of economic recovery, 14 or 15 weeks in better economic times (sometimes even fewer). Four months or so, then a recipient stops filing for benefits, having found a job or ameliorated his or her crisis in some fashion. Some “enjoy” the 30-50% of previous wages for the whole stretch, but the average recipient doesn’t even use UI for 20 weeks, let alone the full 26 allowed. This makes sense, given how much of a pay cut UI is. Again, many Americans stop early, and the rest are cut off — so why all the screaming about leeching? Only during the COVID crisis did the average duration climb higher, to 26-27 weeks, as the federal government offered months of additional aid, as mentioned — again, many did not receive benefits for as long as they could have.

Those that receive benefits will not necessarily do the same next year. In times of moderate unemployment, for example, about 30% of displaced workers and 50% of workers on temporary layoff who receive benefits in Year 1 will reapply for benefits in Year 2. The rest do not refile.

However, we must be nuanced thinkers. Multiple things can be true at the same time. UI can also extend unemployment periods, which makes a great deal of sense even if UI benefits represent a drastic pay cut. UI gives workers some flexibility to be more selective in the job hunt. An accountant who has lost her position may, with some money coming in and keeping a savings account afloat, be able to undertake a longer search for another accounting job, rather than being forced to take the first thing she can find, such as a waitressing job. This extra time is important, because finding a similar-wage job means you can keep your house or current apartment, won’t fall further into poverty, etc. There are many factors behind the current shortage of workers, and UI seems to be having a small effect (indeed, studies range between no effect and moderate effects). And of course, in a big, complex world there will be some souls who avoid work as long as they can, and others who commit fraud (during COVID, vast sums were siphoned from our UI by individuals and organized crime rings alike, in the U.S. and from around the globe; any human being with internet access can attempt a scam). But that’s not most Americans. While UI allows workers to be more selective, prolonging an unemployed term a bit, they nevertheless generally stop filing for benefits early and avoid going back.

To summarize, for the conservatives in the back. The U.S. labor force is 161 million people. A tiny fraction is being aided by UI at any given moment. Those who are generally don’t stay the entire time they could. Those who do use 26 weeks of benefits will be denied further aid for the year (though extended benefits are sometimes possible in states with rising unemployment). Most recipients don’t refile the next year. True, lengths of unemployment may be increased somewhat, and there will always be some Americans who take advantage of systems like this, but most people would prefer not to, instead wanting what all deserve — a good job, with a living wage.

For more from the author, subscribe and follow or read his books.

Comparative Power

The practice of reconstructing the past, with all its difficulties and incompleteness, is aided by comparative study. Historians, anthropologists, sociologists, and other researchers can learn a great deal about their favored society and culture by looking at others. This paper makes that basic point, but, more significantly, makes a distinction between the effectiveness of drawing meaning from cultural similarity/difference and doing the same from one’s own constructed cultural analogy, while acknowledging both are valuable methods. In other words, it is argued here that the historian who documents similarities and differences between societies stands on firmer methodological ground for drawing conclusions about human cultures than does the historian who is forced to fill in gaps in a given historical record by studying other societies in close geographic and temporal proximity. Also at a disadvantage is the historian working comparatively with gaps in early documentation that are filled only by later documentation. This paper is a comparison of comparative methods — an important exercise, because such methods are often wielded due to a dearth of evidence in the archives. The historian should understand the strengths and limitations of various approaches (here reciprocal comparison, historical analogy, and historiographic comparison) to this problem.

To begin, a look at reciprocal comparison and the meaning derived from such an effort, specifically from likenesses or distinctions. Historian Robert Darnton found meaning in differences in The Great Cat Massacre: And Other Episodes in French Cultural History. What knowledge, Darnton wondered in his opening chapter, could we gain of eighteenth-century French culture by looking at peasant folk tales and contrasting them to versions found in other places in Europe? Whereas similarities might point to shared cultural traits or norms, differences would isolate the particular mentalités of French peasants, how they viewed the world and what occupied their thoughts, in the historical tradition of the Annales School.[1] So while the English version of Tom Thumb was rather “genial,” with helpful fairies, attention to costume, and a titular character engaging in pranks, in the French version the Tom Thumb character, Poucet, was forced to survive in a “harsh, peasant world” against “bandits, wolves, and the village priest by using his wits.”[2] In a tale of a doctor cheating Death, the German version saw Death immediately kill the doctor; with a French twist, the doctor got away with his treachery for some time, becoming prosperous and living to old age — cheating paid off.[3] Indeed, French tales focused heavily on survival in a bleak and brutal world, and on this world’s particularities. Characters with magical wishes asked for food and full bellies; they got rid of children who did not work, put up with cruel step-mothers, and encountered many beggars on the road.[4] Most folk tales mix fictional elements like ogres and magic with socio-economic realities from the place and time they are told, and therefore the above themes reflect the ordinary lives of French peasants: hunger, poverty, the early deaths of biological mothers, begging, and so on.[5] In comparing French versions with those of the Italians, English, and Germans, Darnton noticed unique fixations in French peasant tales and then contrasted these obsessions with the findings of social historians on the material conditions of peasant life, bringing these things together to find meaning, to create a compelling case for what members of the eighteenth-century French lower class thought about day to day and their attitudes towards society.

Now, compare Darnton’s work to ethno-historian Helen Rountree’s “Powhatan Indian Women: The People Captain John Smith Barely Saw.” Rountree uses ethnographic analogy, among other tools, to reconstruct the daily lives of Powhatan women in the first years of the seventeenth century. Given that interested English colonizers had limited access to Powhatan women and a “cloudy lens” of patriarchal eurocentrism through which they observed native societies, and given that the Powhatans left few records themselves, Rountree uses the evidence of daily life in nearby Eastern Woodland tribes to describe the likely experiences of Powhatan women.[6] For example: “Powhatan women, like other Woodland Indian women, probably nurse their babies for well over a year after birth, so it would make sense to keep baby and food source together” by bringing infants into the fields with them as the women work.[7] Elsewhere “probably” is dropped for more confident takes: “Powhatan men and women, like those in other Eastern Woodland tribes, would have valued each other as economic partners…”[8] A lack of direct archival knowledge of Powhatan society and sentiments is shored up through archival knowledge of other native peoples living in roughly the same time and region. The meaning Rountree derives from ethnographic analogy, alongside other techniques and evidence, is that the English were wrong, looking through their cloudy lens, to believe Powhatan women suffered drudgery and domination under Powhatan men. Rather, women experienced a great deal of autonomy, as well as fellowship and variety, in their work, and were considered co-equal partners with men in the economic functioning of the village.[9]  

Both Darnton and Rountree admit their methods have challenges where evidence is concerned. Darnton writes that his examination of folktales is “distressingly imprecise in its deployment of evidence,” the evidence is “vague,” because the tales were written down much later — exactly how they were orally transmitted at the relevant time cannot be known.[10] In other words, what if the aspect of a story one marks as characteristic of the French peasant mentalité was not actually in the verbal telling of the tale? It is a threat to the legitimacy of the project. Rountree is careful to use “probably” and “likely” with most of her analogies; the “technique is a valid basis for making inferences if used carefully” (emphasis added), and one must watch out for the imperfections in the records of other tribes.[11] For what if historical understanding of another Eastern Woodland tribe is incorrect, and the falsity is copied over to the narrative of the Powhatan people? Rountree and Darnton acknowledge the limitations of their methods even while firmly believing they are valuable for reconstructing the past. This paper does not dispute that — however, it would be odd if all comparative methods were created equal.

Despite its challenges, reciprocal comparison rests on safer methodological ground, for it at least boasts two actually existing elements to contrast. For instance, Darnton has in his possession folktales from France and from Germany, dug up in the archives, and with them he can notice differences and thus derive meaning about how French peasants viewed the world. Such meaning may be incorrect, but is less likely to be so with support from research on the material conditions of those who might be telling the tales, as mentioned. Rountree, on the other hand, wields a tool that works with but one existing element. Historical, cultural, or ethnographic analogy takes what is known about other peoples and applies it to a specific group suffering from a gap in the historical record. This gap, a lack of direct evidence, is filled with an assumption — which may simply be wrong, without support from other research, of the kind Darnton enjoys, to help out (to have such research would make analogy unnecessary). Obviously, an incorrect assumption threatens to derail derived meaning. If the work of Powhatan women differed in a significant way from other Eastern Woodland tribes, unseen and undiscovered and even silenced by analogy, the case of Powhatan economic equality could weaken. Again, this is not to deny the method’s value, only to note the danger that it carries compared to reciprocal comparison. Paradoxically, the inference that Powhatan society resembled other tribes nearby seems as probable and reasonable as it is bold and risky.

Michel-Rolph Trouillot, in Silencing the Past: Power and the Production of History, similarly found meaning in absence when examining whether Henri Christophe, monarch of Haiti after its successful revolution against the French from 1791 to 1804, was influenced by Frederick the Great of Prussia when Christophe named his new Milot palace “Sans Souci.” Was the palace named after Frederick’s own in Potsdam, or after Colonel Sans Souci, a revolutionary rival Christophe killed? Trouillot studied the historical record and found that opportunities for early observers to mention a Potsdam-Milot connection were suspiciously ignored.[12] For example, Austro-German geographer Karl Ritter, a contemporary of Christophe, repeatedly described his palace as “European” but failed to mention it was inspired by Frederick’s.[13] British consul Charles Mackenzie, “who visited and described Sans Souci less than ten years after Christophe’s death, does not connect the two palaces.”[14] Why was a fact that was such a given for later writers not mentioned early on if it was true?[15] These archival gaps of course co-exist with Trouillot’s positive evidence (“Christophe built Sans Souci, the palace, a few yards away from — if not exactly — where he killed Sans Souci, the man”[16]), but are used to build a case that Christophe had Colonel Sans Souci in mind when naming his palace, a detail that evidences an overall erasure of the colonel from history.[17] By contrasting the early historical record with the later one, Trouillot finds truth and silencing.

This historiographic comparison is different from Rountree’s historical analogy. Rountree fills in epistemological gaps about Powhatan women with the traits of nearby, similar cultures; Trouillot judges the gaps in early reports about Haiti’s Sans Souci palace to suggest later writers were in error and participating in historical silencing (he, like Darnton, is working with two existing elements and weighs the differences). Like Rountree’s, Trouillot’s method is useful and important: the historian should always seek the earliest writings from relevant sources to develop an argument, and if surprising absences exist there is cause to be suspicious that later works created falsities. However, this method too flirts with assumption. It assumes the unwritten is also the unthought, which is not always the case. It may be odd or unlikely that Mackenzie or Ritter would leave Potsdam unmentioned if they believed in its influence, but not impossible or unthinkable. It further assumes a representative sample size — Trouillot is working with very few early documents. Would the discovery of more affect his thesis? As we see with Trouillot and Rountree, and as one might expect, a dearth in the archives forces assumptions.

While Trouillot’s conclusion is probable, he is nevertheless at greater risk of refutation than Darnton or, say, historian Kenneth Pomeranz, who also engaged in reciprocal comparison when he put China beside Europe during the centuries before 1800. Unlike the opening chapter of The Great Cat Massacre, The Great Divergence finds meaning in similarities as well as differences. Pomeranz seeks to understand why Europe experienced an Industrial Revolution instead of China, and must sort through many posited causal factors. For instance, did legal and institutional structures more favorable to capitalist development give Europe an edge, contributing to greater productivity and efficiency?[18] Finding similar regulatory mechanisms like interest rates and property rights, and a larger “world of surprising resemblances” before 1750, Pomeranz argued for other differences: Europe’s access to New World resources and trade, as well as to coal.[19] This indicates that Europe’s industrialization occurred not due to the superior intentions, wisdom, or industriousness of Europeans but rather due to unforeseen, fortunate happenings, or “conjunctures” that “often worked to Western Europe’s advantage, but not necessarily because Europeans created or imposed them.”[20] Reciprocal comparison can thus break down eurocentric perspectives by looking at a broader range of historical evidence. No assumptions need be made (rather, assumptions, such as those about superior industriousness, can be excised). As obvious as it is to write, a wealth of archival evidence, rather than a lack, makes for safer methodological footing, as does working with two existing evidentiary elements, no risky suppositions necessary.

A future paper might muse further on the relationship between analogy and silencing, alluded to earlier — if Trouillot is correct and a fact-based narrative is built on silences, how much more problematic is the narrative based partly on analogy?[21] As for this work, in sum, the historian must use some caution with historical analogy, historiographic comparison, and other tools that have an empty space on one side of the equation. These methods are hugely important and often present theses of high probability. But they are by nature put at risk by archival gaps; reciprocal comparison has more power in its derived meanings and claims about other cultures of the past — by its own archival nature.

For more from the author, subscribe and follow or read his books.


[1] Anna Green and Kathleen Troup, eds., The Houses of History: A Critical Reader in Twentieth-Century History and Theory, 2nd ed. (Manchester: Manchester University Press, 2016), 111.

[2] Robert Darnton, The Great Cat Massacre: And Other Episodes in French Cultural History (New York: Basic Books, 1984), 42.

[3] Ibid, 47-48.

[4] Ibid, 29-38.

[5] Ibid, 23-29.

[6] Helen C. Rountree, “Powhatan Indian Women: The People Captain John Smith Barely Saw,” Ethnohistory 45, no. 1 (winter 1998): 1-2.

[7] Ibid, 4.

[8] Ibid, 21.

[9] Ibid, 22.

[10] Darnton, Cat Massacre, 261.

[11] Rountree, “Powhatan,” 2.

[12] Michel-Rolph Trouillot, Silencing the Past: Power and the Production of History (Boston: Beacon Press, 1995), 61-65.

[13] Ibid, 63-64.

[14] Ibid, 62.

[15] Ibid, 64.

[16] Ibid, 65.

[17] Ibid, chapters 1 and 2.

[18] Kenneth Pomeranz, The Great Divergence: China, Europe, and the Making of the Modern World Economy (Princeton: Princeton University Press, 2000), chapters 3 and 4.

[19] Ibid, 29, 279-283.

[20] Ibid, 4.

[21] Trouillot, Silencing, 26-27.

Will Capitalism Lead to the One-Country World?

In Why America Needs Socialism, I offered a long list of ways the brutalities and absurdities of capitalism necessitate a better system, one of greater democracy, worker ownership, and universal State services. The work also explored the importance of internationalism, moving away from nationalistic ideas (the simpleminded worship of one’s country) and toward an embrace of all peoples — a world with one large nation. Yet these ideas could have been more deeply connected. The need for internationalism was largely framed as a response to war, which, as shown, can be driven by capitalism but of course existed before it and thus independently of it. The necessity of a global nation was only briefly linked to global inequality, disastrous climate change, and other problems. In other words, one could predict that the brutalities and absurdities of international capitalism, such as the dreadful activities of transnational corporations, will push humanity toward increased global political integration.

As a recent example of a (small) step toward political integration, look at the 2021 agreement of 136 nations to set a minimum corporate tax rate of 15% and tax multinational companies where they operate, not just where they are headquartered. This historic moment was a response to corporations avoiding taxes via havens in low-tax countries, moving headquarters, and other schemes. Or look to the 2015 Paris climate accords that set a collective goal of limiting planetary warming to 1.5-2 degrees Celsius, a response to the environmental damage wrought by human industry since the Industrial Revolution. There is a recognition that a small number of enormous companies threaten the health of all people. Since the mid-twentieth century, many international treaties have focused on the environment and labor rights (for example, outlawing forced labor and child labor, which were always highly beneficial and profitable for capitalists). The alignment of nations’ laws is a remarkable step toward unity. Apart from war and nuclear weapons, apart from the global inequality stemming from geography (such as an unlucky lack of resources) or history (such as imperialism), the effects and nature of modern capitalism alone scream for the urgency of internationalism. Capital can move about the globe, businesses seeking places with weaker environmental regulations, minimum wages, and safety standards, spreading monopolies, avoiding taxes, poisoning the biosphere, with an interconnected global economy falling like a house of cards during economic crises. The movement of capital and the interconnectivity of the world necessitate further, deeper forms of international cooperation.

Perhaps, whether in one hundred years or a thousand, humanity will realize that the challenges of multi-country accords — goals missed or ignored, legislatures refusing to ratify treaties, and so on — would be mitigated by a unified political body. A single human nation could address tax avoidance, climate change, and so on far more effectively and efficiently.

On the other hand, global capitalism may lead to a one-nation world in a far more direct way. Rather than the interests of capitalists spurring nations to work together to confront said interests, it may be that nations integrate to serve certain interests of global capitalism, to achieve unprecedented economic growth. The increasing integration of Europe and other regions provides some insight. The formation of the European Union’s common market eliminated tariffs and customs barriers between member countries, and established a free flow of capital, goods, services, and workers, generating around €1 trillion in economic benefit annually. The EU market is the most integrated in the world, alongside the Caribbean Single Market and Economy, both earning sixes out of seven on the scale of economic integration, one step from merging entirely. Other common markets exist as well, being fives on the scale, uniting national economies in Eurasia, Central America, the Arabian Gulf, and South America; many more have been proposed. There is much capitalists enjoy after single market creation: trade increases, production costs fall, investment spikes, profits rise. Total economic and political unification may be, again, more effective and efficient still. Moving away from nations and toward worldwide cohesion could be astronomically beneficial to capitalism. Will the push toward a one-nation world come from the need to rein in capital, to serve capital, or both?

For more from the author, subscribe and follow or read his books.

When The Beatles Sang About Killing Women

Move over, Johnny Cash and “Cocaine Blues.” Sure, “Early one mornin’ while making the rounds / I took a shot of cocaine and I shot my woman down… Shot her down because she made me slow / I thought I was her daddy but she had five more” are often the first lyrics one thinks of when considering the violent end of the toxic masculinity spectrum in white people music. (Is this not something you ponder? Confront more white folk who somehow only see these things in black music, and you’ll get there.) But The Beatles took things to just as dark a place.

Enter “Run For Your Life” from their 1965 album Rubber Soul, a song as catchy as it is chilling: “You better run for your life if you can, little girl / Hide your head in the sand, little girl / Catch you with another man / That’s the end.” Jesus. It’s jarring, the cuddly “All You Need Is Love” boy band singing “Well, I’d rather see you dead, little girl / Than to be with another man” and “Let this be a sermon / I mean everything I’ve said / Baby, I’m determined / And I’d rather see you dead.” But jealous male violence in fact showed up in other Beatles songs as well, and in the real world, with the self-admitted abusive acts and attitudes of John Lennon, later regretted but no less horrific for it.

This awfulness ensured The Beatles would be viewed by many in posterity as a contradiction, with the proto-feminist themes and ideas of the 1960s taking root in their music alongside possessive, murderous sexism. That is, if these things are noticed at all.

For more from the author, subscribe and follow or read his books.

With Afghanistan, Biden Was in the ‘Nation-building Trap.’ And He Did Well.

You’ve done it. You have bombed, invaded, and occupied an oppressive State into a constitutional democracy, human rights and all. Now there is only one thing left to do: attempt to leave — and hope you are not snared in the nation-building trap.

Biden suffered much criticism over the chaotic events in Afghanistan in August 2021, such as the masses of fleeing Afghans crowding the airport in Kabul and clinging to U.S. military planes, the American citizens left behind, and more, all as the country fell to the Taliban. Yet Biden was in a dilemma, in the 16th century sense of the term: a choice between two terrible options. That’s the nation-building trap: if your nation-building project collapses after or as you leave, do you go back in and fight a bloody war a second time, or do you remain at home? You can 1) spend more blood, treasure, and years reestablishing the democracy and making sure the first war was not in vain, but risk being in the exact same situation down the road when you again attempt to leave. Or 2) refuse to sacrifice any more lives (including those of civilians) or resources, refrain from further war, and watch oppression return on the ruins of your project. This is a horrific choice to make, and no matter what you would choose there should be at least some sympathy for those who might choose the other.

Such a potentiality should make us question war and nation-building, a point to which we will return. But here it is important to recognize that the August chaos was inherent in the nation-building trap. Biden had that dilemma to face, and his decision came with unavoidable tangential consequences. For example, the choice, as the Taliban advanced across Afghanistan, could be reframed as 1) send troops back in, go back to war, and prevent a huge crowd at the airport and a frantic evacuation, or 2) remain committed to withdraw, end the war, but accept that there would be chaos as civilians tried to get out of the country. Again, dismal options.

This may seem too binary, but the timeline of events appears to support it. With a withdrawal deadline of August 31 in place, the Taliban began its offensive in early May. By early July, the U.S. had left its last military base, marking the withdrawal as “effectively finished” (this is a detail often forgotten). Military forces only remained in places like the U.S. embassy in Kabul. In other words, from early May to early July, the Taliban made serious advances against the Afghan army, but the rapid fall of the nation occurred after the U.S. and NATO withdrawal — with some Afghan soldiers fighting valiantly, others giving up without a shot. There are countless analyses regarding why the much larger, U.S.-trained and -armed force collapsed so quickly. U.S. military commanders point to errors like these: “U.S. military officials trained Afghan forces to be too dependent on advanced technology; they did not appreciate the extent of corruption among local leaders; and they didn’t anticipate how badly the Afghan government would be demoralized by the U.S. withdrawal.” In any event, one can look at either May-June (when U.S. forces were departing and Taliban forces were advancing) or July-August (when U.S. forces were gone and the Taliban swallowed the nation in days) as the key decision-making moment(s). Biden had to decide whether to reverse the withdrawal, send troops back in to help the Afghan forces retake lost districts (and thus avoid the chaos of a rush to the airport and U.S. citizens left behind), or hold firm to the decision to end the war (and accept the inevitability of turmoil). Many will argue he should have chosen option one, and that’s an understandable position. Even if it meant fighting for another 20 years, with all the death and maiming that entails, and facing the same potential scenario when attempting to withdraw in 2041, some would support it. But for those who desired an end to war, it makes little sense to criticize Biden for the airport nightmare, or the Taliban takeover or American citizens being left behind (more on that below). “I supported withdrawal but not the way it was done” is almost incomprehensible. In the context of that moment, all those things were interconnected. In summer 2021, only extending and broadening the war could have prevented those events. It’s the nation-building trap — it threatens to keep you at war forever.

The idea that Biden deserves a pass on the American citizens unable to be evacuated in time may draw special ire. Yes, one may think, maybe ending the war in summer 2021 brought an inevitable Taliban takeover (one can’t force the Afghan army to fight, and maybe we shouldn’t fight a war “Afghan forces are not willing to fight themselves,” as Biden put it) and a rush to flee the nation, but surely the U.S. could have done more to get U.S. citizens (and military allies such as translators) out of Afghanistan long before the withdrawal began. This deserves some questioning as well — and as painful as it is to admit, the situation involved risky personal decisions, gambles that did not pay off. Truly, it was no secret that U.S. forces would be leaving Afghanistan in summer 2021. This was announced in late February 2020, when Trump signed a deal with the Taliban that would end hostilities and set a withdrawal date. U.S. citizens (most of them dual citizens) and allies had over a year to leave Afghanistan, and the State Department contacted U.S. citizens 19 times to alert them of the potential risks and offer to get them out, according to the president and the secretary of state. Thousands who chose to stay changed their minds as the Taliban advance continued. One needn’t be an absolutist here. It is possible some Americans fell through the cracks, or that military allies were given short shrift. And certainly, countless Afghan citizens had not the means or finances to leave the nation. Not everyone who wished to emigrate over that year could do so. Yet given that the withdrawal date was known and U.S. citizens were given the opportunity to get out, some blame must necessarily be placed on those who wanted to stay despite the potential for danger — until, that is, the potential became actual.

Biden deserves harsh criticism, instead, for making stupid promises, for instance that there would be no chaotic withdrawal. The world is too unpredictable for that. Further, for a drone strike that blew up children before the last plane departed. And for apparently lying about his generals’ push to keep 2,500 troops in the country.

That is a good segue to a few final thoughts. The first revolves around the question: “Regardless of the ethics of launching a nation-building war, is keeping 2,500 troops in the country, hypothetically forever, the moral thing to do to prevent a collapse into authoritarianism or theocracy?” Even if one opposed and condemned the invasion as immoral, once that bell has been rung it cannot be unrung, and we’re thus forced to consider the ethics of how to act in a new, ugly situation. Isn’t 2,500 troops a “small price to pay” to preserve a nascent democracy and ensure a bloody war was not for nothing? That is a tempting position, and again one can have sympathy for it even while disagreeing and favoring full retreat. The counterargument is that choosing to leave a small force may preserve the nation-building project, but it also incites terrorism against the U.S. We know that 9/11 was seen by Al-Qaeda as revenge for U.S. wars and military presences in Muslim lands, and the War on Terror has only caused more religious radicalization and deadly terrorist revenge, in an endless cycle of violence that should be obvious to anyone over age three. So here we see another dilemma: leave, risk a Taliban takeover, but (begin to) extricate yourself from the cycle of violence…or stay, protect the democracy, but invite more violence against Americans. This of course strays dangerously close to asking who is more valuable, human beings in Country X or Country Y: that old, disgusting patriotism or nationalism. But this writer detests war and nation-building and imperialism and the casualties at our own hands (our War on Terror is directly responsible for the deaths of nearly 1 million people), and supports breaking the cycle immediately. That entails total withdrawal and living with the risk of the nation-building endeavor falling apart.

None of this is to say that nation-building cannot be successful in theory or always fails in practice. The 2003 invasion of Iraq, which like that of Afghanistan I condemn bitterly, ended a dictatorship; eighteen years later a democracy nearly broken by corruption, security problems, and weak enforcement of personal rights stands in its place, a flawed but modest step in the right direction. However, we cannot deny that attempting to invade and occupy a nation into a democracy carries a high risk of failure. For all the blood spilled — ours and our victims’ — the effort can easily end in disaster. War and new institutions and laws hardly address the root causes of national problems that can tear a new country apart, such as religious extremism, longstanding ethnic conflict, and so on. They may in fact make such things worse. This should make us question the wisdom of nation-building. As discussed, you can “stay until the nation is ready,” which may mean generations. Then when you leave, the new nation may still collapse, not being as ready as you thought. Thus a senseless waste of lives and treasure. Further, why do we never take things to their logical conclusion? Why tackle one or two brutal regimes and not all the others? If we honestly wanted to use war to try to bring liberty and democracy to others, the U.S. would have to bomb and occupy nearly half the world. Actually “spreading freedom around the globe” and “staying till the job’s done” mean wars of decades or centuries, occupations of almost entire continents, countless millions dead. Why do ordinary Americans support a small-scale project yet feel horrified at the thought of a large-scale one? That is a little hint that what you are doing needs to be rethought.

Biden — surprisingly, admirably steadfast in his decision despite potential personal political consequences — uttered shocking words to the United States populace: “This decision about Afghanistan is not just about Afghanistan. It’s about ending an era of major military operations to remake other countries.” Let’s hope that is true.

For more from the author, subscribe and follow or read his books.

Hegemony and History

The Italian Marxist Antonio Gramsci, writing in the early 1930s while imprisoned by the Mussolini government, theorized that ruling classes grew entrenched through a process called cultural hegemony, the successful propagation of values and norms which, when accepted by the lower classes, produced passivity and thus the continuation of domination and exploitation from above. An ideology became hegemonic when it found support from historical blocs, alliances of social groups (classes, religions, families, and so on) — meaning broad, diverse acceptance of ideas that served the interests of the bourgeoisie in a capitalist society and freed the ruling class from some of the burden of using outright force. This paper argues that Gramsci’s theory is useful for historians because its conception of “divided consciousness” offers a framework for understanding why individuals failed to act in ways that aligned with their own material interests, or acted for the benefit of oppressive forces. Note that this usefulness characterizes cultural hegemony as a whole, but it is divided consciousness that permits hegemony to function. Rather than a terminus a quo, however, divided consciousness can be seen as created, at least partially, by hegemony and as responsible for ultimate hegemonic success — a mutually reinforcing system. The individual mind and what occurs within it is the necessary starting point for understanding how domineering culture spreads and why members of social groups act in ways that puzzle later historians.

Divided (or contradictory) consciousness, according to Gramsci, was a phenomenon in which individuals believed both hegemonic ideology and contrary ideas based on their own lived experiences. Cultural hegemony pushed the latter ideas out of the bounds of rational discussion concerning what a decent society should look like. Historian T.J. Jackson Lears, summarizing sociologist Michael Mann, wrote that hegemony ensured “values rooted in the workers’ everyday experience lacked legitimacy… [W]orking class people tend to embrace dominant values as abstract propositions but often grow skeptical as the values are applied to their everyday lives. They endorse the idea that everyone has an equal chance of success in America but deny it when asked to compare themselves with the lawyer or businessman down the street.”[1] In other words, what individuals knew to be true from simply functioning in society was not readily applied to the nature of the overall society; some barrier, created at least in part by the process of hegemony, existed. Lears further noted the evidence from sociologists Richard Sennett and Jonathan Cobb, whose subaltern interviewees “could not escape the effect of dominant values” despite also holding contradictory ones, as “they deemed their class inferiority a sign of personal failure, even as many realized they had been constrained by class origins that they could not control.”[2] A garbage collector knew that his never having been taught to read properly was not his fault, yet he blamed himself for his position in society.[3] The result of this contradiction, Gramsci observed, was often passivity, consent to oppressive systems.[4] If one could not translate personal truths into judgments about the operation of social systems, political action was less likely.

To understand how divided consciousness, for Gramsci, was achieved, it is necessary to consider the breadth of the instruments that propagated dominant culture. Historian Robert Gray, studying how the bourgeoisie achieved hegemony in Victorian Britain, wrote that hegemonic culture could spread not only through the state — hegemonic groups were not necessarily governing groups, though there was often overlap[5] — but through any human institutions and interactions: “the political and ideological are present in all social relations.”[6] Everything in Karl Marx’s “superstructure” could imbue individuals and historical blocs with domineering ideas: art, media, politics, religion, education, and so on. Gray wrote that British workers in the era of industrialization of course had to be pushed into “habituation” to the new and brutal wage-labor system by the workplace itself, but also through “poor law reform, the beginnings of elementary education, religious evangelism, propaganda against dangerous ‘economic heresies,’ the fostering of more acceptable expressions of working-class self help (friendly societies, co-ops, etc.), and of safe forms of ‘rational recreation.’”[7] The bourgeoisie, then, used many social avenues to manufacture consent, including legal reform that could placate workers. Some activities were deemed acceptable under the new system (joining friendly societies or trade unions) in order to keep more radical activities out of bounds.[8] It was also valuable to create an abstract enemy, a “social danger” for the masses to fear.[9] The message was that without an embrace of the dominant values and norms of industrial capitalism, there would be economic disaster, scarcity, loosening morals, the ruination of the family, and more.[10] The individual consciousness was therefore under assault by the dominant culture from all directions, creating heavy competition for values derived from lived experience, despite the latter’s tangibility. At the macro level, Gramsci’s theory of cultural hegemony, to quote historian David Arnold, “held that popular ideas had as much historical weight or energy as purely material forces” or even “greater prominence.”[11] At the micro level, it can be inferred, things work the same way in the individual mind, with popular ideas as powerful as personal experience, hence the presence of divided consciousness.

The concept of contradictory consciousness helps historians answer compelling questions and solve problems. Arnold notes Gramsci’s questions: “What historically had kept the peasants [of Italy] in subordination to the dominant classes? Why had they failed to overthrow their rulers and to establish a hegemony of their own?”[12] In other words, why wasn’t the peasantry more like the industrial proletariat — the more rebellious, presumed leader of the revolution against capitalism?[13] The passivity wrought from divided consciousness provided an answer. While there were “glimmers” of class consciousness — that is, the application of lived experience to what social systems should be, and the growth of class-centered ideas aimed at ending exploitation — the Italian peasants “largely participated in their own subordination by subscribing to hegemonic values, by accepting, admiring, and even seeking to emulate many of the attributes of the superordinate classes.”[14] Their desires, having “little internal consistency or cohesion,” even allowed the ruling class to make soldiers of peasants,[15] meaning active participation in maintaining oppressive power structures. Likewise, Lears commented on the work of historian Lawrence Goodwyn and the question of why the Populist movement in the late-nineteenth-century United States largely failed. While not claiming hegemony as the only cause, Lears argued that the democratic movement was most successful in parts of the nation with democratic traditions, where such norms were already within the bounds of acceptable discussion.[16] Where they were not, where elites had more decision-making control, the “received culture” was more popular, with domination seeming more natural and inevitable.[17] Similarly, Arnold’s historiographical review of the Indian peasantry found that greater autonomy (self-organization to pursue vital interests) among subaltern groups made hegemony much harder to establish, with “Gandhi [coming] closest to securing the ‘consent’ of the peasantry for middle-class ideological and political leadership,” but the bourgeoisie failing to do the same.[18] Traditions and cultural realities could limit hegemonic possibilities; it is just as important for historians to understand why something does not work out as it is to comprehend why something does. As a final example, historian Eugene Genovese found that American slaves demonstrated both resistance to and appropriation of the culture of masters, both in the interest of survival, with appropriation inadvertently reinforcing hegemony and the dominant views and norms.[19] This can help answer questions regarding why slave rebellions took place in some contexts but not others, or even why more did not occur — though, again, acceptance of Gramscian theory does not require ruling out all causal explanations beyond cultural hegemony and divided consciousness. After all, Gramsci himself favored nuance, with coexisting consent and coercion, consciousness of class or lived experience mixing with beliefs of oppressors coming from above, and so on.

The challenge of hegemonic theory and contradictory consciousness lies in parsing out the aforementioned causes. Gray nearly summed it up when he wrote, “[N]or should behavior that apparently corresponds to dominant ideology be read at face value as a direct product of ruling class influence.”[20] Here he was arguing that dominant culture was often imparted in indirect ways, not through intentionality of the ruling class or programs of social control.[21] But one could argue: “Behavior that apparently corresponds to dominant ideology cannot be read at face value as a product of divided consciousness and hegemony.” It is a problem of interpretation, and it can be difficult for historians to parse out divided consciousness or cultural hegemony from other historical causes and show which has more explanatory value. When commenting on the failure of the Populist movement, Lears mentioned “stolen elections, race-baiting demagogues,” and other events and actors with causal value.[22] How much weight should be given to dominant ideology and how much to stolen elections? This interpretive nature can appear to weaken the usefulness of Gramsci’s model. Historians have developed potential solutions. For instance, as Lears wrote, “[O]ne way to falsify the hypothesis of hegemony is to demonstrate the existence of genuinely pluralistic debate; one way to substantiate it is to discover what was left out of public debate and to account historically for those silences.”[23] If there was public discussion of a wide range of ideas, many running counter to the interests of dominant groups, the case for hegemony is weaker; if public discussion centered on a narrow slate of ideas that served obvious interests, the case is stronger. A stolen election may be assigned less causal value, and cultural hegemony more, if public debate was demonstrably restricted. However, the best evidence for hegemony may remain the psychological analysis of individuals, as seen above, that demonstrates some level of divided consciousness. Even in terms of demonstrability, contradictory consciousness is key to Gramsci’s overall theory. A stolen election may earn less causal value if such insightful individual interviews can be submitted as evidence.

In sum, for Gramscian thinkers divided consciousness is a demonstrable phenomenon that powers (and is powered by) hegemony and the acceptance of ruling-class norms and beliefs. While likely not the only cause of passivity toward subjugation, it offers historians an explanation for why individuals do not act in their own best interests, one that can be explored, given causal weight, falsified, or verified (to degrees) in various contexts. Indeed, Gramsci’s theory is powerful in that it has much utility for historians whether it is ultimately true or misguided.

For more from the author, subscribe and follow or read his books.


[1] T.J. Jackson Lears, “The Concept of Cultural Hegemony: Problems and Possibilities,” The American Historical Review 90, no. 3 (June 1985): 577.

[2] Ibid, 577-578.

[3] Ibid, 578.

[4] Ibid, 569.

[5] Robert Gray, “Bourgeois Hegemony in Victorian Britain,” in Tony Bennett, ed., Culture, Ideology and Social Process: A Reader (London: Batsford Academic and Educational, 1981), 240.

[6] Ibid, 244.

[7] Ibid.

[8] Ibid, 246.

[9] Ibid, 245.

[10] Ibid.

[11] David Arnold, “Gramsci and the Peasant Subalternity in India,” The Journal of Peasant Studies 11, no. 4 (1984): 158.

[12] Ibid, 157.

[13] Ibid, 157.

[14] Ibid, 159.

[15] Ibid.

[16] Lears, “Hegemony,” 576-577.

[17] Ibid.

[18] Arnold, “India,” 172.

[19] Lears, “Hegemony,” 574.

[20] Gray, “Britain,” 246.

[21] Ibid, 245-246.

[22] Lears, “Hegemony,” 576-577.

[23] Lears, “Hegemony,” 586.