A Scathing Review of the Last History Book I Read

Historian Vanessa M. Holden’s Surviving Southampton: African American Women and Resistance in Nat Turner’s Community argues that the August 1831 slave uprising in Virginia commonly known as Nat Turner’s Rebellion was in fact a community-wide rebellion involving black women, both free and unfree.[1] Holden writes that the event should be called the “Southampton Rebellion,” after the county where it occurred, for it “was far bigger than one man’s inspired bid for freedom.”[2] A community “produced [Turner],” and “the success of the Southampton Rebellion was the success of a community of African Americans.”[3] Holden charts not only women’s everyday resistance prior to the revolt, their participation in the uprising, and their endurance of its aftermath, but also that of children. Her sources are diverse, including early nineteenth-century books, Works Progress Administration interviews, and much material from archives at the Library of Virginia and the Virginia Historical Society (now the Virginia Museum of History and Culture).[4] Holden is an associate professor of history and African American studies at the University of Kentucky; her work has appeared in several journals, but Surviving Southampton appears to be her first book.[5] Overall, it is a book of mixed success: while community involvement in the revolt is established, some of Holden’s major points suffer from limited evidence and unrefined rhetoric.

This is a work of — not contradictions, but oddities. Not fatal flaws, but sizable question marks. For a first point of critique, we can examine Holden’s second chapter, “Enslaved Women and Strategies of Evasion and Resistance.” While it considers enslaved women’s important “everyday resistance” before the revolt, such as “work stoppages, sabotage, feigned illness, and truancy,” plus the use of code and secret meetings, it offers limited examples of women’s direct participation in the Southampton Rebellion.[6] There are two powerful incidents. A slave named Charlotte attempted to stab a white owner to death, while Lucy held down a mistress at another farm to prevent her escape.[7] After the revolt was quelled, both were executed.[8] The chapter also details more minor happenings: Cynthia cooked for Nat Turner and the other men, Venus passed along information, and Ester, while also taking over a liberated household, stopped Charlotte from killing that owner, an act one might describe as counterrevolutionary.[9] This is all the meaningful evidence that constitutes a core chapter of the text. (It is telling that this chapter has the fewest citations.[10]) It is true that Holden seeks to show women’s participation in resistance before and after the Southampton Rebellion, not just during its three days. Looking at the entire book, this is accomplished. But to have so few incidents revealing women’s involvement in the central event creates the feeling that this work is a “good start,” rather than a finished product. And it stands in uncomfortable contrast to the language of the introduction.

Holden notes in the first few pages of Surviving Southampton that historians have begun adopting wider perspectives on slave revolts.[11] As with her work, there is increasing focus on slave communities, not just the men after whom the revolts are named. “However,” Holden writes, “even though new critiques have challenged the centrality of individual male enslaved leaders and argued for the inclusion of women in a broader definition of enslaved people’s resistance, violent rebellion remains the prerogative of enslaved men in the historiography.”[12] To scholars, Holden declares, “enslaved men rebel while enslaved women resist.”[13] She is of course right to challenge this gendered division. But a chapter 2 that is light on evidence does not suffice to fully address the problem. The rest of the book does not help much — chapter 3, on free blacks’ involvement in the revolt, features just one free woman of color, who testified, possibly under coercion, in defense of an accused rebel, stating that she had urged him not to join Turner.[14] Not exactly a revolutionary urging, though she was saving a man’s life in court, a resistive act. Charlotte and Lucy were certainly rebels, and one might describe those who provided nourishment, information, or legal defense to the men using the same phrasing, but more evidence is needed to strengthen the case. Holden’s women-as-rebels argument is not wrong; it simply needs more support than two to five historical events.

The position would be further aided by excising or editing bizarre, undermining elements, such as a passage at the end of the second chapter. There is a mention of the “divergent actions of Ester and Charlotte,” followed by a declaration that “instead of labeling enslaved women as either for or against the rebellion, it is more useful to understand enslaved women as embedded in its path and its planning.”[15] It is fair to say that we cannot fully know Ester’s stance on the revolution — she could have been against it and saved that enslaver, or she could have been for it and taken the same action. We do not actually know if she was counterrevolutionary. But Charlotte’s violent action surely reveals an embrace of the revolt. It is at least a safe assumption. Is Holden’s statement not stripping female slaves of their agency? Not for nor against rebellion, just in its path, swept up in the events of men? How can women be rebels if they are not for the rebellion? Here we do have a contradiction, and not just with the introduction, for nine pages earlier in chapter 2 the author wrote: “Past histories of the Southampton Rebellion regard Ester and Charlotte’s story as anomalous and their actions as spontaneous. However, their motives were not different from those of male rebels.”[16] Here the women have agency, their revolutionary motives purportedly known. The attempted stabbing was “as much a part of the Southampton Rebellion” as anything else.[17] It is a strange shift from empowering Ester and enslaved women as freedom fighters to downplaying Charlotte and advising one not to mark women as for the rebellion.

Language is a consistent problem in the book, and it is intertwined with issues of organization and focus. This is apparent from the beginning. First one reads the aforementioned pages of the introduction, where it is clear Holden wants to erase a gendered division in scholarship and elevate the black woman to one who “rebels,” not simply “resists.” The reader may then sit up, turn the book over, and wonder about the subtitle: African American Women and Resistance in Nat Turner’s Community. True, as we have seen, much of the text concerns everyday resistance before and after the uprising, but the absence of “rebellion” from the subtitle is slightly inconsistent with what Holden is rightly trying to do.

Similarly, look to an entire chapter that stands out as odd in a book allegedly focused on African American women. Chapter 4 concerns children’s place in the Southampton Rebellion and focuses almost exclusively on boys. In a short text — only 125 pages — an entire chapter is a significant portion. Why has Holden shifted away from women? Recall, returning to the introduction, that the University of Kentucky scholar aims to show that the revolt was a community-wide event. It was not solely defined by the deeds of men, nor women, nor slaves, nor free persons — it also involved children, four of whom stood trial and were expelled from Virginia.[18] Here Surviving Southampton has a bit of an identity crisis. It cannot fully decide if it wants to focus on women or on the community as a whole. The title centers black women, as does Holden’s rebuke of the historiography for never framing women “as co-conspirators in violent rebellion…[only] as perpetrators of everyday resistance.”[19] Chapter 2 covers women to correct this. But the thesis has to do with the idea that “whole neighborhoods and communities” were involved.[20] Thus, the book has a chapter on children (boys), free black men alongside women, and so on. The subtitle of this work should have centered the entire community, not just women, and the introduction should have brought children as deeply into the historiographical review as women.

Finally, we turn to the author’s use of the phrases “geographies of evasion and resistance” and “geographies of surveillance and control.”[21] What this means is the how and where of oppressive tactics and resistive action. Geographies of resistance could include a slave woman’s bed, as when Jennie Patterson let a fugitive stay the night.[22] There existed a place (bed, cabin) and method (hiding, sheltering) of disobedience — this was a “geography.” Likewise, slave patrols operated at certain locations and committed certain actions, to keep slaves under the boot — a geography.[23] At times, Holden writes, these where-hows, these sites of power, would overlap.[24] The kitchen was a place of oppression and revolt for Charlotte.[25] Just as Patterson’s cabin was a geography of resistance, it was also one of control, as slave patrols would “visit all the negro quarters, and other places of suspected assemblies of unlawful assemblies of slaves…”[26] Thus, the scholar posits, blacks in Southampton County had to navigate these overlaps and use their knowledge of oppressive geographies “when deciding when and how to resist,” when creating liberatory geographies.[27]

As an initial, more minor point of critique, use of this language involves much repetition and redundancy. Repetitive phrasing spans the entire work, but can also be far more concentrated: “Enslaved women and free women of color were embedded in networks of evasion and resistance. They navigated layered geographies of surveillance and control. They built geographies of evasion and resistance. These women demonstrate how those geographies become visible in Southampton County through women’s actions.”[28] Rarely are synonyms considered. As an example of redundancy, observe: “These geographies of surveillance and control were present on individual landholdings, in the neighborhood where the rebellion took place, and throughout the country.”[29] Geographies were present? In other words, oppressive systems Holden bases on place were at places. There are many other examples of such things.[30]

The “geography of evasion and resistance” is not only raised ad nauseam, it seems to be a dalliance with false profundity.[31] It has the veneer of theory, but in reality little explanatory value. Of course oppressive systems and acts of rebellion operated in the same spaces; of course experience with and knowledge of the former informed the latter (and vice versa). This is far too trite to deserve such attention; it can be noted where appropriate, without fanfare. “Layered geographies of surveillance and survival” sounds profound, and its heavy use implies the same (note also that theory abhors a synonym), but it is largely mere description. Does the concept really help us answer questions? Does it actually deepen our understanding of what occurred in Patterson’s cabin or Charlotte’s kitchen? Of causes and effects? Does it mean anything more than that past experience (knowledge, actions, place) influences future experience, which is important to show in a work of history but is nevertheless a mere truism?

Granted, Holden never explicitly frames her “geography” as theory. But the historian consistently stresses its importance (“mapping” a resistive geography appears in the introduction and in the last sentence of the last chapter) and ascribes power to it.[32] After charting the ways enslaved women resisted before the rebellion, Holden writes: “Understanding the layered social and physical geography of slavery in Southampton and Virginia is important for understanding Black women’s roles in the Southampton Rebellion more broadly. Most remained firmly rooted to the farms where they labored as men visited rebellion on farm after farm late in the summer of 1831.”[33] Well, of course patterns — places, actions — of everyday resistance might foreshadow and inform women’s wheres and hows once Turner began his campaign. Elsewhere Holden notes that small farms and the nature of women’s work allowed female slaves greater mobility and proximity to white owners, a boon to resistance.[34] Women were “uniquely placed to learn, move through, and act within the layered physical and social geographies of each farm.”[35] Again, this is fancy language that merely suggests certain realities had advantages and could be helpful to future events. It goes no deeper, and it is truly puzzling that it is so emphasized. Such facts could have been briefly mentioned without venturing into the realm of theme and pseudo-theory.

Overall, Surviving Southampton deserves credit for bringing the participation of women, children, and free blacks in the 1831 uprising into the conversation. Our field’s understanding of this event is indeed broadened. But this would have been a much stronger work with further evidence and editing. Quality writing and sufficient proof are subjective notions, but that in no way diminishes their importance to scholarship. As it stands, this text feels like an early draft. Both general readers and history students should understand its limitations.

For more from the author, subscribe and follow or read his books.

[1] Vanessa Holden, Surviving Southampton: African American Women and Resistance in Nat Turner’s Community (Urbana: University of Illinois Press, 2021), 5-10. 

[2] Ibid., 7.

[3] Ibid., 2, 6.

[4] Ibid., x, 132-134 for example.

[5] “Vanessa M. Holden,” The University of Kentucky, accessed March 2, 2023, https://history.as.uky.edu/users/vnho222.

[6] Holden, Surviving Southampton, 23, 35.

[7] Ibid., 28, 36.

[8] Ibid., 37, 81.

[9] Ibid., 28, 36.

[10] Ibid., 132-134.

[11] Ibid., 5.

[12] Ibid.

[13] Ibid., 6.

[14] Ibid., 52.

[15] Ibid., 37.

[16] Ibid., 28.

[17] Ibid.

[18] Ibid., 79.

[19] Ibid., 6.

[20] Ibid.

[21] Ibid., chapter 1 for instance.

[22] Ibid., 24.

[23] Ibid., 12-22.

[24] Ibid., 12.

[25] Ibid., 28.

[26] Ibid., 20.

[27] Ibid., 12.

[28] Ibid., 37.

[29] Ibid., 8.

[30] See ibid., 9: “The generational position of Black children as the community of the future was culturally significant and a pointed concern for African American adults, whose strategies for resistance and survival necessarily accounted for these children. Free and enslaved Black children and youths were a significant part of their community’s strategies for resistance and survival.”

[31] The near-irony of this paper’s phrasing is not lost.

[32] Holden, Surviving Southampton, 8, 120.

[33] Ibid., 25.

[34] Ibid., 34-35.

[35] Ibid., 34.

What Star Trek Can Teach Marvel/DC About Hero v. Hero Fights

What misery has befallen iconic franchises these days! From Star Wars to The Walking Dead, it’s an era of mediocrity. Creative bankruptcy, bad writing, and just plain bizarre decisions are characteristic, and will persist — fanbases will apparently continue paying for content no matter how dreadful, offering little incentive for studios to alter course. Marvel, for instance, appears completely out of gas. While a Spider-Man film occasionally offers hope, I felt rather dead inside watching Thor: Love and Thunder, Doctor Strange in the Multiverse of Madness, and Wakanda Forever. Admittedly, I have not bothered with She-Hulk, Quantumania, Hawkeye, Ms. Marvel, Eternals, Black Widow, Loki, Shang-Chi, WandaVision, or Falcon and the Winter Soldier, and probably never will, but reviews from those I trust often don’t rise above “meh.” Of course, I do not glorify Marvel’s 2008-2019 (Iron Man to Endgame) period as quite the Golden Age some observers do; there were certainly better movies produced then, but also some of the OKest or most forgettable: Incredible Hulk, Iron Man 2, Thor: The Dark World, Age of Ultron, Captain Marvel, Civil War, and the first 30 minutes of Iron Man 3 (I turned it off).

DC, as is commonly noted, has been a special kind of disaster. While Joker, Wonder Woman, The Batman, and Zack Snyder’s Justice League were pretty good, Justice League, Batman v. Superman, Suicide Squad, and Wonder Woman 1984, among others I’m sure, were atrocious. Two of these were so bad they were simply remade — try to imagine Marvel doing that; it’s difficult to do. Man of Steel, kicking off the series in 2013, was rather average. I liked the choice of a darker, grittier superhero universe, to stand in contrast to Marvel. But it wasn’t well executed. Remember Nolan’s The Dark Knight from 2008? That’s darkness done right. Joker and the others did it decently, too. But most did not. The DCEU is now being rebooted entirely, under the leadership of the director of the best Marvel film, Guardians of the Galaxy.

But Star Trek, it seems, has crashed and burned unlike any other franchise. Star Trek used to be about interesting, “what if” civilizations and celestial phenomena. It placed an emphasis on philosophy and moral questions, forcing characters to navigate difficult or impossible choices. It was adventurous, visually and narratively bright, and optimistic about the future of the human race, which finally unites and celebrates its infinite diversity and tries to do the same with other species it encounters. These things defined the series I watched growing up: The Next Generation, Voyager, Deep Space Nine, and Enterprise. The 2009 reboot Star Trek was more a dumb action movie (the sequels were worse), but at least it was a pretty fun ride. By most accounts the new television series since 2017 are fairly miserable: they’re dark, violent, gritty, stupid, with about as much heart as a Transformers movie (which is what Alex Kurtzman, the helmsman of New Trek, did prior). I have only seen clips of these shows and watched many long reviews from commentators I trust, save for one or two full episodes I stumbled upon which confirmed the nightmare. Those who have actually seen the shows start to finish may have a more accurate perspective. Regardless, when I speak of Star Trek being able to teach Marvel and DC anything, I mean Old Trek.

Batman v. Superman and Captain America: Civil War were flawed films (one more so) that got heroes beating each other up. A fun concept that I’m sure the comics do a million times better than these duds. The methods of getting good guys to fight, in my view, were painfully ham-fisted and unconvincing. The public is upset in both movies about collateral damage that the heroes caused when saving the entire world? Grow the fuck up, you all would have died. Batman wants to kill Superman because he might turn evil one day? Why not just work on systems of containment, with kryptonite, and use them if that happens? Aren’t you a good guy? Superman fights Batman because Lex Luthor will kill his mother if he doesn’t, when trying to enlist Batman’s help might be more productive? (Note that Batman finds Martha right away when their fight ends and they do talk; not sure how, but it happens.) Talking to Batman, explaining the situation, and working through the problem together may sound lame or impossible, but recall that these are both good guys. That’s probably what they would do. Superman actually tries to do this, right before the battle starts. The screenwriters make a small attempt to hold together this ridiculous house of cards, while still making sure the movie happens. Superman is interrupted by Batman’s attack. Then he’s too mad to just blurt it out at any point. “I need your help! We’re being manipulated! My mother’s in danger!” When your conflict hinges completely on two justice-minded people not having a short conversation, it’s not terribly convincing.

Civil War has the same problems: there’s a grand manipulator behind the scenes and our heroes won’t say obvious things that would prevent the conflict. They must be dumbed down. Zemo, the antagonist, wants the Avengers to fall apart, so he frames the Winter Soldier for murder. Tony Stark and allies want to bring the Winter Soldier in dead or alive, while Captain America and allies want to protect him and show that he was framed. If Cap had set up a Zoom call, he could have calmly explained the reasons why he believed Bucky was innocent; he could have informed Tony and the authorities that someone was clearly out to get the Winter Soldier, even brainwashing him after the framing to commit other violent acts. Steve Rogers’ dear friends and fellow moral beings probably would have listened. Instead, all the good guys have a big brawl at the airport (of course, no one dies in this weak-ass “Civil War”). Then Zemo reveals that the Winter Soldier murdered Tony Stark’s parents decades ago. This time Cap does try to explain. “It wasn’t him, Hydra had control of his mind!” He could have kept yelling it, but common sense must be sacrificed on the altar of the screenplay. Iron Man is now an idiot, anyway, a blind rage machine incapable of rational thought. Just like Superman. Who cares if Bucky wasn’t in control of his actions? Time to kill! So the good guy ignores the sincere words of the other good guy — his longtime friend — and they have another pointless fight.

Of course, these movies do other small things to create animosity between heroes, which is beneficial. Superman has a festering dislike of Batman’s rough justice, such as the branding of criminals. Batman is affected by the collateral damage of Superman saving the day in Man of Steel (how Lex Luthor knows Batman hates Superman, or manipulates him into hating the Kryptonian, is not explained). Tony Stark wants the government to determine when and how the Avengers act, while Steve Rogers wants to maintain independence. (The first position is a stretch for any character, as “If we can’t accept limitations we’re no different than the bad guys” is obviously untrue given the heroes’ motivations, and limitations will almost certainly prevent these heroes from saving the entire world. Remember how close it came a few times? Imagine if you had to wait for the committee vote; imagine if the vote was “sit this one out.” It’s fairly absurd. But it would make a tiny bit more sense to have Captain America — the Boy Scout, the soldier — be the bootlicker following orders, not the rebellious billionaire playboy.) Still, the fisticuffs only come about because protagonists go stupid.

There are better ways to get heroes battling. If you want an evil manipulator and good guys incapable of communicating, just have one hero be mind controlled. Or, if you want to maintain agency, do what Star Trek used to do so well and create a true moral conundrum. Not “should we be regulated” or some such nonsense. A “damned if you do, damned if you don’t” scenario, with protagonists placed on either side. In the Deep Space Nine episode “Children of Time,” the crew lands on a planet that has a strange energy barrier. They discover a city of 8,000 people — their own descendants! They are in a time paradox. When the crew attempts to leave the planet, the descendants say, the energy barrier throws them 200 years into the past and their ship is damaged beyond repair in the ensuing crash. They have no choice but to settle there — leaving behind loved ones off-world and in another time, mourning their friends who died in the crash, and, most importantly, unable to return to the war that threatens the survival of Earth. The crew tries to figure out a way to escape the paradox. But they have a terrible moral choice to make. If they escape the energy barrier, they will end the existence of 8,000 people to save their own skins — the crash will never have occurred, thus no descendants. If they decide not to escape, not to avoid the crash, they will never see their loved ones again, friends will die, and the Federation may lose the war. This is a dilemma in the original sense of the word: there are no good options. Characters fall on different sides of the decision. No, Deep Space Nine isn’t dumb enough for everyone to begin punching each other in the face, but you see a fine foundation for such a thing to occur in a superhero film. You see the perspectives of both sides, and they actually make sense. 
You can see how, after enough time and argument and tension, good people might be willing to use violence against other good people, their comrades, to either save a civilization or win a war.

As a similar example, there’s the Voyager episode “Tuvix,” in which two members of Voyager’s crew are involved in a transporter accident. The beaming combines them into a single, new individual. He has personality traits and memories of the two crewmen, but is a distinct, unique person. The shocked crew must come to terms with this event and learn to accept Tuvix. A month or two later comes the ethical dilemma: a way to reverse the fusion is developed. The two original crewmen can be restored, but Tuvix will cease to exist. Tears in his eyes, he begs for his life. What do you do? Kill one to save two? Kill a stranger to save a friend? Can’t you see Captain America standing up for the rights of a new being, while Iron Man insists that the two originals have an overriding right to life? Give good people good reason to come to blows. Such ideas and crises can be explored in the superhero realm just as easily as in Star Trek.

This is much more powerful and convincing than disagreements over — yawn — treaties and whether arm boy should die for events he had no control over.


How Women Were Driven Away From Politics After the American Revolution

Historian Rosemarie Zagarri’s Revolutionary Backlash: Women and Politics in the Early American Republic reads like a sequel to historian Mary Beth Norton’s Liberty’s Daughters, the 1980 text that charted women’s leap into political activity in the 1760s and ’70s.[1] During the American Revolution, women debated politics, published editorials, and engaged in boycotts, protests, and other forms of disruptive action, which undermined gender norms but was nonetheless applauded by men as important to the cause.[2] Zagarri explores what came after: from the 1780s to the 1820s, a subset of women, called “female politicians,” continued their involvement in political activity, supporting newly formed parties and attempting to influence elections.[3] A wider debate arose over women’s role in society, with the ideals of the revolution such as natural rights and equality feeding a new ideology of “women’s rights,” which clashed with the traditional order of absolute male power.[4] With the desperate times of the great battle for independence now in the past, there was less enthusiasm for an expanding feminine sphere. 
Those on this side of the debate pushed for women to return to the home and serve as “republican wives” and “republican mothers,” positive influences on husbands and sons, the helmsmen of the new nation, especially as partisanship threatened to tear the United States apart.[5] Women were considered more moral beings, who could restore rationality, civility, and peace among men.[6] By the 1830s, the “backlash” to women’s entrance into informal politics (or formal, in the case of female-enfranchised New Jersey) came to dominate American attitudes, and soon women “vehemently denied the political nature” of their words and deeds, and “distanced themselves from party politics and electoral affairs,” to avoid being attacked and “vilified.”[7] In positing that a significant feminine advance and evolving attitudes were swiftly hammered back, and rooting her explanation in changing societal realities (bitter partisanship, universal suffrage for white men, strengthening parties, scientific claims of women’s inferiority), Zagarri has made a substantial contribution to the field of early American history.[8]

Zagarri has also set the standard for bountiful evidence entwined with brevity of text. Indeed, Revolutionary Backlash is not even 190 pages. The George Mason professor of early American history has produced three prior texts on the era, all of them roughly the same length. The brevity, and clarity of writing, make Backlash accessible for a general audience. But the evidence is voluminous, and notably includes what one might call explicit “missing links.” For example, it is important that Zagarri demonstrates that men (and other women) pushed women to stay home and guide husbands and sons to be of wise character at the expense of engaging in the political arena themselves. Finding mutual exclusion in the historical record helps dismiss the possibility that women were asked to be republican mothers but were also tolerated in a new sphere, as they were during the revolution. True, the historian can track change by examining the available documents and noting the prevalence of an idea in one era (for instance, celebration of women’s political participation) and another idea in the next era (condemnation of the same), but it must be stressed that finding historical actors discussing — and creating — the shift itself can be exceedingly difficult. Such explicit sources would make any argument far more powerful. Here Zagarri has succeeded, unearthing sources that connect old philosophies with new, musing over both, rejecting one, advocating the other. One speaker insisted that wives and mothers ought to guide and mold males to be “future Citizens, future Legislators, Magistrates, Judges and Generals” but would be ridiculed if they attempted to engage in political battles themselves.[9] A newspaper article declared, prescriptively, that party affiliations like “Fed. and Rep. and Demo. 
ingrate to woman’s ear,” rejecting the activities of female politicians, but stated women should work “behind the scene” to cool off feuds among men and raise children who cared more about brotherhood and freedom than party ideology.[10] These finds are triumphs, and the fact that there are very few of them cited, compared to a wealth of “unlinked” documents (discussing a woman’s domestic guiding role or the impropriety of political women, but not both), speaks to how difficult they can be to find and the depth of Zagarri’s research.

Zagarri largely uses primary sources throughout her work — newspapers, letters, memoirs, books, pamphlets, plays, and so on, many found in archives at the Library of Congress, Harvard University, and several state historical societies.[11] The included artwork is particularly interesting: a circa 1815 illustration of a woman, holding an infant, trying to prevent male partisans from coming to blows, a visualization of women’s pacifying role.[12] Or a painting from that year depicting a relatively diverse political gathering, in contrast to an 1852 painting of the same scene featuring only white males — politics, no matter how informal, was no longer an activity for women.[13] Zagarri uses secondary sources from historians, and others, such as political scientists and literary critics,[14] to supplement her eighteenth- and nineteenth-century works, but they are largely mined for their primary citations and rarely mentioned by name in-text.

Together, her evidence convincingly demonstrates that women were involved in informal and formal politics after the American Revolution (chapters 1-3), and that reactionary developments pushed them out (chapters 4 and 5), solidifying and codifying notions of male privilege and superiority. This restored a gender order that many men and women were afraid was collapsing. “As women are now to take a part in the jurisprudence of our state,” a New Jersey paper wrote concerning women voters, “we may shortly expect to see them take the helm — of government.”[15] When Elizabeth Bartlett was nominated for register of deeds in Middlesex County, Massachusetts, a pseudonymous woman asked in a Boston magazine whether women would soon be “Governor, Senator, Representative[?]… I have some curiosity to know where we are to stop.”[16] The major factors the George Mason historian outlines put such fears to rest, eroding women’s progress and shoring up the gender hierarchy. Locating documents that explicitly demonstrated intentionality (showing that white male suffrage laws excluding female voters, the entrenchment of the parties and the end of “street” politics involving non-voters, or essentialist science questioning women’s mental capacities were consciously intended to drive women from the political sphere) would have bolstered Zagarri’s causal argument, but such sources are as difficult to find as “missing links,” if not more so.[17] The sources that show a push for women to avoid political disputes and instead quietly better the characters of males at home, to save a divided nation, come tantalizingly close, but they never deliver the killing blow: stepping beyond fortunate, “circumstantial” gains in power to explicit plans to take advantage of a polarization crisis and cast women out of politics. 
Still, the reader is left with little doubt that animosity toward female politicians fed the calls for women to terminate attempts at direct influence and shift to indirect, domestic efforts, and shaped the other oppressive developments as well.

In conclusion, Zagarri thoroughly accomplishes her aims. The reader will not soon forget the bold advocacy of early republic women, the debate over women’s rights, and the (other) dramatic societal developments that hardened attitudes against women activists and wrought stricter subjugation. Equally interesting is how female politicians continued working. Successfully driven away from the parties and any form of governance such as voting or elected office, the core of politics, women operated on the periphery, creating their own social reform organizations to advocate against slavery, prostitution, and alcohol.[18] They spoke, wrote, rallied, organized, boycotted, petitioned or lobbied public officials, and even tried to get the right candidates elected, all while fiercely denying their involvement in politics or interest in rejecting male authority over them.[19] They framed their activities as moral, not political, work. They were moral beings pushing for moral reform, which fell into the feminine sphere, not the male sphere of government.[20] They used the very ideology, moral purity, that precipitated their expulsion from the center of politics, a blow to women’s rights, to protect their continued advocacy, a bitingly clever and largely effective reframing, given how Americans understood politics in that age.[21] Revolutionary Backlash is must reading for anyone seeking to understand American women’s history, the political history of the early republic, creative resistance, or the fickle nature of progress.

For more from the author, subscribe and follow or read his books.

[1] Mary Beth Norton, Liberty’s Daughters: The Revolutionary Experience of American Women, 1750-1800 (Ithaca, New York: Cornell University Press, 1996). Rosemarie Zagarri, Revolutionary Backlash: Women and Politics in the Early American Republic (Philadelphia: University of Pennsylvania Press, 2007), 2.

[2] Ibid.

[3] Zagarri, Revolutionary Backlash, 5.

[4] Ibid., 4-5.

[5] Ibid., 5-6.

[6] Ibid., 124-134.

[7] Ibid., 4, 9.

[8] Ibid., 6-7, 180.

[9] Ibid., 132.

[10] Ibid.

[11] Ibid., 231.

[12] Ibid., 131.

[13] Ibid., 162, 164.

[14] Ibid., 110, 180.

[15] Ibid., 78.

[16] Ibid., 79.

[17] These are similar concepts, but not precisely the same. A missing link shows how one idea replaced another. A finding of intentionality shows why someone did something. You might find the true motive of a historical actor but not see the bridge between old and new ideas; conversely, you might see a bridge but not a motive. Some sources, however, could be both a missing link and a finding of intentionality. A term for this is forthcoming.

[18] Ibid., 142.

[19] Ibid., 142-145.

[20] Ibid.

[21] Ibid., 146 suggests women saw a hard divide between their moral crusades and legitimate political activity. Though this may be difficult to grasp today, Zagarri seeks to “analyze politics in the terms in which people at the time understood the concept” (page 8).

Historians on Capitalism’s Ascendance

What conditions must be present to declare that capitalism has finally come to dominate one’s society? American historians have varying views on this, because the meaning of capitalism itself varies. Further complicating matters, the new paradigm in scholarship is to worry less about precise definitions of capitalism, as Seth Rockman points out in “What Makes the History of Capitalism Newsworthy?” — an acceptance of the “varieties of capitalism” would seem to make a determination of hegemonic conditions more challenging. Our question has become harder to answer as research has progressed, not easier. Still, historians’ answers and interpretations are interesting. As Paul Gilje writes in “The Rise of Capitalism in the Early Republic,” some of course simply point to capitalist ownership of the means of production — the businesses, factories, tools and equipment, resources, and so on — but others disavow domination until capitalists also control the political system. The level of industrialization is another suggestion, but historians such as Jonathan Prude and William Stott urge a step away from centering mechanization and new technology, instead focusing on the division of labor, which brings artisans of the early American republic, a key period of transition, deeper into the story of capitalism’s rise. In other words, for our purposes, hegemonic capitalism would not be reached until the division of labor (the creation of goods broken down into small steps that low-skilled wage workers could accomplish, replacing the full-task, high-skilled creator and increasing production) had come to define society, regardless of the state of industrialization and even, perhaps, of who owned the means of production and the State.

Further, wage labor, Rockman notes, has long been considered the “sine qua non of capitalism,” its essential element. Before capitalism came to dominate, for instance in eighteenth-century America, most people were involved in their own familial agricultural production — most workers, including artisans and merchants, did not work for someone else, receiving a wage from an employer. The shift to wage labor was a massive change. However, Rockman writes, New Capitalist Studies have drifted away from wage work as the essential characteristic of the capitalist system. Millions of American slaves certainly worked for someone else, and while they received no wages, theirs was a form of exploitation that could not be ignored in the story of economic development. In other words, some historians, those who see slavery as integral to capitalism rather than oppositional or anomalous (see Gilje), may look beyond widespread wage labor for the key characteristic of hegemonic capitalism. They may look instead, for example, to the ubiquity of a new ideology. Michael Zakim and Gary Kornblith’s Capitalism Takes Command phrased it as “capital’s transformation into an ism.” The wider culture adopted principles beneficial to business, though this took some time. As John Larson writes in “The Genie and the Troll: Capitalism in the Early American Republic,” despite some historians positing that capitalism arrived in America with the first European ships, others have demonstrated that the yeoman farmers of the early republic still held cultural values detached from the capitalistic ethos, uninterested in profit. But eventually values like individualism and competition did replace community reciprocity, respected hierarchy, and other norms (see Gilje and Larson). Could one claim that capitalism had reached predominance without the new self-interested belief system doing so as well?

Let’s consider how a few other historians addressed this question of what constitutes capitalism and how American society transitioned to it in the early republic period. In “The Enemy is Us: Democratic Capitalism in the Early Republic,” Gordon Wood addresses the debate between “moral economy” and “market economy” historians of the early American republic. The former posit that period farmers lacked a capitalistic, profit-driven mindset, focusing only on family and community needs. The latter argue that farmers engaged in market exchange to a high enough degree to show that capitalism’s grip on society took hold earlier than moral economy historians would like to admit. Wood argues that moral economy proponents have strained reason in their attempts to portray farmers without any capitalistic ethos. Farmers are portrayed as merely providing for their families no matter how intensely they engage in the market. They cannot be capitalistic because, for the caricature-crafting moral economy historians, by definition capitalists have greedy, evil intentions. Wood critiques such thinkers for a lack of nuance and for attempting to fit the early republic into a Marxian framework that strictly requires a checklist of elements for the existence of capitalism; for instance, if there was no class exploitation of American farmers, there was no capitalism, despite the existing free market and other characteristics. Among the evidence Wood uses are the writings of a period laborer to show the true nature of economic activities and social relations. The cartoonish capitalists, Wood writes, did not bring about capitalism, but rather ordinary people with moral and market sensibilities; indeed, the essay breaks down some barriers between the sides of the historical debate.

Jean-Christophe Agnew helms the afterword of Capitalism Takes Command, delivering a short essay entitled “Anonymous History.” The piece concerns the financial instruments of capitalism and their effect on ordinary Americans, such as farmers, in the early republic period. Agnew argues that the “paperwork” of capitalism, the new financial, commercial, and legal devices — the mortgages, debts, securities, bonds, investments, lines of credit, and so forth — powered the shift from a society of family-centered production to industrial capitalism. Farmers became caught up in these instruments, pulling them from their private worlds and communities and into a truly national economic system. Agnew draws on the essays featured in Capitalism Takes Command for evidence, touching on several of their themes and theses, pulling them together. He cites, for instance, nineteenth-century farmers who felt their world had been turned upside down as they were sucked into the developing commercial system, worrying they would “die of mortgage” and complaining the mortgage “watched us every minute, and it ruled us right and left,” to quote a poem. The historian’s title, “Anonymous History,” refers to a focus on the economic practices, such as use of these financial instruments, that impacted ordinary people, rather than a focus on big names like Rockefeller and Carnegie, who do not appear in the book.

Wood argues that farmers had more of an entrepreneurial spirit and sought profit in the market, touching on the ideology and free market elements necessary for capitalism’s ascendance and moving the timeline up a bit, to the chagrin of moral economy historians. Agnew also finds earlier, more advanced capitalist development among farmers, though it is more against their will and to their detriment; he focuses on the financialization characteristics of the new economic system. These historians are not alone in their conclusions. Robert E. Wright, in “Capitalism and the Rise of the Corporate Nation,” also centers financial instruments in the story of capitalism’s ascendance. Investment, credit, and loans allowed corporations to hugely expand production and profits; corporations then came to dominate the U.S. economy. Agnew and Wright would concur that modern capitalism entails widespread financialization, which became a true force in the early republic period. Jeanne Boydston, who penned “The Woman Who Wasn’t There: Women’s Market Labor and the Transition to Capitalism in the United States,” might better relate to Gordon Wood. Wood wrote about ordinary Americans engaging in market exchange and rejected the false dichotomy of capitalist mindset vs. family-focused provider. Boydston looks at women’s economic roles in the antebellum era. Women’s work helped farms maintain their self-sufficiency and at the same time fed the market and grew the economy — for instance, women undertook outwork for textile mills, receiving compensation. What served the family served capitalism, and vice versa.

While these are not the only elements of hegemonic capitalism, they are major ones that offer a glimpse at a lively historiographical discussion. Given the nuance of the debate and the myriad definitions (or definitional apathy) inherent in it, it seems sensible to posit that a healthy (or unhealthy) mix of societal realities — a preeminent ideology, capitalist ownership of the means of production and governmental functions, wage labor, industrialization, financialization, a free market, the division of production tasks, and more (such as consumption levels or competition; see Larson) — is necessary to stamp capitalism as hegemonic, a thesis that should please both everyone and no one.


Football as Chess

Of all the sports, American football is most like chess.

(The other football is the most like dance.)

Look at a Sunday game and then at the chessboard.

The two teams in their light and dark, away and home.

Rows of pawns, offensive and defensive lines, charging each other — clash.

But these are heavier, slower pieces, just a square or two at a time.

The opposing sides are desperate to get to one key piece, the quarterback-king, who must stay well-defended. If he gets trapped that’s trouble.

What does the knight do but a quick out route? Tight end.

The rook with its go route, the bishop with its slant. Speedy receivers.

What if the rook played QB in college and did a quick castle? The king’s suddenly out wide, playing receiver.

After each play, a stop — time to tweak your strategy, thinking multiple moves ahead, and make the next play call. Adjust the board and your headset, because you’re the coach.

Make it to your opponent’s endzone with a pawn and score another queen!

At least that many football players don’t leave the field with injury.

Play football, risk brain damage. Play chess, risk madness?


It’s Illegal for Most Public Universities in Missouri to Offer PhDs

Only one public university (system) in Missouri can offer PhDs. Only one can offer first-professional degrees in law (J.D.), medicine (M.D.), and more.

The University of Missouri and its supporters in the legislature have for decades maintained a monopoly on doctoral degrees. For a long time, only UM system schools could offer them.

For instance, in 2005, Missouri State University was banned from offering any doctoral, first-professional, or engineering programs unless offered in cooperation with Mizzou, which would be the degree-awarding institution. This was the price of then-Southwest Missouri State’s name change to Missouri State: the name in exchange for limits on growth, to protect Mizzou’s position as the state’s largest university and its “prestige.” Other laws barred or scared off additional universities from offering the highest degrees.

In 2018, Missouri passed a law with some good changes, some bad. Universities were finally given a pathway to offer more doctoral degrees — like, say, a Doctor of Education (Ed.D.) — without going through Mizzou. But it was enshrined into law that “the University of Missouri shall be the only state college or university that may offer doctor of philosophy degrees or first-professional degrees, including dentistry, law, medicine, optometry, pharmacy, and veterinary medicine” (H.B. 1465, page 4). Further, engineering degrees and a few others must still go through Mizzou.

Impacted universities include Missouri State, Truman, Central Missouri, Southeast Missouri, Harris-Stowe, Lincoln, Missouri Southern, Missouri Western, and Northwest Missouri. Looking at their catalogues you find no doctoral programs, with a few exceptions, such as two at Central Missouri offered through Mizzou and Indiana State, and eight at Missouri State, with one through UMKC.

Proponents frame all this as eliminating costly duplicate programs and promoting cooperation. But by that reasoning, why should multiple universities offer the same bachelor’s degrees? The actual reasoning is obvious. A monopoly on doctoral degrees means more students and income for the UM system. At the expense of every other public university. At the expense of students, who may want to study elsewhere. And to the detriment of the state, which loses money to other states when students don’t get into Mizzou or a sister school, are priced out, or do not find the program they’re looking for — they have no choice but to go to graduate school in another state.

It’s high time Missouri legislators corrected this nonsense. Students, alumni, and everyday proponents of fairness and sanity should contact their legislators and those who serve the districts of affected universities.


When to Stop Watching ‘The Walking Dead’

Mercifully, The Walking Dead came to an end in November 2022. Its final season was released for the masses on Netflix this month. Having trudged through the entire series, we can at last confirm that yes, we have wasted years of our lives.

(This article exists primarily for those who have not seen the show or are a few seasons in. There are a couple light spoilers for the part of the show you should watch, seasons 1-8 [oops, article spoiler!]. There are some heavier spoilers for the later seasons, but who cares — you shouldn’t watch them. Secondarily, the piece exists for those who have seen the entire thing and seek commiseration.)

Just over halfway through its 11-season run, The Walking Dead began a slow decline in quality from which it simply never recovered. The fatal blow was the loss of its main character, Rick Grimes, in season 9, when actor Andrew Lincoln departed. A show with a large cast of characters needs an anchor, someone to revolve around. One can perhaps better get away with a hundred characters if that had been the nature of the show from the beginning, but TWD is disorienting because it has a main character for eight seasons and then none for the last three. It lost its center. (The comics did it right. The creator, Robert Kirkman, abruptly ended the series when Rick died, shocking fans and leaving the bamboozled distributor throwing out fake upcoming issue covers. See, readers experienced this world through Rick, and when he ended so did the experience. No one was safe in the dystopia, not Rick, not us. If only the show had been bold enough to do that.) Other key reasons for the descent from a solid hit to the okay-est show ever include the inevitable repetition (we have to find a new home again, we have to fight the next bad guy / group), the delightful slow burn’s eventual devolution into a miserable 45 minutes of nothingness that strongly suggested the showrunners had no idea how to wrap this thing up, and the creeping contrivances and character stupidity that are hallmarks of poor writing, as I wrote elsewhere:

Bad writing is when characters begin following the script, rather than the story being powered by the motivations of the characters… The characters’ wants, needs, decisions, actions, and abilities [should determine] the course of events — like in real life… Series that blast the story in a direction that requires characters, in out-of-character ways, to go along with it will always suffer… The Walking Dead, in addition to forgetting to have a main character after a while and in general overstaying its welcome, was eventually infected with this. (There’s no real reason for all the main characters to cram into an RV to get Maggie to medical care in season 6, leaving their town defenseless; but the writers wanted them to all be captured by Negan for an exciting who-did-he-kill cliffhanger. There’s no reason Carl doesn’t gun Negan down when he has the chance in season 7, as he planned to do, right after proving his grit by massacring Negan’s guards; but Negan is supposed to be in future episodes.)

While the derivative format and bad writing reared their ugly heads before it, “Wrath,” the final episode of season 8, is when one should say a firm goodbye to The Walking Dead. Finish the season and never look back. It’s not simply that things get worse after this — and they do — but “Wrath” actually does a decent job rounding off the show’s theme. What made TWD powerful was not only its compelling characters who you could lose at any time, its great action, gore, horror, and twists, but its question of how to hold onto your humanity when humanity has gone to hell. Do you maintain your decency and ethics, or do you survive? You cannot often have both. Characters struggle to remain good people. Some are mostly successful. For others, the struggle pulls them into madness. Some lose momentarily or entirely, in order to live, descending into a darkness and doing horrific things. Can our protagonists still be called good? We are asked this; the characters ask it of themselves. “Wrath” deals with this issue. Rick wants to return to who he was, to reclaim some of his humanity, and build a world where it can be restored for all. Other protagonists — who one loves just as much as one loves Rick — begin plotting to do awful things to an enemy in the next season. This is the episode’s mild cliffhanger, the attempt to draw you back for more. If you walk away from the show, you’ll have to give up on seeing where that story thread goes. Having seen such, I argue it’s not worth it. End the series there, knowing your heroes will continue fighting to survive in this zombie apocalypse for the rest of their lives, and at the same time fighting not to fall into savagery and evil. After season 8, this theme is increasingly forgotten, and you’d better believe that the show is no longer smart enough to include it in the actual conclusion.

Seasons 9 through 11 have their positives of course. Alpha and the Whisperers are kind of cool, there are some good horror moments that keep the walkers dangerous, and Negan’s redemption arc is without question the most interesting element. But otherwise there’s not a lot to write home about. Beyond Rick vanishing and more nothing-to-see-here episodes, there are desperate, disorienting time jumps, a horde of new characters that aren’t particularly interesting (if you’ve seen these seasons, try to remember who Magna is; it’s impossible), and a season 11 villain / community, Pamela and the Commonwealth, that is the weakest of the series. Plus, since the Commonwealth is a large, safe city, our characters get to leave the terrifying apocalyptic tribulation and enter the pulse-pounding world of…local journalism, courtroom drama, and peaceful protests over inequality. The last episodes try to pull at your heartstrings with flashback footage from earlier episodes, when the show was actually good, but this also felt somewhat desperate to me and wasn’t terribly successful. And yes, Rick and Michonne appear at the very end, but it’s a nothing burger: they are precisely where we last saw them, with Rick a captive and Michonne searching for him. Just in case there’s a movie. The end. Besides those bits, this season could have been inserted earlier in the show and you would never have known it was designed to be the last one — it’s simply more of the same, another bad guy defeated. The “why” of it all was entirely beyond me. That’s what you tend to ask yourself after season 8. Why does this show exist? Why am I watching this? May this writing save you some valuable time.


Radical Feminism v. Cultural Feminism

With Daring to Be Bad, Alice Echols is the first historian to chart the rapid rise and fall of radical feminism in twentieth-century America.[1] Radical feminism, birthed in 1967, was eclipsed by cultural feminism by 1975.[2] Writing in 1989, Echols sought to demystify radical feminism for readers.[3] This required a significant exploration of the tendency that succeeded it: “A study of this sort seems to me especially important because radical feminism is so poorly understood and so frequently conflated with cultural feminism. This conceptual confusion arises in part because radical feminism was not monolithic and aspects of radical feminism did indeed anticipate cultural feminism.”[4] The latter evolved from the former, plus “cultural feminists almost always identified themselves as radical feminists and insisted that they were deepening rather than jettisoning” radicalism, creating fertile ground for disorientation.[5] Echols’ work is an intriguing history of these theories. Let us review the major distinctions between them.

Radical and cultural feminists have important points of intellectual departure. Radical feminism sought revolutionary changes in power structures, along Marxist lines, to bring about gender equality; cultural feminism was a turn inward, attention drifting away from the State and toward women’s culture, with the establishment of women’s businesses and other supports (stores, health clinics, credit unions, festivals) that to critics represented “an evasion of patriarchy rather than a full-throated struggle against it.”[6] The former was an anticapitalist movement for political transformation, the latter a self-sufficiency, self-improvement counterculture that rejected class struggle.[7] The radicals stressed that the personal was political — a new system was needed to rectify oppression in the home, the bedroom, and so on.[8] Culture-minded reformers viewed matters from the other direction: the personal was the “foremost site of change,” from which a new world could be built.[9] Each movement included some form of opposition to male political supremacy and some construction of a new women’s culture, but each poured most of its energies into one arena. Echols offers a helpful parallel by pointing to the civil rights movement, which saw black nationalist offshoots that were “more involved in promoting black culture than in confronting the racist policies of the state.”[10] Of course, there were many other ideological differences among feminists. For instance, would women’s liberation be best served by minimizing male-female differences (the tack of the radicals) or placing more value on a unique female nature dismissed by the patriarchal society (the tack of the culturalists)?[11] Should you eradicate gender or celebrate it?[12]

Both tendencies left important legacies. The women in the earlier movement for social transformation demonstrated the power ordinary women have to enact political change. “They fought for safe, effective, accessible contraception; the repeal of all abortion laws; the creation of high-quality, community-controlled child-care centers; and an end to the media’s objectification of women.”[13] Unjust rape and domestic violence policies were challenged, as was exclusion from workplaces and universities.[14] Radical feminists engaged in direct action and civil disobedience, disrupting Miss America pageants and Senate hearings, hosting rallies, marches, and sit-ins.[15] Their organizing pushed the United States in a new direction. The Fourteenth Amendment was applied to women in Reed v. Reed (1971), the Equal Rights Amendment sailed through Congress (1972), the right to an abortion was guaranteed (1973), and more. With the ascendance of cultural feminism, political successes, expectedly, trailed off.[16] However, the later movement for personal transformation turned away from the talk of capitalism’s overthrow and other tenets of radicalism, broadening the tent. After the 1970s, far more women of color joined the movement, for instance.[17] In the same way, during the heyday of the radical feminists, “liberal feminism was…in some cases more diverse” than the radical feminist movement.[18] Though cultural feminism cannot be applauded for shifting focus away from political struggle, and much merit can be found in radical feminist beliefs, it is difficult to deny that more women might be attracted to a more tempered movement further divorced from the Marxist niche. This went beyond anticapitalism, as well, to other aspects of radical thinking.
One of the defining texts of cultural feminism was Jane Alpert’s 1973 “Mother Right” piece, which “reaffirmed rather than challenged dominant cultural assumptions about women” by refusing to erase male-female differences, instead celebrating the “biological difference between the sexes… The unique consciousness or sensibility of women…”[19] Cultural feminism was better adapted to mainstream American ideologies, and could therefore attract a wider, more diverse following.

Overall, Daring to Be Bad offers history students and lay readers many ideas and phenomena to consider. It spotlights the bitter infighting leftwing movements typically experience. It prompts one to ask whether members of an oppressed group should focus on their commonalities or fully embrace their differences (an intersectional, but potentially paralyzing or divisive, approach). And will, as Alpert wrote, “economic and political changes…follow rather than precede sweeping changes in human consciousness”?[20] Or is it best to change social structures first, as the radicals insisted, freeing human thought, letting ideology catch up? Echols has produced both a fine history of a Leftist movement and a potential guide for future struggles.


[1] Alice Echols, Daring to Be Bad: Radical Feminism in America, 1967-1975 (Minneapolis: University of Minnesota Press, 2019), xvi.

[2] Ibid., 5.

[3] Ibid., xvi.

[4] Ibid., 6.

[5] Ibid., 7.

[6] Ibid., viii-ix, xviii-xix.

[7] Ibid., 6-7.

[8] Ibid., ix, 3.

[9] Ibid., xix-xx.

[10] Ibid., 7.

[11] Ibid., xviii.

[12] Ibid., 6, 9.

[13] Ibid., 4.

[14] Ibid., vii-viii.

[15] Ibid., ix-x.

[16] Ibid., 293.

[17] Ibid., 291.

[18] Ibid., xxii.

[19] Ibid., 250, 252.

[20] Ibid., 251.

Review: ‘Pauli Murray: A Personal and Political Life’

Troy R. Saxby, casual academic at the University of Newcastle, offers an intimate, engaging look at an increasingly recognized twentieth-century human rights advocate in Pauli Murray: A Personal and Political Life. Pauli Murray’s personal life was as turbulent and winding as her political life was significant to American justice movements. She experienced great personal loss, poverty, discrimination, health problems, and struggles with sexuality and gender identity from the 1910s to the mid-1980s.[1] Saxby’s biography seeks to “connect Murray’s inner life with her incredibly active public life,” which included civil rights activism (first pushing for the integration of the University of North Carolina), helping found the National Organization for Women to work for gender equality, becoming an influential lawyer, professor, and author, and later being the first black woman to serve as an Episcopal priest.[2] Her writings and legal arguments influenced Brown v. Board of Education (and other NAACP battles) and Reed v. Reed (the 1971 case that first applied the 14th Amendment’s Equal Protection Clause to sex), broadening rights for blacks, women, and, after her death, LGBTQ Americans.[3] Considering Murray’s private struggles, Saxby argues, “is essential to understanding Murray,” with her early experiences, her most intimate feelings and thoughts, “shaping her…aspirations.”[4] This of course is a mere truism. All people are molded by prior experience, circumstances, and so on. Still, the impact Murray’s private life had on her public service is a fascinating history, and was lacking in the historiography.[5] Let us consider what motivated Pauli Murray.

One intriguing aspect of Murray’s life was her early refusal to cooperate with unjust systems. As a child in 1910s America, racial oppression led Murray to “hate George Washington, mumble allegiance to the flag, resist standing for ‘The Star-Spangled Banner,’” and more.[6] She “boycotted segregated facilities — instead of taking public transport, she rode her bike.”[7] Not only does this foreshadow Murray’s important work for civil rights, it suggests that her central motivations were already operating, or at least in development, at seven years old, typically an age of conformity. Thus, an exploration of what drove Murray should start there.

From Saxby’s text, it could be argued that early feelings of alienation played a role — this is more of a subconscious motivation, but important nonetheless. Segregation and sexism of course “othered” the young Murray, but there is much else. She was separated from her father and many siblings at age three, after her mother’s death; she fought feelings of abandonment; caretakers like Aunt Pauline and her grandfather were not affectionate; Murray’s darker complexion stood out in her new family, and she felt like an outsider; her complexion was lighter than most of her classmates, however, drawing mockery; the family’s middle-class values kept her at a distance from neighborhood kids; “Pauli also felt different from her classmates because she did not have visible parents”; she was even left-handed, unlike most students and adults.[8] At every turn, Saxby writes, “Pauli Murray stands apart, somehow ‘other.’”[9] A complex, constant sense of difference helped mold Murray into a child who could rebel against nationalism and segregation, among other things: “Pauli’s rebellious streak, a hallmark of her adult life, emerged at school — such was her ability to turn a classroom to chaos that one of her primary school teachers would take Pauli with her whenever she was called away from the classroom.”[10] The field of psychology has shown that children lacking a sense of belonging often act out (and struggle with poor mental health).[11] Whereas other children without her experiences might go along with hands over hearts and direction toward the back of a public bus, Murray’s history of alienation led to resistance.

There were of course positive influences as well, more conscious motivations, such as her grandparents’ emphasis on black pride and uplift, and Aunt Pauline’s assurances that Murray was destined for greatness.[12] The Fitzgeralds in fact had long “avoided any contact with white people if it meant losing dignity…”[13] America and its segregation, in other words, had already been questioned and defied within her family. There are many factors that push us to do what we do. But Murray’s view that “in some ways, I was alien” dominates the text, especially as she becomes an adult and her feelings toward women and her interest in passing as a man develop.[14] She was, at the same time, rejected from one college for being a woman and from another for being black.[15] Murray remained The Other in myriad ways. This fact contributed to her mental health challenges.[16] That it also pushed her toward activism seems a sensible supposition: Otherness impacted her behavior as a child (so it might do the same in adulthood), and it could only be rectified through policy change. The sense of difference that drove Murray’s behavioral nonconformity against unjust systems as a child persisted, rose to a more conscious place, and manifested anew in work in the black struggle and the feminist movement — as an adult, Murray could work to create a society with greater inclusion for herself and others. She never felt like she belonged, so she built a world with more belonging.

Overall, Pauli Murray conjures many musings on the nature of history and biography, which undergraduate and graduate students may find interesting. For instance, that environment and prior experiences motivate an individual is a given, as stated, but it is also open to interpretation. What factors were at play, and how influential each was, can be argued at length, based on historical sources. Other historians may see the Fitzgeralds’ rebellion against segregation as a much more significant factor in Murray’s activist path than her sense of being an outsider. One may instead emphasize the constant tragedies of her life and consider potential connections to social oppression, such as her father being killed by a white man in an insane asylum when she was ten.[17] As we have seen, conscious and subconscious drivers can be theorized and posited, their strengths speculatively compared. The forces that molded historical actors are as powerful as they are elusive.

For more from the author, subscribe and follow or read his books.

[1] Troy R. Saxby, Pauli Murray: A Personal and Political Life (Chapel Hill: The University of North Carolina Press, 2020), xiv-xv.

[2] Ibid., xiii, xvii, 68-76.

[3] Ibid., 145-146, 212-213, 249-251.

[4] Ibid., xiv, xvi.

[5] Ibid., xv-xvi.

[6] Ibid., 23.

[7] Ibid., 24.

[8] Ibid., 6-7, 9-12, 15-16, 20, 23.

[9] Ibid., xvii.

[10] Ibid., 23.

[11] Kelly-Ann Allen, DeLeon L. Gray, Roy F. Baumeister, and Mark R. Leary, “The Need to Belong: A Deep Dive into the Origins, Implications, and Future of a Foundational Construct,” Educational Psychology Review 34 (August 2021): https://link.springer.com/article/10.1007/s10648-021-09633-6.

[12] Saxby, Pauli Murray, 21-22, 38.

[13] Ibid., 24.

[14] Ibid., 24, chapter 2, for instance 45-48.

[15] Ibid., 39, 70.

[16] Ibid., 65-68.

[17] Ibid., 35.

War is Peace, Freedom is Slavery, Ending Democracy is Saving It

George Orwell’s 1984 quickly introduces the reader to the three slogans of its fictional authoritarian government: war is peace, freedom is slavery, ignorance is strength. According to the common interpretations, these are not meant to be literal equivalents — to be at war is not to be at peace. Rather, as the novel suggests, they are propagandistic cause-effect relationships, tradeoffs. War, the State promises, will bring about peace. True freedom is found in slavery — if you submit to the Party, you will live a successful, comfortable, happy life. Ignorance, giving up personal and contrary ways of thinking, makes society stable, safe, united. The slogans present necessary evils, unpleasant means to noble ends: accepting war, slavery, and ignorance brings personal and national benefits. (The order reversal of the middle slogan is intriguing. We have, from the reader’s perspective, “bad is good,” “good is bad,” “bad is good.” Orwell chose not to pen “slavery is freedom,” which would have aligned with the others and made the “slavery brings freedom” interpretation even stronger. Still, any notion of “freedom bringing slavery” is difficult to reconcile with the other two, given that this propaganda is presenting terrible things as desirable. The Party isn’t going to tell citizens to watch out for slavery but embrace ignorance and war.) Winston Smith, of course, finds out the hard way what happens when war, slavery, and ignorance are not accepted.

In a time of right-wing attempts to overthrow free and fair elections, rising authoritarianism among the populace, and an American system too underdeveloped to handle anti-democratic threats like Trump, one can’t help but think of Orwell. We’ve seen in terrifying fashion how democracy requires the truth to survive, withering in ages of disinformation. Even the language of the era became concerning. Blatant falsities about an inauguration crowd size were infamously labeled “alternative facts,” not really doublethink, but reminiscent of how past facts were erased and replaced in the novel. Truth Social, a platform built for Trump and his lies, sounds uncomfortably like the Ministry of Truth, the propaganda division of Oceania whose pyramid-shaped building displays the Party’s three slogans. Of course, conservatives delight in noting that 1984 was a 1949 response to authoritarian socialism in the Soviet Union, and often whine about how woke cancel culture, COVID vaccines, masks, and lockdowns, or welfare and universal services represent the tyranny and submissive collectivity of which Orwell wrote. But they forget Orwell was a socialist who advocated for democratic socialism as frequently as he warned of communism, and they live in a strange world where every liberal (to say nothing of Leftist) policy or cultural shift warrants screams of 1984 but demagogic leaders, casual dismissals of legal and democratic norms, absurdities spewed for reasons of power, plots to ignore election results, violent attacks on the Capitol, authoritarian and nationalistic voters, and so on are somehow of little concern.

But clearly, while it may be most appropriate for the text, depending on one’s reading, the cause-effect interpretation of the slogans doesn’t best reflect our realities. (Though you do see hints of it at times. American war has long been framed as necessary for peace, even if it achieves the opposite, and other horrors.) A literal equivalent interpretation gets much closer. While it probably won’t be publicized and sloganeered in a cartoonish manner, authoritarianism appears to rely on parts of the populace living in parallel worlds. (The State would publicize tradeoffs and push you to accept them, but it would not advertise the fact that you believe falsities and contradictions.) Parallel worlds, built on conspiracy theories and lies, were of course a major reason German democracy collapsed in the 1930s. The Nazis blamed Jews and Communists for Germany’s problems, which justified Hitler’s dismantling of democratic processes and restriction of civil rights. This is how authoritarianism grows and triumphs. It is not that one part of the populace believes war is necessary for peace and another does not. One believes war is peace. It doesn’t realize or accept that it’s ignorant, enslaved, at war; it thinks it is peaceful, free, and strong (this is different from the novel, where everyone knows, for instance, that it is wartime, with news from the front broadcast everywhere; “Winston could not definitely remember a time when his country had not been at war”). One part of the population believes destroying democracy is saving it. The armed mob that broke into the Capitol, the conservatives decrying mass voter fraud (60% of Republicans, nearly 40% of the nation, still believe the 2020 election was stolen), and even some of the politicians sustaining the lunacy…they believe democracy is in danger as sincerely as liberals (and moderates and sane conservatives). It must be protected from those cheating Democrats, fraudulent votes, bad voting machines. Their own reality. Such dupes are completely detached from quality standards of evidence and reason (why would you trust a bad documentary or an article on Breitbart over the conclusions of Republican-controlled, recounted states, Trump’s own Justice Department and Department of Homeland Security, and some 60 federal court cases?), but they think they’re saving democracy. When they’re actually cutting its throat.


No Suburban Housewife: The Other Women of the 1950s

The dominant social construction of womanhood from 1945 to 1960, which became the dominant historical image of women later on, was one of the suburban housewife and mother — white, middle-class, straight, and patriotic, she was content to cook, care for the home, and raise children.[1] But as Not June Cleaver, edited by historian Joanne Meyerowitz, demonstrates, the postwar era was far more complicated. Women were politicians, workers, union organizers, and strikers; they were Communists, peace activists, and secret abortionists; women were city-dwelling Mexican Americans, Chinese Americans, black Americans; they were lesbians with cultural creations, Beatniks who ran away from home, the poor just trying to survive, and tireless organizers pushing for civil rights and gender equality, whose efforts would expand in the 1960s.[2] Though an anthology with the works of many historians, Meyerowitz’s text argues that women had more agency and more diverse experiences and ideologies than the historiography acknowledged; it “aims…to subvert the persistent stereotype of domestic, quiescent, suburban womanhood.”[3] She further demonstrates that the postwar literature and “public discourse on women was more complex than portrayed” in works such as Betty Friedan’s famous The Feminine Mystique, which positioned women as trapped in the home by inculcating cultural messaging.[4] Yet, as we will see, magazines and other media could in fact push back against the gender ideal and show this other side of the age.[5] Let’s look closely at three papers in the text, each revealing how women broke the mold.

Donna Penn’s “The Sexualized Woman: The Lesbian, the Prostitute, and the Containment of Female Sexuality in Postwar America” examines the lives of lesbian women of the era and the larger society’s changing reactions to their existence. In an era adorned with the stereotype of the heterosexual wife, considerable effort — in films, books, articles by social scientists, and so on — was expended on vilifying lesbianism more harshly than in prior decades, for instance by beginning to link gay women to the pre-established categorization of prostitutes as fallen women, sexual deviants in a criminal underworld.[6] “Many prostitutes,” one expert wrote, “are latent homosexuals insofar as they resort to sexual excesses with many men to convince themselves that they are heterosexual.”[7] Lesbians were often prostitutes, prostitutes were often lesbians, it was asserted — and prostitutes, as everyone knew, were of the wicked underbelly of society.[8] This was different from the dominant prewar image of lesbians as refined middle-class women with lifelong female partners, otherwise respectable.[9] Though some lesbians took assumptions of sexual depravity to heart, struggling with sexual identity under restrictive social norms and pressures, others pushed back against demonization.[10] Defiant appearances in public, building community at lesbian bars, writing lesbian pulp fiction and articles, and more signaled a right to exist and to live true to the self.[11] More intimately, a culture of “sexual ceremony and dialogue” developed that gave lesbians a coded language to express interest beyond the repressive gaze of the larger society, and which also subtly subverted gender norms when butch women, who mirrored the man in heterosexual relationships, made giving pleasure, rather than receiving it, their “foremost objective.”[12]

In “The ‘Other’ Fifties: Beats and Bad Girls,” Wini Breines shows the extent to which women and girls sought to escape from their dull, prescriptive futures as homemakers. Rather than happy in their place, as the standard image of the postwar era suggests, some dreaded “a life where nothing ever happened. I looked around me and saw women ironing dresses and hanging out clothes and shopping for food and playing mah-jong on hot summer afternoons, and I knew I couldn’t bear to spend my life that way, day after drab day, with nothing ever happening. The world of women seemed to me like a huge, airless prison…”[13] So, like boys and men, girls and women became or imitated Beats, the free-spirited artists, writers, and musicians of New York City who rebelled against mainstream society, its conservatism, materialism, religiosity, male careerism, and so forth.[14] Women and teens enjoyed rock and roll, jazz, sex, intellectual discourse, racial integration and black culture, bad boys, drugs, artistic creativity, Buddhism, and other experiences that they described as “Real Life,” an existence “dramatic, unpredictable, possibly dangerous. Therefore real, infinitely more worth having.”[15] Not only did these exciting countercultural lives undermine the happy housewife trope, they contradicted the hegemonic ideal of girlhood — properly behaved, virginal, neatly dressed and done up, hanging out “around the malt shop” — found in magazines, novels, films, and other cultural outlets.[16] Rebellious females also contradicted the notion, pushed by social commentators, that problem children of this generation were exclusively boys, who, unlike girls, were expected to make something of themselves, but were failing to do so after falling into delinquency, hipsterism, doping, and the rest.[17] Although the stories of female Beatniks would not be well-captured until memoirs printed in the 1970s, the 1950s saw films like The Wild One and Rebel Without a Cause, which displayed girls’ interest in troublemakers and bad boys.[18]

Finally, there’s Deborah Gerson’s “Is Family Devotion Now Subversive? Familialism Against McCarthyism,” wherein the mainstream construction of American womanhood is shattered by women running their households without their husbands, organizing, and speaking up for Communism and free speech. When the Smith Act of 1940 eventually sent leaders of the Communist Party to prison or into hiding over their political and revolutionary beliefs, their wives formed the Families Committee of Smith Act Victims, which gave “financial, material, and emotional assistance” to each other, their children, and the prisoners.[19] Fundraising allowed for childcare, trips to visit fathers behind bars, birthday presents, and more.[20] But the Families Committee also existed to fight anticommunist policies and practices.[21] It denounced the imprisonment of Reds and the FBI’s continued harassment and surveillance of the wives and children.[22] In a sense, the Smith Act blew up the postwar ideal, creating single mothers who had to enter the workforce, become heads of households, and return to the world of organizing they had known as young Communist women.[23] The Families Committee seized the opportunity to publicly turn American ideology on its head, through pamphlets, articles, and letters.[24] To be a true American, a good mother, a healthy family in the 1950s was to be anticommunist — patriotic, loyal, conformist.[25] But the U.S. government was, in its persecution of dissenters, attacking families and ignoring stated American values.[26] “No home is safe, no family life secure, as long as our loved ones are persecuted and imprisoned for exercising their constitutional right to speak out for their political ideas,” the women wrote in one pamphlet.[27] It was the Communists, in other words, who were fighting for secure, whole families, and the First Amendment. (Language that centered families, one should note, was a new tack for the Communist Party, which long focused on how power impacted workers; and the Committee itself represented a greater leadership role for women in the CP.[28]) The all-female Families Committee continued its support network and its campaign of familial rhetoric until the late 1950s, when the Supreme Court ruled imprisonment over beliefs, even revolutionary ones, as long as no specific plans for violence were made, to be unconstitutional, and Communist leaders were freed or returned from hiding.[29]

Overall, while Not June Cleaver reveals women’s diverse identities, perspectives, and activities, Meyerowitz of course does not deny the conservatism of the era, nor the domestic ideal.[30] But the work makes the case that dominant ways of living and meanings of womanhood (there were of course many white, middle-class, suburban housewives) were not as dominant as the historiography suggested. There were rebels and countercultures enough to toss out myths of homogeneity. There was sufficient diversity of postwar literature to question notions of textual ideological hegemony. We mentioned lesbian pulp fiction, blockbuster films with rebellious male and female teens, and articles by and about Communist women in newspapers. Meyerowitz, in her study of nearly 500 magazine articles from Reader’s Digest, Atlantic Monthly, Ebony, Ladies’ Home Journal, and more, found that “domestic ideals coexisted in ongoing tension with an ethos of individual achievement that celebrated nondomestic activity.”[31] “All of the magazines sampled advocated both” housewifery, motherhood, and other stereotypical experiences and women’s advancement beyond them.[32] Indeed, 99 articles “spotlighted women with unusual talents, jobs, or careers,” such as in politics or journalism.[33] Another 87 articles “focused on prominent entertainers.”[34] Compared to magazines of the 1930s and 40s, there was in fact less focus on the domestic sphere.[35] But glorification persisted of the woman — sometimes the career woman — who was a “good cook” and “never a lazy housewife,” who was beautiful, married, motherly, soft-spoken.[36] The postwar era, then, was less a regression for women who found new opportunities and independence during World War II (the ranks of working women actually grew after the troops came home[37]), less a time of a universal gender ideology and a concretized women’s place, and more a clash of recent progress, new ideas, and different experiences against the larger, traditionalist society.


[1] Joanne Meyerowitz, ed., Not June Cleaver: Women and Gender in Postwar America, 1945-1960 (Philadelphia: Temple University Press, 1994), 1-3.

[2] Ibid., 3-11.

[3] Ibid., 1-2, 4, 11.

[4] Ibid., 2-3.

[5] Ibid., 229-252.

[6] Ibid., 358-372.

[7] Ibid., 370.

[8] Ibid., 370-371.

[9] Ibid., 369.

[10] Ibid., 372-378.

[11] Ibid., 375-378.

[12] Ibid., 374-376.

[13] Ibid., 389.

[14] Ibid., 382-402.

[15] Ibid., 391-392.

[16] Ibid., 385-386.

[17] Ibid., 382-383.

[18] Ibid., 396, 398.

[19] Ibid., 151.

[20] Ibid.

[21] Ibid., 157, 160.

[22] Ibid., 152, 157-158, 165.

[23] Ibid., 162, 155-156.

[24] Ibid., 164-168.

[25] Ibid., 152.

[26] Ibid., 152, 165.

[27] Ibid., 165.

[28] Ibid., 166, 170-171.

[29] Ibid., 165.

[30] Ibid., 4, 9.

[31] Ibid., 231-232.

[32] Ibid., 231.

[33] Ibid., 232-233.

[34] Ibid., 232.

[35] Ibid., 249.

[36] Ibid., 233.

[37] Ibid., 4.

Dr. King, Gandhi, and…Alice Paul

In Alice Paul and the American Suffrage Campaign, English scholars Katherine H. Adams and Michael L. Keene seek to lift American suffragist Alice Paul into history’s pantheon of nonviolent theorists and leaders, alongside Mahatma Gandhi, Martin Luther King, Jr., and others.[1] One might posit, particularly after the first few pages of the introduction, that the authors intend to elevate Paul into her proper place as a major figure in the fight for women’s right to vote, alongside Elizabeth Cady Stanton, Susan B. Anthony, Lucretia Mott, Carrie Chapman Catt, and Anna Howard Shaw, for Paul has long been ignored and unknown.[2] The work does this, certainly, but is not the first to do so. Adams and Keene make clear that prior works of the 1980s and 1990s at least partially accomplished this, and clarify what makes their 2008 text different: “It is time for a thorough consideration of her campaign theory and practice.”[3] They see a “blank space” in the history of Paul, the need for an examination of “her reliance on nonviolence” and “her use of visual rhetoric,” the foundations of her theory and practice, respectively.[4] Of course, a “consideration” is not a thesis, and the reader is left to ascertain one without explicit aid. After the parenthetical citations, this is the first clue, for those who did not examine the cover biographies, that the authors are not of the field of history. Fortunately, it grows increasingly clear that Adams and Keene are arguing Paul was one of world history’s great nonviolent theorists and activists, not simply that she was one of America’s great suffragists, seconding prior works.

The introduction, after the comments on the text’s purpose, notes that Paul “established the first successful nonviolent campaign for social reform in the United States, experimenting with the same techniques that Gandhi employed in South Africa and India.”[5] This is the first mark of her full significance. The book concludes with the reiteration that she “created the first successful nonviolent campaign for social change in the United States. Like Gandhi and Martin Luther King, she used every possibility of a nonviolent rhetoric to bring both a greater sense of self and civil rights to a disenfranchised group.”[6] In between, especially in the second chapter, on Paul’s theory, “like Gandhi” is used repeatedly. For instance, it or something analogous is employed on pages 35, 36, 37, 38, 39, 40, and 41. “Like Gandhi, [Paul] would not alter her course to placate unhappy adherents.”[7] (The parallel thinking and work is outlined, but the authors do not actually cite evidence concerning what influence Gandhi, leading passive resistance in South Africa until 1914 and in India after that, had on Paul, or vice versa, despite teasing that the two may have met in 1909.[8]) Not being Paul’s contemporary, King receives less attention. But by the end of the chapter and the book, the message is received: Paul’s name, her ideology and campaign, should be spoken in the same breath as other historical icons of nonviolent mass movements. There are few similar glowing comparisons to, say, Stanton or Anthony, further suggesting the authors’ primary intent.

The text is organized largely chronologically. The first chapter concerns Paul’s youth, education, and activism in England, while the second chapter, exploring Paul’s theory of nonviolence, is the most thematic. Then chapters three through nine focus on the different activities of the American campaign for women’s right to vote (“The Political Boycott,” “Picketing Wilson,” “Hunger Strikes and Jail”), following its escalation over time (1912-1920).[9] Of course, some of the activities span the decade — chapter three examines the Suffragist paper and its appeals, an ongoing effort rather than strictly an early one.

Adams and Keene use letters, newspapers, photographs, pamphlets, books, and other primary documents of the era to illuminate Paul’s campaign of powerful visuals, persistent presence, and bold acts of protest, as well as her commitment to peaceful resistance and disruption. The Suffragist publication is the most cited source, and Paul’s personal letters are oft-used as well.[10] At times, the authors also cite a plethora of secondary sources, perhaps more than average for historical texts — possibly another subtly different tack of the English academics. Five secondary sources are used on pages 28 and 29 alone, for instance, during an exploration of the goals, tactics, and philosophy of nonviolent action, and the effect is twofold. While it fleshes out the conclusions people like Paul, Gandhi, and King reached, it pulls the reader out of the historical moment. An example:

As Paul’s clashes with Wilson and the legislature escalated, she was keeping her movement in the public eye, but she also risked alienating those with the power to pass the bill. Increasingly strong nonviolent rhetoric could have the wrong effect, as William R. Miller notes: if campaigners “embarrass the opponent and throw him off balance,” they could “antagonize the opponent and destroy rather than build rapport.”[11]

The best works of history often use secondary sources, but this repeated structure, Paul’s strategies approved or critiqued by more modern texts on movement theory, begins to feel a bit ahistorical. It is looking at Paul through the judging lens of, in this case, Miller’s 1964 Nonviolence: A Christian Interpretation. It would have been better for Adams and Keene to use, if possible, Paul’s own writings and other primary sources to capture this idea of the costs and benefits of confronting power. Then secondary sources could be used to note that those in later decades increasingly came to accept what Paul and others had determined or theorized. Perhaps the authors were summarizing and validating that which they did not see summarized and validated in the 1910s, but this is done often enough that one suspects they were stuck in a mindset of working backward.

Nonetheless, the work’s sources powerfully accomplish its purpose, the elevation of Alice Paul. This exploration of her ideological foundations, her theory of passive resistance to change perceptions (and self-perceptions) of women, and her steadfast strength and leadership through a dangerous campaign secure her “place in history.”[12] Adams and Keene demonstrate how Paul’s Quaker background and reading of Thoreau and Tolstoy molded a devotion to nonviolent direct action and “witnessing,” or serving as an example for others.[13] And they show how closely practice — strategies and tactics — followed theory, a key to placing Paul alongside Gandhi and King. Under her direction, visual rhetoric was used to witness and make persuasive appeals, growing from “moderate to extreme forms of conventional action and then from moderate to extreme forms of nonviolent action,” all widely publicized for maximum impact.[14] It began with articles, cartoons, and photographs in publications like the Suffragist, as well as speeches and gatherings, progressed to artful parades, coast-to-coast journeys, and lobbying, and then escalated to a boycott of Democrats for opposing women’s rights, a picket of the White House, and a badgering of President Wilson wherever he went. Upon arrest over picketing, the suffragists engaged in work and hunger strikes (Paul was force-fed). After release from prison, they burned Wilson, and his words, in effigy.[15] Rejecting the more violent methods of England’s suffragettes, Paul and the American activists were nevertheless abused by police, crowds, and prison overseers.[16] Fierce opposition stemmed from reactionary ideologies of womanhood (neither the vote nor direct action was thought the purview of proper ladies), nationalism (such strong denouncements of Wilson and pickets during World War I were deemed unpatriotic and offensive),[17] and likely power (victory could lead to domino effects — the fall of other repressive laws against women, as well as attempts by other subjugated groups to rise up). As Paul had envisioned, their struggle demonstrated to themselves and all Americans women’s strength and power, making it increasingly difficult to argue against suffrage over notions of women’s dependency and weakness.[18] Adams and Keene’s primary sources thoroughly depict all of these developments. For instance, an article in the Washington Star in early 1917, while suffragists tried not to freeze during their daily protest in the capital, demonstrated how direct action informed views on women: “Feminine superiority is again demonstrated. The President of the United States took cold during a few hours’ exposure to conditions the White House pickets had weathered for weeks.”[19]

Overall, Alice Paul and the American Suffrage Campaign is an excellent text for general readers. Due to its oddities, it may not be the best example of historical writing for undergraduate and graduate students — though that is not to say the story of Paul and the more militant American suffragists can be passed over. Adams and Keene’s thesis, though somewhat unconventional, is compelling and urgent. Alice Paul’s name must no longer be met with blank stares by the average American, but, like the name Gandhi or King, with recognition and respect for her many accomplishments.


[1] Katherine H. Adams and Michael L. Keene, Alice Paul and the American Suffrage Campaign (Urbana: University of Illinois Press, 2008), xv-xvi. 

[2] Ibid., xi-xv.

[3] Ibid., xv-xvi.

[4] Ibid., xvi.

[5] Ibid., xvi.

[6] Ibid., 247.

[7] Ibid., 35.

[8] Ibid., 26.

[9] For a summary of the campaign, see ibid., xiv or 40.

[10] Ibid., Works Cited and 258.

[11] Ibid., 39.

[12] Ibid., 247.

[13] Ibid., 21-25.

[14] Ibid., 40.

[15] Ibid., xiv, xvi, 40, 246.

[16] Ibid., 201-204, for example.

[17] Ibid., 92-94, 126-127, 165-166, 167-172, 216.

[18] Ibid., xvi.

[19] Ibid., 166.

An Alternative Womanhood

Deborah Gray White’s 1985 text Ar’n’t I a Woman? Female Slaves in the Plantation South seeks to demonstrate that black womanhood — its meaning and function — in antebellum America differed substantially from white womanhood.[1] It is not only that the roles of black female slaves contrasted in many ways with those of white women, it is also the case, the Rutgers historian argues, that white society’s view of women’s nature shifted in dramatic ways when it came to black women, driven by racism and the realities of slavery.[2] Likewise, the slave experience meant black women (and men) had different perceptions of women’s nature and place.[3] If this sounds obvious, it is only due to the scholarship of White and those who followed. The relevant historiography in the mid-1980s was incomplete and, White argues, incorrect. “African-American women were close to invisible in historical writing,” and it was assumed that black women’s roles and womanhood mirrored those of white women.[4] Indeed, historians were inappropriately “imposing the Victorian model of domesticity and maternity on the pattern of black female slave life.”[5] Because white women were “submissive” and “subordinate” in white society, it was presumed that “men played the dominant role in slave society” as well.[6] Thus, female slaves received little attention, and beliefs that they did not assert themselves, resist slavery, do heavy (traditionally masculine) labor, and so on persisted.[7] Ar’n’t I a Woman? offers a more comprehensive examination of enslaved black women’s daily realities, sharpening the contrast with white women’s, and explores how these differences altered ideologies of womanhood.

White primarily uses interviews of elderly female ex-slaves conducted in the 1930s, part of the Federal Writers’ Project under the Works Progress Administration.[8] Enslaved and formerly enslaved women left behind precious few writings.[9] Anthropological comparison and writings about American female slaves from the era — plantation records, articles, pamphlets, diaries, slave narratives, letters, and so on — supplement the WPA interviews.[10] The book is organized thematically: the first chapter centers the white ideology of black women’s nature, while the remaining five chapters emphasize the realities of slavery for black females and their own self-perceptions, though there is of course crossover.

Given White’s documentation, it is interesting that historians and the American South perceived black women in such disparate ways. Historians put them in their “proper ‘feminine’ place” alongside Victorian white women.[11] They were imagined to fit that mold of roles and expectations — to be respectably prudish, for example.[12] But whites, in their expectations, positioned enslaved black women as far from white womanhood as possible. This is one part of the text where the primary sources powerfully support White’s claims. For Southerners and Europeans before them, black women had a different nature, being far more lustful than white women. The semi-nudity of African women and, later, enslaved women in the South was one factor that led whites to view black women as more promiscuous, while the fact that whites themselves dictated the conditions of enslavement went seemingly unnoticed.[13] To whites, the “hot constitution’d Ladies” of Africa were “continually contriving stratagems [for] how to gain a lover,” while slaves were “negro wenches” of “lewd and lascivious” character, not a one “chaste.”[14] Black women were “sensual” and “shameless.”[15] White women, on the other hand, were covered, respectable, chaste, prudish.[16] This was true womanhood; black women stood outside it. True, there existed a long history in Europe and America of women in general being viewed as more licentious than men, but White makes a compelling case that black women were placed in an extreme category.[17] They were not expected to be prudish or in other ways fit the Victorian model of womanhood, because they were seen more as beasts than women.[18] Racism wrought a different kind of sexism.[19]

Of course, Ar’n’t I a Woman? is about realities as much as it is expectations. The work argues that enslaved girls believed in their equality with boys, as opposed to the inferiority and weakness taught and held true by whites; that the slave community practiced something far closer to gender equality; and that “women in their role as mothers were the central figures in the nuclear slave family.”[20] What it meant to be a woman was quite different for enslaved black women — a woman was physically strong, the family head, an agent of resistance and decision, worthy of equality. “In slavery and in freedom,” White concludes, “we practiced an alternative style of womanhood.”[21] Some of this appears interpretive, however. “Most slave girls grew up believing that boys and girls were equal” is a conclusion based on oral and documentary evidence that slave children engaged in precisely the same work and play, without categorization into masculine and feminine spheres.[22] But White does not quote former slaves or writings of the era explicitly asserting this belief. And the conclusion is far more confident than the prior “young girls probably grew up minimizing the difference between the sexes…”[23] While White’s interpretation is not unreasonable, more primary evidence is needed before shifting from supposition to assertion.

Overall, however, this is a vital text. It convincingly demonstrates how black womanhood was viewed differently by black women and white Southerners alike compared to white womanhood. Intersections of race and gender — how sexism was different for black women due to their race, and how racism against them was impacted by their sex — are well-explained and examined.[24] Showing the interplay between these beliefs and enslaved women’s roles, White makes a course correction for the field, which had entertained various myths. Equally important, she offers an intimate view of the terrors, drudgery, resistance, support systems, families, and much else experienced by black female slaves, which had been sorely lacking in the historiography. Short in length yet broad in scope, the work is highly readable for a general audience, and experiencing it is a powerful education.

For more from the author, subscribe and follow or read his books.

[1] Deborah Gray White, Ar’n’t I a Woman? Female Slaves in the Plantation South (New York: W.W. Norton & Company, 1999), 22.

[2] Ibid., 5-6, 14.

[3] Ibid., 14, 141.

[4] Ibid., 3, 21-22.

[5] Ibid., 21.

[6] Ibid.

[7] Ibid.

[8] Ibid., 24.

[9] Ibid., 22-24.

[10] Ibid., 23-24.

[11] Ibid., 22.

[12] Ibid.

[13] Ibid., 29-33.

[14] Ibid., 29-31.

[15] Ibid., 33.

[16] Ibid., 31, 22.

[17] Ibid., 27 and Carol F. Karlsen, The Devil in the Shape of a Woman: Witchcraft in Colonial New England (New York: W.W. Norton & Company, 1998), xiii-xiv. See chapter 5, especially pages 153-162, as well.

[18] White, Woman?, 31.

[19] Ibid., 5-6.

[20] Ibid., 118, 120, 142.

[21] Ibid., 190.

[22] Ibid., 118, 92-94.

[23] Ibid., 94.

[24] Ibid., 5-6.

How You Can Help Missouri State Reach an FBS Conference

Missouri State students and alumni have long been unhappy being stuck in the Missouri Valley Conference. Just look at the extremely active forums of Missouri State’s page on 247Sports.com, where you will find constant dreamers longing for a school of our size to move onward and upward.

Much of this centers around football. MSU basketball, baseball, and so on playing in a smaller, less-renowned D-I conference has never been ideal, but at least we can win conference championships and go on to compete for NCAA national titles. We have the chance to battle at the highest level. With football, we’re FCS, and have no such opportunity. Bears fans want to step up to the FBS. 

And the administration is starting to feel the vibe. In August 2021, athletics director Kyle Moats told the Springfield News-Leader, “We’re happy in the Valley” but wanted to have everything in place so that “if we ever got the offer, we’d be ready to go.” Ten years ago, you would have only gotten the first part of that quote.

A move to FBS is no pipe dream. Since 2000, 33 FCS schools have advanced: Massachusetts, Old Dominion, Appalachian State, Georgia Southern, and more. Before that were the likes of Boise State, UConn, Boston, and Marshall. Geographically, Missouri State is well-positioned to join the Sun Belt Conference, Conference USA, or the American Athletic Conference (the Mid-American Conference is also a possibility; Bears swimming and diving is already a member). While a Power 5 conference like the Big 12 or SEC won’t happen, at least for another century or two, MSU has good opportunities for advancement now.

But the university and its supporters must take crucial steps to encourage the necessary invite. We need, as Moats pointed out, upgrades to Plaster Stadium. We need to keep improving the fan experience. Supporters must keep donating through the Missouri State Foundation site and MSU’s GiveCampus page. We need to attend games of all sports, no matter how the season is going. The NCAA has attendance requirements for FBS schools, though enforcement does not appear strict these days. More importantly, studies show higher attendance increases the odds of victory. We need to win to be noticed. And if you can’t make a game, stream it on ESPN+, watch it on TV, etc. Show broadcasters you love the content. Do the little things to help enrollment, too. Buy a car decal, wear MSU gear, post on social media. It’s small acts carried out by tens of thousands of people that change the world.

The arguments against ditching The Valley have never outweighed the potential benefits. Bigger conferences can mean bigger costs, yes. Some wouldn’t want to see MSU fail in a bigger conference, or shift to one unfocused on basketball. This is all short-sighted thinking. The SBC, CUSA, or AAC is a gateway to a more excited fanbase, broader national exposure, a higher profile, increased revenue from enrollment and attendance gains and TV contracts, and so on. We’ll have good years and off years, but we already know we can compete at the highest level of any sport if we have the right pieces in place. University advancement is an upward spiral, but you have to start spinning. When MSU sports regularly play Navy, Rice, SMU, or App State, you’ll be glad you did.

This article originally appeared on Yahoo! and the Springfield News-Leader.


The American Revolution: Birthplace of Feminism?

Historian Mary Beth Norton, in her 1980 text Liberty’s Daughters, argues that the American Revolution changed colonial women’s self-perceptions, activities, and familial relationships.[1] The tumultuous years of 1775 to 1783, and the decade or so that preceded them, reformed the private lives and identities of literate, middle- and upper-class white women in particular, those in the best position to leave behind writings chronicling their thoughts and lives — though Norton stresses that the war touched nearly all women, making it safe to assume its effects reached beyond this group to some degree.[2] Early and mid-eighteenth-century women generally existed within a “small circle of domestic concerns,” believing, alongside men, in strictly defined permissible feminine behavior, proper roles for women, and their own inferiority and limited capabilities.[3] Politics, for instance, was “outside the feminine sphere.”[4] But in the 1760s and early 1770s, Norton posits, the extreme political climate in the colonies — the tensions and clashes with the British government and army — began to shake up the gender order and create new possibilities. Women began writing in their journals of the major events of the day, avidly reading newspapers, debating politics as men did, participating in boycotts and marches, and even seizing merchant goods.[5] They published articles and formed relief efforts and women’s organizations.[6] The Revolution, in other words, was women’s entry into public life and activism, with no more apologies or timidity when pushing into the male sphere of policy, law, and action.[7]

The war also changed women’s labor. Some worked with the colonial army as cooks, nurses, and laundresses, often because they needed stable income with husbands away.[8] More still took over the domestic leadership and roles of their absent husbands, managing farms and finances alike, and would later no longer be told they had not the sense or skills for it.[9] Political debate, revolutionary action, and household leadership with business acumen profoundly shifted women’s views of themselves. “Feminine weakness, delicacy, and incapacity” were questioned.[10] Equal female intelligence was affirmed.[11] Some women even applied the language of liberty, representation, and equality to critiques of women’s subservience.[12] While still constrained in countless ways, by the end of the century, these new ways of thinking had opened even more opportunities for women. More independent, they insisted they would choose their own husbands, delay marriage, or not marry at all; more confident in their abilities, they pushed for girls’ education and broke into the male field of teaching; and so on.[13]

Norton’s engaging text is organized thematically, with a tinge of the chronological. It charts the “constant patterns of women’s lives” in the first half, what stayed the same for American women from before the Revolution to after, and the “changing patterns” in the second, how their lives differed.[14] Norton describes this as “rather complex,” stemming from various modes of thought on many issues changing or remaining static at different times — they did not “fit a neat chronological framework.”[15] The result for the reader is mixed. On the one hand, the layout does allow Norton to demonstrate how women viewed themselves and society before the war, then chart ideological growth and offer causal explanations. This is helpful to the thesis. On the other hand, the first half contains a wealth of historical information that is, essentially, only tangential to the thesis. For what did not change, as interesting and valuable as it is, has little to do with the argument that the American Revolution altered women’s lives. For example, Norton explores views on infants, nursing, and weaning in the first half of the work.[16] As these were “constant” beliefs in this era, not impacted by dramatic events, they are not much explored in the second half. Thus, the reader may correctly consider much information to be irrelevant to the main argument. Of course, it is clear that Norton did not set out only to correct the historiography that concluded “the Revolution had little effect upon women” or ignored the question entirely; she also saw that a wide range of assumptions about eighteenth-century American women were wrong, which to correct would take her far beyond the scope of Revolution-wrought effects.[17] Inclusion of this secondary argument and its extra details makes Liberty’s Daughters a richer and even more significant historical work, but gears it toward history undergraduates, graduates, and professionals.
A general audience text might have been slimmer with a fully chronological structure, focusing on select beliefs in the first half (pre-Revolution) that change in the second (political upheaval and war).

Norton uses letters, journal entries, newspaper articles, and other papers — primarily the writings of women — from hundreds of colonial families to build her case.[18] She presents documents from before, during, and after the war, allowing fascinating comparisons concerning women’s ways of thinking, activities, and demands from society. A potential weakness of the historical evidence — one of few — mirrors a point the historian makes in her first few pages. Literacy and its relation to class and race have already been mentioned, constituting a “serious drawback”: the sample is not “representative.”[19] Similarly, are there enough suggestions of new ways of thinking in these hundreds of documents to confidently make assertions of broad ideological change? In some cases yes, in others perhaps not. For example, Norton cites women’s views on their “natural” traits. Before the Revolution, “when women remarked upon the characteristics they presumably shared with others of their sex, they did so apologetically.”[20] One trait was curiosity. Norton provides just a single example of a woman, Abigail Adams, who felt compelled to “‘excuse’ the ‘curiosity…natural to me…’”[21] The question of curiosity then returns in the second half of the text, after the war has changed self-perceptions. Norton finds that women had abandoned the apologies and begun pushing back against male criticism of their nature by pointing out that men had such a nature as well, or by noting the benefits of derided traits.[22] Here the author offers two examples. “The sons of Adam,” Debby Norris wrote in 1778, “have full as much curiosity in their composition…”[23] Judith Sargent Murray, in 1794, declared that curiosity was the cure for ignorance, worthy of praise not scorn.[24] Clearly, one “before” and two “after” citations are not an adequate sample size and cannot be said to be representative of women’s views of curiosity.
It is often only when one looks beyond the specific to the general that Norton’s evidence becomes satisfactory. Curiosity is considered alongside delicacy, vanity, helplessness, stupidity, and much else, and the mass accumulation of evidence of beliefs across topics and time convincingly suggests women’s views of their traits, abilities, and deserved treatment were changing.[25] One might say with more caution that connotations concerning curiosity shifted, but with greater confidence that women’s perceptions of their nature transformed to some degree.

Overall, Norton’s work is an important contribution to the field of American women’s history, correcting erroneous assumptions about women of the later eighteenth century, showing the war’s effects upon them, and offering sources some historians thought did not exist.[26] While one must be cautious of representation and sample size, in more than one sense, and while the thesis could have been strengthened with data tabulation (x number of letters in early decades mentioned politics, y number in later decades, z percentage contained apologies for entering the male sphere of concern, etc.), Norton provides a thorough examination and convincing argument based on a sufficient body of evidence. Few students will forget the new language found in primary documents after the outbreak of war, a metamorphosis from the commitment “not to approach the verge of any thing so far beyond the line of my sex [politics]” to “We are determined to foment a Rebelion, and will not hold ourselves bound by any Laws in which we have no voice, or Representation.”[27]


[1] Mary Beth Norton, Liberty’s Daughters: The Revolutionary Experience of American Women, 1750-1800 (Ithaca, New York: Cornell University Press, 1996), xix, 298.

[2] Ibid., xviii-xx.

[3] Ibid., chapter 1, xviii.

[4] Ibid., 170.

[5] Ibid., 155-157.

[6] Ibid., 178.

[7] Ibid., 156.

[8] Ibid., 212-213.

[9] Ibid., chapter 7, 222-224.

[10] Ibid., 228.

[11] Ibid., chapter 9.

[12] Ibid., 225-227, 242.

[13] Ibid., 295, chapter 8, chapter 9.

[14] Ibid., vii, xx.

[15] Ibid., xx. 

[16] Ibid., 85-92.

[17] Ibid., xviii-xix.

[18] Ibid., xvii.

[19] Ibid., xix.

[20] Ibid., 114.

[21] Ibid.

[22] Ibid., 239.

[23] Ibid.

[24] Ibid.

[25] Ibid., chapters 4 and 8 in comparison, and parts I and II in comparison.

[26] Ibid., xvii-xix.

[27] Ibid., 122, 226.

Protect Your Relationship From Politics at All Costs

There’s a delightful scene in Spider-Man: Far From Home:

“You look really pretty,” Peter Parker tells MJ, his voice nearly shaking. They stand in a theatre as an orchestra warms up.

“And therefore I have value?” MJ replies, peering at her crush from the corner of her eye.

“No,” Peter says quickly. “No, that’s not what I meant at all, I was just–”

“I’m messing with you.” A devilish smile crosses her face. “Thank you. You look pretty, too.”

To me, the moment hints at the need to insulate love from politics. In my own experience and in conversations with others, I’ve come across the perhaps not-uncommon question of how, in an age when politics has ventured into (some would say infected or poisoned) every aspect of life, do partners prevent division and discomfort? There are probably various answers, because there are various combinations of human beings and ideologies, but I’ll focus on what interests me the most and what the above scene most closely speaks to: love on the Left.

For partnerships of Leftists, or liberals, or liberals and Leftists, political disagreements may be rare (perhaps less so for the latter). But arguments and tensions can arise even if you and your partner(s) fall in the same place on the spectrum, because we are all, nevertheless, individuals with unique perspectives who favor different reasoning, tactics, policies, and so on. If this has never happened to you in your current relationship, you’ve either found something splendidly exceptional or simply not given it enough time. I recently spoke to a friend, D, who is engaged to E. They are both liberals, but D is at times spoken to as if this weren’t the case, as if an education is in order, even over things they essentially agree on but approach in slightly different ways. Arguments can ensue. For me personally, there exists plenty of fodder for disagreements with someone likeminded: I’m fiercely against a Democratic expansion of the Supreme Court, and have in other ways critiqued fellow Leftists. This is what nuanced, independent thinkers are supposed to do, but it can create those “Christ, my person isn’t a true believer” moments.

If partners choose to engage in political dialogue (more on that choice in a moment), it’s probably a fine idea for both to make a strong verbal commitment to give the other person the benefit of the doubt. That’s a rule that a scene from a silly superhero movie reminded me of. MJ offered this to Peter, while at the same time making a joke based in feminist criticism. She could have bitten his head off in earnest. Had she been talking to a cat-caller on the street, a toxic stranger on the internet, a twit on Twitter, she probably would have. But this isn’t a nobody, it’s someone she likes. Her potential partner and relationship are thus insulated from politics. She assumes or believes that Peter doesn’t value her just for her looks. He isn’t made to represent the ugliness of men. There’s a grace extended to Peter that others may not get or deserve. Obviously, we tend to do this with people we know, like family and friends. We know they’re probably coming from a good place, they’ve earned that grace, and so on. (There may be a case to extend this mercy to all people, until compelled to retract it, among other solutions, in the interests of cooling the national temperature and keeping us from tearing each other to pieces, but we’ll leave that aside.)

But thinking and talking about all this, which we often fail to do, seems important. How do I protect my relationship from politics? Hey, could we give each other the benefit of the doubt? Arguments between likeminded significant others can be birthed or worsened by not assuming the best right from the start. Each person should suppose, for example, that an education is not in order. I call it seeing scaffolding beneath ideas. If your person posits a belief, whether too radical or reactionary, that shocks your conscience, your first instinct might be to argue, “That’s obviously wrong/terrible, due to Reasons 1, 2, 3, and 4.” You know, to bite your lover’s head off. But this isn’t some faceless idiot on the screen. Instead, assume they know those reasons already — because they probably do — and reached their conclusion anyway. Imagine that Reasons 1-4 are already there, the education is already there, forming the scaffolding to this other idea. Instead of immediately correcting them, ask them how they reached that perspective, given their understanding of Reasons 1-4 (if they’ve never heard of those, then proceed with an education). No progressive partner wants to be misrepresented, to hear that they only think this way because they don’t understand something, are a man and therefore think in dreadful male ways (like Peter and the joke), and so on: you think that because you’re a woman, white or black, straight or gay, poor or wealthy, too far Left or not far enough, not a true believer. Someone’s knowledge, beliefs, or identity-based perspective can be flawed, yes — suppose it’s not until proven otherwise. These things determine one’s mode of thought; suppose it’s in a positive way first. “Well, well, well, sounds like the straight white man wants to be shielded from critique!” God, yes. With your lover, I think it’s nice to be seen as a human being first. I certainly want to be seen as a human being before being seen as a man, for instance. 
I don’t want to represent or stand in for men in any fashion. A disgusting thought. Some will say that’s an attempt to stand apart from men to pretend my views aren’t impacted in negative ways by my maleness — to avoid the valid criticisms of maleness and thus myself. Perhaps so. But maybe others also wish to be seen as a human being before a woman, a human being before an African American, a human being before a Leftist. Because politics has engulfed everything, there are so few places left where this is possible. It may not be doable or even desirable to look at other people or all people in this way, but having one person to do it with is lovely. Or a few, for the polyamorous. It’s a tempting suggestion, to shield our love from politics, to transcend it in some way (Anne Hathaway, in an Interstellar line that was wildly inappropriate for her scientist character, said that love was the one thing that transcended time and space — ending with “politics” would have made more sense). One way of doing that is to assume the best in your partner, and see before you an individual beyond belief systems, beyond identity, beyond ignorance. Again, until forced to do otherwise. All this can be tough for Leftists and liberals, because we’re so often at each other’s throats, racing to be the most pure or woke, and so on. There exists little humility. We want to lecture, not listen. Debate, not discuss. It’s a habit that can bleed into relationships, but small changes can reduce unwanted tensions and conflict. (If it’s wanted, if it keeps things spicy, I apologize for wasting your time. Enjoy the make-up sex.)

I do not know if rightwing lovers experience comparable fights, but I imagine all this could be helpful to them as well. They have their own independent thinkers and failed true believers.

An even better way to protect your relationship from politics is to simply refuse to speak of such things. Purposefully avoid the disagreements. This may be best for those dating across the ideological divide (though offering the benefit of the doubt would still be best for the Right-Left pairings or groupings that choose to engage in discourse). This may be surprising, but it is generally my preferred method, whether I’m dating someone who thinks as I do or rather differently. (I of course have a proclivity for a partner who shares my values, but I have dated and probably still could date conservatives, if they were of the anti-Trump variety. Some people are too far removed from my beliefs to be of interest, which is natural. This article is not arguing one should stay with a partner who turns out to have terrible views or supports a terrible man. This is also why “respect each other’s views” is a guideline unworthy of mention. Apart from being too obvious, it at some point should not be done.) Perhaps it’s because so much of my work, writing, and reading has to do with politics. I would rather unplug and not discuss such things with a mate, nor with many close friends and family members. Though it happens every now and then. If partners together commit to making this a general policy, it can be quite successful. And why not? While I see the appeal of learning and growing with your person through meaningful discussion of the issues, it risks having something come between you, and having an oasis from the noise and nightmare sounds even better, just as loving your partner for who they are sounds much less stressful than trying to change them.


The “Witches” Killed at Salem Were Women Who Stepped Out of Line

In The Devil in the Shape of a Woman, historian Carol F. Karlsen argues that established social attitudes toward women in seventeenth-century New England, and earlier centuries in Europe, explain why women were the primary victims of witch hunts in places like Salem, Fairfield, and elsewhere.[1] Indeed, she posits, women who willingly or inadvertently stepped out of line, who violated expected gender norms, were disproportionately likely to be accused in Puritan society. After establishing that roughly 80% of accused persons in New England from 1620 to 1725 were women, and that men represented both two-thirds of accusers and all of those in positions to decide the fates of the accused, Karlsen catalogs the deviant behaviors or circumstances that drew Puritan male ire.[2] For instance: “Most witches in New England were middle-aged or old women eligible for inheritances because they had no brothers or sons.”[3] When husbands or fathers had no choice but to leave property to daughters and wives, this violated the favored and common patrilineal succession of the era. Further, women who committed the sins of “discontent, anger, envy, malice, seduction, lying, and pride,” which were strongly associated with their sex, failed to behave as proper Christian women and thus hinted at allegiances to the devil, putting them at risk of accusation.[4] The scholar is careful to note, however, that in the historical record accusers, prosecutors, juries, magistrates, and so on did not explicitly speak of such things as evidence of witchcraft.[5] But the trends suggest that concern over these deviations, whether subliminal or considered, played a role in the trials and executions.

Karlsen’s case is well-crafted. Part of its power is its simplicity: a preexisting ideology about women primed the (male and female) residents of towns like Salem to see witches in female form far more often than male. The fifth chapter could be considered the centerpiece of the work because it most closely examines the question of what a woman was — the view of her nature held by the intensely patriarchal societies of Europe and how this view was adopted and modified, or left intact, by the Puritans. Christian Europe saw women as more evil than men.[6] They were of the same nature as Eve, who sought forbidden knowledge, betrayed God, and tempted man. Believed to be “created intellectually, morally, and physically weaker,” women were thought to have “more uncontrollable appetites” for sins like the seven above.[7] It is Karlsen’s exploration of this background that is foundational to the argument. If Christians had long seen women as more evil, a notion of witches-as-women in New England would have been a natural outgrowth (America’s early female religious dissenters, among other developments, added fuel to the fire).[8] The fact that associations between women and witchcraft existed in the European mind before the Puritans set foot in North America reinforces this.[9] Karlsen quotes fifteenth- and sixteenth-century writers: “More women than men are ministers of the devil,” “All witchcraft comes from carnal lust, which in women is insatiable,” “Because Christ did not permit women to administer his sacraments…they are given more authority than men in the administration of the devil’s execrations,” and so on.[10] Another penned that middle-aged and older women received no sexual attention from men, so they had to seek it from the devil.[11]

Indeed, Karlsen’s use of primary sources is admirable. She extensively cites witchcraft trials in New England and works by ministers such as Cotton Mather, not only as anecdotal evidence but also, alongside public and family records, to tabulate data, primarily to show that women were special targets of the witch hunts and that most had or might receive property.[12] The author leaves little room for disputing that witch hunt victims were not quite model Puritan women, and that the Puritans believed that those who in any way stepped outside their “place in the social order were the very embodiments of evil,” and therefore had to be destroyed.[13] The work is organized along those lines, which is sensible and engaging. But a stumble occurs during a later dalliance with secondary sources.

One piece of the story — appearing on the last couple of pages of the last chapter — stands out as underdeveloped. Karlsen posits that the physical ailments the Puritans blamed on possession, such as convulsions and trances, were psychological breaks, a “physical and emotional response to a set of social conditions,” indeed the social order itself.[14] The gender hierarchy and oppressive religious system were, in other words, too much to bear. Karlsen does cite anthropologists who have studied this phenomenon in other societies, where the minds of oppressed peoples, usually women, split from normalcy and enter states that allow them to disengage from and freely lash out at their oppressors, as, Karlsen argues, possessed New England women did.[15] But the causes of physical manifestations are such a significant part of the story that they deserve far more attention, indeed their own chapter (most of Karlsen’s final chapter explores the questions of who was most likely to be possessed, how they acted, and how the Puritans explained the phenomenon, though it is framed as a culturally-created power struggle early on).[16] This would allow Karlsen room to bring in more sources, better connect the New England story to other anthropological findings, and flesh out the argument. For instance, she writes that convulsions and other altered states would have been “most common in women raised in particularly religious households,” but does not show that this was true for possessed women in New England.[17] How the ten men who were possessed fit into this hypothesis is unclear.[18] Things also grow interpretive, a perhaps necessary but always perilous endeavor: “…in their inability to eat for days on end, [possessed women] spoke to the depths of their emotional hunger and deprivation, perhaps as well to the denial of their sexual appetites.”[19] This is unsupported.
In the dim light of speculation and limited attention, other causes of “possession,” such as historian Linnda Caporael’s ergotism theory (convulsions and hallucinations due to a fungus found in rye), remain enticing.[20] Minds are forced to remain open to causes beyond social pressures, and indeed to multiple, conjoining factors. How physical symptoms arose, of course, does not affect the thesis that prior ideology led to the targeting of women. The concern is whether the anthropological theory fit so well with Karlsen’s thesis — the targeting of women and the physical ailments being the results of a repressive society — that she gravitated toward it without granting it the lengthier study it warranted.

Overall, Karlsen’s work is important. As she noted in her introduction, prior historians had given little focus to the role of gender in American witch hunts.[21] In their accounts, the witch hunts had little to do with suspicions about women’s nature or dismay over women pushing against the gender hierarchy and religious order. Written in the late 1980s, The Devil in the Shape of a Woman represented a breakthrough and a turning point. It is a must-read for anyone interested in the topic.

For more from the author, subscribe and follow or read his books.

[1] Carol F. Karlsen, The Devil in the Shape of a Woman: Witchcraft in Colonial New England (New York: W.W. Norton & Company, 1998), xiii-xiv. See chapter 5, especially pages 153-162, for European origins.

[2] Ibid., 47-48.

[3] Ibid., 117.

[4] Ibid., 119.

[5] Ibid., 153.

[6] Ibid., 155.

[7] Ibid.

[8] Ibid., 127-128 and chapter 6.

[9] Ibid., chapter 5.

[10] Ibid., 155-156.

[11] Ibid., 157.

[12] See for example ibid., 48-49, 102-103.

[13] Ibid., 181.

[14] Ibid., 248-251.

[15] Ibid., 246-247, with anthropologists cited in footnote 69, page 249, and footnote 71, page 251.

[16] Ibid., 231, 246.

[17] Ibid., 250.

[18] Ibid., 224.

[19] Ibid., 250.

[20] Linnda R. Caporael, “Ergotism: The Satan Loosed in Salem?,” Science 192, no. 4234 (1976): 21–26, http://www.jstor.org/stable/1741715.

[21] Karlsen, The Devil, xii-xiii.

Scotty’s Missing Finger

James Doohan’s Montgomery Scott wasn’t often the centerpiece of “Star Trek” storylines, but he could always be counted on to save the day by eking some kind of miracle out of the Enterprise’s transporters or warp engines. Doohan’s performance was lively, and “Scotty” lovable and charismatic, even if the Canadian actor’s for-television accent was once included on the BBC’s list of “Film Crimes Against the Scottish Accent.” According to The Guardian, Doohan based the voice on that of a Scottish soldier he met in World War II.

Indeed, Doohan was a soldier before he had any interest in acting. He joined the Canadian artillery after high school, right as the largest conflict in human history was brewing. He rose to the rank of lieutenant and was sent to Britain to prepare for Operation Overlord, the invasion of Normandy (Valour Canada). Long before Scotty saved Kirk, Spock, and his other comrades from all sorts of alien enemies and celestial phenomena, he led men into the fires of D-Day, June 6, 1944. 

Following a naval and aerial bombardment, Canadian units stormed Juno Beach. James Doohan and his men unknowingly ran across an anti-tank minefield, the men too light to detonate the mines (Snopes). With bullets whizzing all around, they reached cover and advanced inland. Doohan made his first two kills of the war by silencing German snipers in a church tower in Graye-sur-Mer.

After securing their positions, Doohan and his troops rested that evening. But just before midnight, everything went wrong for our future chief engineer. He stepped away from the command post for a smoke; on his way back, he was riddled with at least half a dozen bullets. The middle finger of his right hand was torn off, four bullets hit his knee, and one struck his chest, though it did minimal damage because it happened to hit the silver cigarette case in his breast pocket. But this was no German attack. It was friendly fire.

According to Valour Canada, James Doohan was shot by a Canadian sentry who mistook him in the night for a German soldier. This sentry has been described as “nervous” and “trigger-happy” (Snopes). Doohan later said that his body had so much adrenaline pumping through it after the shooting that he walked to the medical post without even realizing his knee had been hit.
Doohan survived the incident and the war, moved to the United States, and started acting in 1950 (IMDb). Sixteen years later, after small roles in “Gunsmoke,” “The Twilight Zone,” “The Man from U.N.C.L.E.,” and more, he landed the part that would bring him global fame. According to StarTrek.com, Doohan had a hand double to conceal the missing finger while filming close-ups on “Star Trek.” However, it is still obvious in many shots, stills of which fans have collected, for instance on this Stack Exchange.


Will the NFL Convert to Flag Football in the Next Century?

A big part of the fun of American football is players smashing into each other. From the gladiatorial spectacles of Rome to today’s boxing, UFC/MMA, and football, watching contestants exchange blows, draw blood, and even kill one another has proved wildly entertaining. I know I, too, have base instincts that enjoy, or are at least still engrossed by, brutal sport. I write “at least still” because the NFL has become harder to watch knowing the severe brain damage it’s causing.

This prompts some moral musings. The NFL certainly has the moral responsibility to thoroughly inform every player of the risks (and not to bury the scientific findings, as it once did). If all players understand the dangers, there is probably no ethical burden on them — morality is indeed largely about harm to others, but if all volunteer to exchange CTE-producing blows, that’s their choice. Beating up a random person on the street is wrong, but boxing isn’t, because it’s voluntary. A scenario where some football players know the risks but not all is a bit trickier. Is there something wrong about potentially giving brain damage to someone who doesn’t know that’s a possibility, when you do? As for fans, is there a moral burden to only support a league (with purchases, viewership, etc.) that educates all its players on CTE? But say everyone is educated: if the NFL still has a moral duty to make the game safer through better pads and rules to reduce concussions, does it by extension also have the moral duty to end contact and tackles to eliminate concussions? There’s much to think about.

In any case, after head trauma findings could no longer be ignored, the NFL made, and continues to make, rule changes to improve safety (to limited effect thus far). Better helmets, elimination of head-to-head blows, trying to reduce kick returns, banning blindside blocks, and so on. At training camp, players are even wearing helmets over their helmets this year. Though some complain the game is being ruined, and others suggest the NFL is hardly doing enough, all can agree that the trend is toward player safety. Meanwhile, some young NFL players have quit as they’ve come to understand the risks. They don’t want disabilities and early death.

A parallel trend is the promotion of flag football. The NFL understands, Mike Florio notes, that if flag can be popularized all over the world then the NFL itself will become more international and make boatloads more money. It’s not really about safety (except perhaps for children). The organization helped get flag football into the World Games 2022 and promoted the journeys of the U.S. men’s and women’s teams, and is now trying for the 2028 Olympics. NFL teams have youth flag leagues, and Michael Vick, Chad Ochocinco, and Terrell Owens are playing in the NFL-televised American Flag Football League. The Pro Bowl is being replaced with a skills competition and a flag football game.

Troy Vincent, an NFL vice president, said recently, “When we talk about the future of the game of football, it is, no question, flag. When I’ve been asked over the last 24 months, in particular, what does the next 100 years look like when you look at football, not professional football, it’s flag. It’s the inclusion and the true motto of ‘football for all.’ There is a place in flag football for all.” He was careful to exclude the professional game here, focusing on opening the sport to girls, women, and poorer kids in the U.S. and around the world, but one wonders how long that exception will hold. If current trajectories continue, with a growth of flag and a reduction of ferocity in the NFL, one day a tipping point may be reached. It won’t happen easily if the NFL thinks such a change would cut into its profits, but it’s possible. It may not be in 50 years or 100, but perhaps after 200 or 500.

Changes in sports — the rules, the equipment, everything — may be concerning but should never be surprising. Many years ago, football looked rather different, after all. You know, when you couldn’t pass the ball forward, the center used his foot instead of his hands to snap, the point after was actually worth four points, you could catch your own punt and keep the ball, etc. The concussion crisis has of course also spurred calls to take the NFL back to a pre-1940s style of play, getting rid of helmets and other protections to potentially improve safety. There’s evidence players protect their heads and those of others better when they don’t feel armored and invincible. This is another possible future. However, it’s also a fact that early football was much deadlier, and the dozens of boys and men who died each year playing it almost ended the sport in the early 20th century, so we may not want to get rid of too many modern pads and rules if we’re to keep tackle. An apparent contradiction like this means many factors are at play and will have to be carefully parsed out. Perhaps a balance can be found — less armor but not too little — for optimal safety.

Though my organized tackle and flag experiences ended after grade school, with only backyard versions of each popping up here and there later on, I always considered flag just as fun to play. And while I think the flag of the World Games is played on far too narrow a field, and both it and the AFFL need field goals, kicks, light-contact linemen, and running backs (my flag teams had these), they’re both fairly entertaining (watch here and here). One misses the collisions and take-downs, but the throws, nabs, jukes, picks, and dives are all good sport. No, it’s not the same, but the future rarely is.


Why Have There Been Authoritarian Socialist Countries But Not a Democratic Socialist One?

In Christianity and Socialism Both Inspired Murderous Governments and Tyrants. Should We Abandon Both?, we observed the flawed idea that authoritarian socialist nations like the Soviet Union started as democratic socialist societies. Once we recognized that socialism since its inception has existed in different forms advocated by different people (bottom-up, democratic, peaceful vs. top-down, authoritarian, violent), just like Christianity and other religions past and present (peaceful missionary work, coexistence, and church-state separation vs. violent conquest, forced conversion, and authoritarian theocracy), and once we looked at history, the slippery slope argument disintegrated.

The societal changes socialists push for have already been achieved, in ways large and small, without horrors all over the world, from worker cooperatives to systems of direct democracy to universal healthcare and education, public work programs guaranteeing jobs, and Universal Basic Income (see Why America Needs Socialism). These incredible reforms have occurred in democratic, free societies, with no signs of Stalinism on the horizon. The slippery slope fallacy is constantly applied to socialism and basically any progressive policy (remember, racial integration is communism), but it doesn’t have any more merit than when it is applied to Christianity [i.e. peaceful missionary work always leading to theocracy]. Those who insist that leaders and governments set out to implement these types of positive socialistic reforms but then everything slid into dictatorship and awfulness as a result have essentially no understanding of history; they’re completely divorced from historical knowledge. Generally, when you actually study how nations turned communist, you see that a Marxist group, party, or person already deeply authoritarian achieved power and then ruled, expectedly, in an authoritarian manner, implementing policies that sometimes resemble what modern socialists call for but often do not (for example, worker ownership of the workplace is incompatible with government ownership of the workplace; direct democratic decision-making is incompatible with authoritarian control; and so forth). It’s authoritarians who are most likely to use violence in the first place; anti-authoritarians generally try to find peaceful means of creating change, if possible. (Which can take much longer, requiring the built consensus of much of the citizenry. This is one reason authoritarian socialist countries exist but no true democratic socialist society. It’s quicker to just use force. The latter needs more time.)

Note that citations are provided in the original article. Now, all this was worth a bit more commentary. If you can show someone that, despite some socialistic reforms, there hasn’t been a democratic socialist (libertarian socialist, market socialist) nation yet in human history, only authoritarian socialist (communist) ones, and that there was no devolution from one to the other, the next question is: Why? Why has communism existed and succeeded, with State control of all workplaces, the abolition of the market, and totalitarianism, but not democratic socialism, where workers control their workplaces, the government offers universal healthcare, education, and jobs or income, and citizens enjoy participatory democracy?

The answer was touched upon at the end of the quote above. It’s about time and values. All this is a bit like asking why there hasn’t been a Mormon theocracy yet, or a nation with Mormonism as its official religion, or a country with a majority Mormon population (saying Tonga is majority LDS is a bit of a stretch). Mormonism, a sect of Christianity, began in the 1830s, at the same time socialism was born under Robert Owen, Saint-Simon, Fourier, and others (Marx was still a boy). There hasn’t been a nation with a (serious) majority Mormon citizenry because it hasn’t grown popular enough over the past 200 years. There has never been an LDS theocracy or an officially LDS nation because 1) the belief system has yet to become popular enough, and 2) there has been no group that has overthrown existing power structures through violence or been willing to use force and oppression after legitimately ascending to power. The same can be said of democratic socialism — neither condition has occurred as of this moment. In contrast, number 2 was reached by authoritarian socialist leaders and groups, even if number 1 wasn’t beforehand. (Unlike Mormonism, traditional Christianity had both enough time and the right ideologues to achieve both high popularity in some places and to violently crush anyone who stood in its way in others. So did Islam.) This all makes a great deal of sense. As noted, if authoritarians are more likely to use violence, they have a fast-track to power, to the ability to swiftly enact societal transformations. And without the consensus of the population, they may have to rule with an iron fist to get everyone in line.

Radicals who are not authoritarian socialists, and are less likely to use force to get what they want (again, what they want is something rather different), have no such shortcut. The Frenchman Ernest Lesigne wrote in his 1887 poem “The Two Socialisms” that one socialism “has faith in a cataclysm,” whereas the other “knows that social progress will result from the free play of individual efforts.” Most democratic socialists have little interest in cataclysmic violent revolution; at most, only a great nonviolent national strike. Instead, they must educate the populace, change the minds of the majority. They must push for reforms. It takes far longer, but — not that democratic socialists desire this either — you won’t have to rule by terror when it’s all over. A slow, peaceful transition not only requires but wins the consent of the governed. And as mentioned in the beginning of the quote, this metamorphosis is underway. Places like Canada, Japan, New Zealand, and Europe are moving away from free market capitalism and toward social democracy, which is a stepping stone to democratic socialism. America has drifted as well, though not as far. If a couple centuries is not enough, we’ll see where we’re at in 500 years or 1,000. There is no magic number, no predictable date of victory. Just because democratic socialism hasn’t happened yet does not mean it won’t, nor does this fact discredit the idea — Mormonism is not untrue or bad because it is not yet hugely popular, any more than embryonic Christianity was in A.D. 100. Capitalism took a very long time to become the dominant world system, replacing feudalism. The full realization of the next stage may take just as long.


The Massive Gap in Our Understanding of Why KC Doesn’t Control Its Own Police Department

The local press has produced various articles on why Kansas City does not control its own police department, with mixed explanatory success. The fact that the governor of Missouri selects almost the entirety of the KC board of police commissioners, whereas all other cities in the state form their own boards, no doubt inspires many confused and angry internet searches. Local control is step one of police reform here — the KCPD will not reform itself, and the deeply conservative state legislature will be of no assistance; both these bodies are desperate to keep control out of the hands of the relatively progressive city council. Pieces on how this absurd state of affairs came to be offer valuable information, but what is also needed is an article on what we don’t know. This is admittedly risky, as it’s possible someone knows the answers to the nagging questions herein, but if that’s the case then his or her historical research is itself difficult to find, or at least it has been for me.

Some articles, such as the one from FOX 4, only speak of 1930s Kansas City and the need to wrest police control away from mob boss Tom Pendergast. The Beacon focuses solely on this, yet notes without further comment that first Pendergast had to weasel control away from the state. More outlets barely seem to realize that the Pendergast story is less important given that Kansas City had state control before that; The Pitch and KCUR write that state management began in 1874, when the KCPD was first formed, but still focus on 1932-1939, when Tom ran it. The Star does a little better, explaining that during the Civil War, Missouri was one of the slave states that did not join the Confederacy, but its Southern-sympathizing government sought to prevent arms and munitions in St. Louis from being used for Union purposes by seizing control of the St. Louis police department (local control was given back only in 2013). The same set-up — a governor-decided board — was then used for Kansas City in 1874.

Little more is said, though this isn’t fully the fault of the journalists. It could be that no historian, professional or amateur, has researched the circumstances of 1874. Why was the St. Louis model used for Kansas City? Who were the key players? What motivated them to take their positions, whether for or against? And many other questions. Journalists typically have little time to turn around a story; if historians haven’t done the work, which can take weeks, months, or years, the article may not be properly fleshed out. This explains the focus on Pendergast and St. Louis — that’s the information available.

Some may be satisfied with the knowledge that KC’s state of affairs has its roots in St. Louis’. That is all that’s needed, after all, to show a link between American white supremacy (the desire to aid the Confederacy, which sought to preserve slavery, by controlling armories) and our lack of local control. This connection is being used in the crucial legal push to reestablish local power. (I will never forgive FOX 4 in that last link, by the way, for its headline “Woman Sues KC Police Board,” as if Gwen Grant, president of the Urban League, were a complete nobody, like some “Woman Eaten By an Alligator in Florida.” Try “Activist,” “Organizer,” “ULKC President” or something.) That historical link is undeniable, but it bothers the historian, and probably a lot of readers, that no further context is available. We want to know more. Say, for example, that those who pushed for state control of KC forces in the 1870s had their own reasons that related to race. Clearly, the Civil War had been over for a decade, but what if — and this is completely made up — they thought the state would be better than the city at keeping black officers off the force? This would be important to know for its own sake, significantly altering the meaning of state control of our police, but could also serve the campaign to correct the problem. It would make any “rooted in racism” statement even more powerful; it’s a much more direct connection. Alternatively, of course, there could be an entirely different context. What if — and this is again imaginary — the intentional modeling of KC’s board on St. Louis’ was far less nefarious? Perhaps there were good intentions, even if one disagrees with the policy. Getting control of the police away from the mob in the 1930s might be a later example of this. Maybe the 1874 decision was likewise independent of racial questions. Or what if the city council was so racist someone wanted outside control? We can imagine anything we want, because we don’t know.
(Uncovering a more benign motive would certainly be used against the campaign — knowledge, as much as we crave and cherish it, can come with a cost.)

The assumption seems to be that what happened in 1874 was due to mere precedent. In other words, St. Louis had a governor-appointed board of police commissioners, so it was decided KC should be the same without much thought. This is entirely possible, but without further research it could be entirely wrong.

So let’s examine what we do know of the events. We know that in 1874, Representative James McDaniels introduced House Bill 866, entitled “An act creating a board of police commissioners, and authorizing the appointment of a permanent police force for the City of Kansas.” Formation and outside administration came at the same time. This language is identical to that of the act passed for St. Louis in the Civil War era. H.B. 866 (which one can read in its entirety here) passed easily, 92-10. In the Missouri senate, it was then called up by Senator John Wornall, a name Kansas Citians will recognize if they’ve ever driven down a certain road. In that chamber, the vote was 21-0 in favor. This is in contrast to the St. Louis bill, which passed 50-32 and 24-8 after plenty of debate, as The Star documented.

Who was James McDaniels? An 1874 book offering short biographies of the members of the Missouri legislature described him as “about twenty-seven years of age” and a native of Vermont. He was a real estate agent in Kansas City, and one of three representatives from Jackson County. The book also calls him a “progressive Democrat,” which marks him as a reformer. Progressives of the late 19th (and early 20th) century tended to seek government solutions to the problems wrought by industrialization and urbanization, like poverty, political machines (that’s what Pendergast had later), and corporate power. A progressive advocating a police force, state-controlled no less, could only be thought of as odd in the context of today’s sensibilities and meanings. With cities growing rapidly, and slums and crime a problem, a larger, more organized police force would have been seen as a fine way to create a better society, at least by someone like McDaniels. However, without more information, we simply do not know McDaniels’ true motives. Overall, his time in the legislature was brief; he was elected in 1872, served for a couple years, got H.B. 866 passed in his final year, and disappeared.

What of the ten who voted against the bill? There were two representatives from St. Louis, Truman A. Post and Joseph T. Tatum. There was James B. Harper, Radical Republican from Putnam County, who fought for the Union in the Missouri militia. And there was the second representative from Jackson County, Republican Stephen P. Twiss (our third representative did not vote). Twiss grew up poor in Massachusetts but eventually became a lawyer and served in that state’s legislature, according to his (much longer) biography in the 1874 text. It is carefully noted that he “voted for the Hon. Charles Sumner in his second election to the United States Senate” (this was when legislators, not ordinary voters, chose U.S. senators). Sumner was head of the Radical Republicans, the anti-slavery advocates. Twiss moved to Kansas City after the Civil War and, after losing to the Democrat Wornall in a race for the Missouri senate, was elected to the Missouri house. Why was Twiss against the bill (so fiercely that he tried to repeal it in 1875), when McDaniels and Wornall were for it? As before, note that Republicans shooting down a police force and/or state control must not be thought of as strange here — Republicans and Democrats were very different ideologically in past centuries compared to the modern parties. Overall, eight Republicans voted No, alongside two Democrats. Other Republicans joined the mostly Democratic legislature to pass H.B. 866. So maybe that hints at something. Those generally against the bill were Republicans, who were generally against slavery. But whether the bill had any motives connected to post-war racial politics, we do not know.

I had hoped to offer more information than just the key players, but it became clear rather quickly that this would require weeks, months, or years. Perhaps I will circle back to this if I have the time and energy for such a project. So many vital questions linger — we still know next to nothing. Why did McDaniels base his bill on St. Louis’? Was it mere precedent and ease? “That’s how it was done before, and how it passed before, so why not?” Or were there political motives? Did the city council agree with the legislation? Did the more primitive police forces that existed before the formation of the KCPD, such as the sheriff and deputies, agree with it? Why did Twiss vote against it and try to have it repealed? Was he against a police force itself, against state control, or both? Or did he dislike some other aspect of the plan? Why did the third Jackson County rep, James R. Sheley, abstain from voting? Why did two St. Louis legislators vote Nay? Did they sympathetically oppose state power over another police force, frustrated by their own city’s experience, or was there another reason? Why did Republicans oppose the legislation? Does a large majority voting for the bill in both chambers mean there wasn’t much debate on it?

To answer these questions, one must put on the historian’s hat. We’ll have to track down the journals and diaries of all those actors, in historical archives or by finding their descendants. Newspapers from the 1870s will have to be located and studied in the archives for stories of these votes and debates. We’ll need more government records, if they exist. And did any secondary sources, such as books, comment on these things later? These are huge ifs — even if these items were created, they may not have survived nearly 150 years. McDaniels, perhaps the most important person in the story, does not appear to have been a man of prominence, and will likely prove difficult to study. Twiss is a bit easier to find, as he became a judge and ran for KC mayor (and in the 1850s may have helped found the Republican Party). His (theoretical) documents could have been better preserved. Wornall’s, too. Hopefully this writing aids whoever undertakes this endeavor, whether my future self or someone else.


‘Obi-Wan Kenobi’ Is Peak Lazy Writing

The Obi-Wan Kenobi finale is out, and the show can be awarded a 6/10, perhaps a 6.5. This is not a dreadful score, but it isn’t favorable either. I give abysmal films or shows with no redeeming qualities a 1 or 2, though this is extremely rare; bad or mediocre ones earn a 3-5; a 6 is watchable and even enjoyable but not that great, a 7 is a straight-up good production, an 8 is great, and a 9-10 is rare masterpiece or perfection territory. The ranking encompasses everything: was it an interesting, original, sensible story? Do you care about what happens to the characters, whether evil or heroic or neutral? Were the acting, music, pacing, special effects, cinematography, and editing competent? Was the dialogue intelligent or was it painful and cliché? Did they foolishly attempt a CGI human face? And so on.

Understanding anyone’s judgment of a Star Wars film or show requires knowing how it compares to the others, so consider the following rankings, which have changed here and there over the years but typically not by much. I judge A New Hope and The Empire Strikes Back to be 10s. Return of the Jedi earns a 9, primarily for the ridiculous “plan” to save Han Solo from Jabba the Hutt that involves everyone getting captured, and for recycling a destroy-the-Death-Star climax. The Mandalorian (seasons 1-2), The Force Awakens, and Solo hover at about 7 for me. Solo is often unpopular, but I enjoyed its original, small-scale, train-robbery Western kind of story, which preceded The Mandalorian. The Force Awakens created highly lovable characters, but lost most of its points for simply remaking A New Hope. Rogue One is a 6 (bland characters, save one droid), The Last Jedi (review here) is a 5, Revenge of the Sith a 4.5, and The Phantom Menace, Attack of the Clones, and The Rise of Skywalker earn 4s if I’m in a pleasant mood, usually 3.5s. It’s an odd feeling, giving roughly the same rank to the prequels and sequels. They’re both bad for such different reasons. The former had creative, new stories, and there’s a certain innocence about them — but mostly dismal dialogue, acting, and characters (Obi-Wan Kenobi was, in Episodes II and III, a welcome exception). The sequels, at least in the beginning, had highly likable characters, good lines, and solid acting, but were largely dull copy-pastes of the original films. One trilogy had good ideas and bad execution, the other bad ideas and competent execution. One can consult Red Letter Media’s hilarious Mr. Plinkett reviews of the prequels and sequels to fully understand why I find them so awful.

Kenobi was actually hovering at nearly a 7 for me until the end of episode three. Ewan McGregor, as always, is wonderful, little Leia is cute enough, Vader is hell-bent on revenge — here are characters we can care about. The pace was slow and thoughtful, a small-scale kidnapping/rescue story. If you could ignore the fact that Leia doesn’t seem to know Kenobi personally in A New Hope, and that a Vader-Kenobi showdown now somewhat undermines the importance of their fight in that film, things were as watchable and worthwhile as a Mandalorian episode. Some lines and acting weren’t perfect, but a plot was forming nicely. I have become increasingly burnt out on and bored by Star Wars, between the bad productions and the franchise just having nothing new to say (rebels v. empire, Sith v. Jedi, blasters and lightsabers, over and over and over again), but maybe we’d have a 7 on our hands by the end.

Then the stupid awakened.

At the end of part three, Vader lights a big fire in the desert, and Force-pulls Kenobi through it. He then puts out the fire with the Force for some reason. Soon a woman and a droid rescue Kenobi by shooting into the fuel Vader had used, starting a slightly bigger fire between protagonist and antagonist. Vader is now helpless to stop the slow-moving droid from picking up Kenobi and lumbering away. He doesn’t walk around the fire (this would have taken five seconds, it’s truly not that big). He doesn’t put out the flames as he did before (I guess 30% more fire is just too much for him). He doesn’t Force-pull Kenobi back to him again. He just stares stupidly as the object of all his rage, who he obsessively wants to torture and kill, gets slowly carried off (we don’t actually see the departure, as that would have highlighted the absurdity; the show cuts).

This is astonishingly bad writing. It’s so bad one frantically tries to justify it. Oh, Vader let him escape, all part of the plan. This of course makes no sense (they’ve been looking for Kenobi for ten years, so him evading a second capture is a massive possibility; it’s established that Vader’s goal is to find him and enact revenge, not enjoy the thrill of the hunt; and it’s never hinted at before or confirmed later that this was intentional). The simpler explanation is probably the correct one: it’s just braindead scene construction. Vader and Kenobi have to be separated, after all. Otherwise Kenobi’s history and the show’s over. There are a thousand better ways to rescue Kenobi here, but if you’re an idiot you won’t even think of them — or if you don’t care, and don’t respect the audience, you won’t bother. (It’s very much like in The Force Awakens when Rey and Kylo are dueling and the ground beneath them splits apart, as the planet is crumbling, creating a chasm that can conveniently stop the fight — only it’s a million times worse. Now, compare all this to Luke and Vader needing to be separated in Empire. Rather than being caught or killed, Luke lets go of the tower with the only hand he has left and chooses to fall to his death. That’s a good separation. It’s driven by a character with agency and morals. It’s not a convenient Act of God or a suddenly neutered character, someone who doesn’t do what he just did a minute ago for no reason.)

Bad writing is when characters begin following the script, rather than the story being powered by the motivations of the characters. Had the characters’ wants, needs, decisions, actions, and abilities determined the course of events — like in real life — Vader would have put out the flames a second time, he and his twenty stormtroopers would have easily handled one droid and one human rescuer, and Obi-Wan would have been toast. But I guess Disney gave Vader the script. “Oh, I can’t kill him now, there’s three more episodes of this thing, plus A New Hope.” So he stood there staring through the flames like an imbecile.

Anyone who doubts this was bad writing simply needs to continue watching the show. Because the eighth grader crafting the story continues to sacrifice character realism at the altar of the screenplay.

In episode five, Vader uses the Force to stop a transport in mid-air. He slams it on the ground and tears off its doors to get to Kenobi. But surprise, it was a decoy! A second transport right next to this one takes off and blasts away. Vader is dumbfounded. Why does he not use the Force to stop this one? “Well, it was like 40 meters farther away.” “Well, he was surprised, see. And they got out of there quick.” OK, I guess. All this time I thought Vader was supposed to be powerful. It’s crucial to have limits to Force powers, and all abilities, but this is a pretty fine line between doable and impossible. “I can run a mile, but 1.1 will fucking kill me.” It’s strange that fanboys would wildly orgasm over Vader’s awesome power to wrench a ship from the air and then excuse his impotence. Either we’re seeing real fire-size and ship-distance challenges Vader can’t meet, or the writing here is just sub-par. There are other, more realistic ways to get out of this jam. At least when Kenobi and Leia had to escape the bad guys in the prior episode, snow speeders came along and shot at the baddies (though don’t get me started on how three people fit into a snow speeder cockpit designed for one).

But that’s not even the worst of it. Minutes later two characters violate their motivations. In this episode, it is revealed Third Sister Reva is out to kill Vader, a smart twist and good character development. She attempts to assassinate him, but he runs her through with a lightsaber. Then the Grand Inquisitor, who Reva had run through in an earlier episode, appears. (How did he survive this? You think the show is going to bother to say? Of course it doesn’t. The writers don’t care. Alas, lightsabers suddenly seem far less intimidating.) Vader and the Grand Inquisitor decide to leave her “in the gutter.” They do not finish the kill, they simply walk away. Darth Vader, who snaps necks when you lose ships on radar or accidentally alert the enemy to your presence, doesn’t kill someone who tried to assassinate him! The Grand Inquisitor was essentially assassinated by Reva — wouldn’t he want some revenge for being stabbed through the stomach and out the spine with a lightsaber? “Oh, they’re just leaving her to die” — no. The Grand Inquisitor didn’t die, remember? He and Vader certainly do; it just happened. To be kabobbed in this universe isn’t necessarily fatal (naturally, Reva survives, again without explanation). Is it all just a master plan to inspire Reva to go do or be something? Or is it bad writing, with Reva needing to be shown mercy by Sith types because she’s still in the show?

Happily, the Kenobi finale was strong. It was emotional and sweet, and earns a ranking similar to the first couple episodes. Consternation arises, of course, when Vader buries Kenobi under a mountain of rocks and then walks away! Wouldn’t you want to make sure he’s dead? Can’t you feel his presence when he’s close by and alive? Fortunately, this was not the end of their battle. Kenobi breaks out and attacks Vader. This time their separation makes sense given character traits — Kenobi wounds Vader and, being a good person who never wanted to kill his old apprentice, walks away. Similarly, Reva over on Tatooine tries to kill Luke (though it’s not fully clear why — she’s been left for dead by Vader, then finds out Luke and Obi-Wan have some sort of relationship, so she decides to kill the boy to…hurt Obi-Wan? Please Vader because she hurt Obi-Wan or killed a Force-sensitive child?). Luke escapes death not from some stupid deus ex machina or Reva acting insane. Though Reva appears to be untroubled by torturing Leia earlier on, a real missed opportunity by the filmmakers, we at least understand that, as a youngling who was almost slaughtered by a Sith, she might hesitate to do the same to Luke.

In conclusion, series that blast the story in a direction that requires characters, in out-of-character ways, to go along with it will always suffer. As another example, The Walking Dead, in addition to forgetting to have a main character after a while and in general overstaying its welcome, was eventually infected with this. (There’s no real reason for all the main characters to cram into an RV to get Maggie to medical care in season 6, leaving their town defenseless; but the writers wanted them all captured by Negan for an exciting who-did-he-kill cliffhanger. There’s no reason Carl doesn’t gun Negan down when he has the chance in season 7, as he planned to do, right after proving his grit by massacring Negan’s guards; but Negan is supposed to be in future episodes.) Obviously, other Star Wars outings have terrible writing (and are worse overall productions), from Anakin and Padmé’s love confession dialogue or sand analysis in Attack of the Clones…to The Rise of Skywalker’s convenient finding of MacGuffins that reveal crucial information…to the creatively bankrupt plagiarism of the sequels. But I do not believe I have ever seen a show like Kenobi, one that puts heroes in a jam — a dramatic height, a climax — and so lazily and carelessly gets them out of it.

For more from the author, subscribe and follow or read his books.

Did U.S. Policing Evolve from Slave Patrols? Well…Sort Of

“How American Policing Started with Carolina Slave Catchers” and similar headlines need asterisks. There are big elements of truth in them, but also a betrayal of the nuance found in the historical scholarship on which they are based. There is also the problem of missing context, which perhaps inappropriately electrifies meaning. American policing starting with slave patrols is a powerful idea, but does it become less so when, for example, we study what policing looked like around the globe — and in the American colonies — before slave patrols were first formed in the early 18th century?

Obviously, permanent city forces tasked with enforcing laws and maintaining order have existed around the world since ancient times. There was a police unit in Rome established by the first emperor, China had its own forms of policing long before Western influence, and so on. As human communities grew larger, more complex systems (more personnel, permanent bodies, compensation, training, weaponry) were deemed necessary to prevent crime and capture criminals.

Small bands and villages could use simpler means to address wrongdoing. In traditional societies, which were kin-based, chiefs, councils, or the entire community ran the show, one of unwritten laws and intimate mediation or justice procedures. Larger villages and towns where non-kin lived and worked together typically established groups of men to keep order; for example, “among the first public police forces established in colonial North America were the watchmen organized in Boston in 1631 and in New Amsterdam (later New York City) in 1647. Although watchmen were paid a fee in both Boston and New York, most officers in colonial America did not receive a salary but were paid by private citizens, as were their English counterparts.” There were also constables and sheriffs in the 1630s. True, American society has virtually always been a slave society, but similar groups were formed elsewhere before the African slave trade began under the Portuguese in the 16th century. There were “patrolmen, sergeants and constables” on six-month contracts in Italy in the 14th and 15th centuries. There were sheriffs, constables, and coroners (who investigated deaths) in England in medieval times. Before the 1500s, armed men paid (whether by individuals or government) to prevent and respond to trouble in cities had been around in the West for about 4,500 years — as well as in China, African states, and elsewhere (India, Japan, Palestine, Persia, Egypt, the Islamic caliphates, and so on).

This is not to build a straw man. One might retort: “The argument is that modern policing has its roots in slave patrols.” Or “…modern, American policing…” Indeed, that is often the way it is framed, with the “modern” institution having its “origins” in the patrolling groups that began in the first decade of the 1700s.

But the historians cited to support this argument are actually more interested in showing how slave patrols were one (historically overlooked) influence among many influences on the formation of American police departments — and had the greatest impact on those in the South. A more accurate claim would be that “modern Southern police departments have roots in slave patrols.” This can be made more accurate still, but we will return to that shortly.

Crime historian Gary Potter of Eastern Kentucky University wrote a popular 2013 piece that contains a paragraph on this topic, a good place to kick things off:

In the Southern states the development of American policing followed a different path. The genesis of the modern police organization in the South is the “Slave Patrol” (Platt 1982). The first formal slave patrol was created in the Carolina colonies in 1704 (Reichel 1992). Slave patrols had three primary functions: (1) to chase down, apprehend, and return to their owners, runaway slaves; (2) to provide a form of organized terror to deter slave revolts; and, (3) to maintain a form of discipline for slave-workers who were subject to summary justice, outside of the law, if they violated any plantation rules. Following the Civil War, these vigilante-style organizations evolved in[to] modern Southern police departments primarily as a means of controlling freed slaves who were now laborers working in an agricultural caste system, and enforcing “Jim Crow” segregation laws, designed to deny freed slaves equal rights and access to the political system.

Here the South is differentiated from the rest of the nation — it “followed a different path.” This echoes others, such as the oft-cited Phillip Reichel, criminologist from the University of Northern Colorado. His important 1988 work argued slave patrols were a “transitional,” evolutionary step toward modern policing. For example, “Unlike the watches, constables, and sheriffs who had some nonpolicing duties, the slave patrols operated solely for the enforcement of colonial and State laws.” But that is not to deny that other factors, beyond the South and beyond patrols, also molded the modern institution. It’s simply that “the existence of these patrols shows that important events occurred in the rural South before and concurrently with events in the urban North that are more typically cited in examples of the evolution of policing in the United States.” In his 1992 paper, “The Misplaced Emphasis on Urbanization and Police Development,” Reichel again seeks to show not that slave patrols were the sole root of U.S. policing, but that they need to be included in the discussion:

Histories of the development of American law enforcement have traditionally shown an urban‐North bias. Typically ignored are events in the colonial and ante‐bellum South where law enforcement structures developed prior to and concurrently with those in the North. The presence of rural Southern precursors to formal police organizations suggests urbanization is not a sufficient explanation for why modern police developed. The argument presented here is that police structures developed out of a desire by citizens to protect themselves and their property. Viewing the development of police in this manner avoids reference to a specific variable (e.g., urbanization) which cannot explain developments in all locations. In some places the perceived need to protect persons and property may have arisen as an aspect of urbanization, but in others that same need was in response to conditions not at all related to urbanization. 

In other words, different areas of the nation had different conditions that drove the development of an increasingly complex law enforcement system. A common denominator beyond the obvious protection of the person, Reichel argues, was protection of property, whether slaves in the South or mercantile/industrial interests in the North, unique needs Potter explores as well.

Historian Sally Hadden of Western Michigan University, cited frequently in articles as well, is likewise measured. Her seminal Slave Patrols: Law and Violence in Virginia and the Carolinas makes clear that Southern police continued the tactics of expired slave patrols (such as “the beat,” a patrol area) and their purpose, the control of black bodies. But, given that Hadden is a serious historian and that her work focuses on a few Southern states, one would be hard-pressed to find a statement that positions patrols as the progenitor of contemporary policing in the U.S. (In addition, the Klan receives as much attention as a descendant of patrols, if not more.) Writing in 2001, she complains, like other scholars, that “most works in the history of crime have focused their attention on New England, and left the American south virtually untouched.” She even somewhat cautions against the connections many articles make today between patrol violence and 21st century police violence (how one might affect the other, rather than both simply being effects of racism, is for an article of its own):

Many people I have talked with have jumped to the conclusion that patrolling violence of an earlier century explains why some modern-day policemen, today, have violent confrontations with African Americans. But while a legacy of hate-filled relations has made it difficult for many African Americans to trust the police, their maltreatment in the seventeenth, eighteenth, or nineteenth centuries should not carry all the blame. We may seek the roots of racial fears in an earlier period, but that history does not displace our responsibility to change and improve the era in which we live. After all, the complex police and racial problems that our country continues to experience in the present day are, in many cases, the results of failings and misunderstandings in our own time. To blame the 1991 beating of Rodney King by police in Los Angeles on slave patrollers dead nearly two hundred years is to miss the point. My purpose in writing this text is a historical one, an inquiry into the earliest period of both Southern law enforcement and Southern race-based violence. Although the conclusions below may provide insight into the historical reasons for the pattern of racially targeted law enforcement that persists to the current day, it remains for us to cope with our inheritance from this earlier world without overlooking our present-day obligation to create a less fearful future.

It may be worthwhile now to nail down exactly what modern policing having roots in slave patrols means. First, when the patrols ended after the Confederate defeat, other policing entities took up or continued the work of white supremacist oppression. Alongside the Ku Klux Klan, law enforcement would conduct the terrors. As a writer for TIME put it, after the Civil War “many local sheriffs functioned in a way analogous to the earlier slave patrols, enforcing segregation and the disenfranchisement of freed slaves.” An article on the National Law Enforcement Officers Memorial Fund (!) website phrased it: “After the Civil War, Southern police departments often carried over aspects of the patrols. These included systematic surveillance, the enforcement of curfews…” Second, individuals involved in slave patrols were also involved in the other forms of policing: “In the South, the former slave patrols became the core of the new police departments.” Patrollers became policemen, as Hadden shows. Before this, there is no doubt there was crossover between slave patrol membership and the three other forms of policing in colonial America: sheriffs, constables, and watchmen. Third, patrols, as Reichel noted, had no non-policing duties, and other features, like beats, marked steps toward contemporary police departments (though patrols weren’t always bigger; they had three to six men, like Boston’s early night watch). Clearly, slave patrols had a huge influence on the modern city police forces of the South that formed in the 1850s, 1860s, and later. (Before this, even the term “police” appears to have been applied to all four types of law enforcement, including patrols, though not universally — in the words of “a former slave: the police ‘were for white folks. Patteroles were for niggers.'” But after the war, Hadden writes in the final paragraph of her book, many blacks saw little difference “between the brutality of slave patrols, white Southern policemen, or the Klan.”)

Notice that the above are largely framed as post-war developments. Before the war, patrols, sheriffs, constables, and watchmen worked together, with plenty of personnel crossover, to mercilessly crush slaves. But it was mostly after the war that the “modern” police departments appeared in the South, with patrols as foundations. Here comes a potential complication. The free North was the first to form modern departments, and did so before the war: “It was not until the 1830s that the idea of a centralized municipal police department first emerged in the United States. In 1838, the city of Boston established the first American police force, followed by New York City in 1845, Albany, NY and Chicago in 1851, New Orleans and Cincinnati in 1853, Philadelphia in 1855, and Newark, NJ and Baltimore in 1857” (New Orleans and Baltimore were in slave states, Newark in a semi-slave state). This development was due to growth (these were among the largest U.S. cities), disorder and riots, industrialization and business interests and labor conflict, and indeed “troublesome” immigrants and minorities, among other factors.

Conservatives raise that point to ask: if Northern cities first established the police departments we know today, how can one say slave patrols had an influence? A tempting counter might be: these states hadn’t been free for long. Slavery in New York didn’t end until 1827. While that is true, the North did not have patrols. “None of the sources I used indicated that Northern states used slave patrols,” Reichel told me in an email, after I searched in vain for evidence they did. Northern sheriffs, constables, and watchmen enforced the racial hierarchy, of course, but slave patrols were a Southern phenomenon. One can rightly argue that patrol practices in the South influenced police forces in the North, but that’s not quite the strong “root” we see when studying Southern developments.

This is why boldly emphasizing that modern departments in Southern states originated with patrols is somewhat tricky. It’s true enough. But who would doubt that Southern cities would have had police departments anyway? This goes back to where we began: policing is thousands of years old, and as cities grow and technology and societies change, more sophisticated policing systems arise. The North developed them here first, without slave patrols as foundations. Even if the slave South had never birthed patrols, its system of sheriffs, constables, and watchmen would surely not have lasted forever — eventually larger police forces would have appeared as they did in the North, as they did in Rome, as they did wherever communities exploded around the globe throughout human history. New Orleans went from 27,000 residents in 1820 to 116,000 in 1850! Then 216,000 by 1880. System changes were inevitable.

Consider that during the 18th and early 19th centuries, more focused, larger, tax-funded policing was developing outside the United States, in nations without slave patrols, nations both among and outside the Euro-American slave societies. In 1666, France began building the first modern Western police institution, with a Lieutenant General of Police paid from the treasury and overseeing 20 districts in Paris — by “1788 Paris had one police officer for every 193 inhabitants.” The French system inspired Prussia (Germany) and other governments. There was Australia (1790), Scotland (1800), Portuguese Brazil (1809), Ireland (1822), and especially England (1829), whose London Metropolitan Police Department was the major model for the United States (as well as Canada’s 1834 squad in Toronto). Outside the West, there were (and always had been, as we saw) evolving police forces: “By the eighteenth century both Imperial China and Mughal India, for example, had developed policing structures and systems that were in many ways similar to those in Europe,” before European armies smothered most of the globe. Seventeenth, eighteenth, and nineteenth century Japan, one of the few nations to stave off European imperialism and involuntary influence, was essentially a police state. A similar escapee was Korea, with its podocheong force beginning in the 15th century. As much as some fellow radicals would like the West to take full credit for the police, this ignores the historical contributions (or, if one despises that phrasing, developments) of Eastern civilizations and others elsewhere. Like the North, the South was bound to follow the rest of the world.

It also feels like phrasing that credits patrols as the origin of Southern departments ignores the other three policing types that existed concurrently (and in the North were enough to form a foundation for the first modern institutions, later copied in the South). Sheriffs, constables, and watchmen were roots as well, even if one sees patrols as the dominant one. (Wondering if the latter had replaced the three former, which would have strengthened the case of the patrols as the singular foundation of Southern law enforcement, I asked Sally Hadden. She cautioned against any “sweeping statement.” She continued: “There were sheriffs, definitely, in every [Southern] county. In cities, there were sometimes constables and watchmen, but watchmen were usually replaced by patrols — but not always.”) Though all were instruments of white supremacy, they were not all the same, and only one is now in the headlines. In their existence and distinctiveness, they all must receive at least some credit as the roots of Southern institutions — as our historians know, most happenings have many causes, not one.

“Many modern Southern police departments largely have roots in slave patrols but would have arisen regardless” is probably the most accurate conclusion. Harder to fit in a headline or on a protest sign, but the nuanced truth often is.


The Nativity Stories in Luke and Matthew Aren’t Contradictory — But the Differences Are Bizarre

In The Bible is Rife with Contradictions and Changes, we saw myriad examples of different biblical accounts of the same event that cannot all be true — they contradict each other. But we also saw how other discrepancies aren’t contradictions if you use your imagination. The following example was too long to examine in that already-massive writing, so we will do so now.

It’s interesting that while the authors of both Matthew and Luke have Jesus born in Bethlehem and the family then settling down in Nazareth, the two stories are dramatically different, in that neither mentions the major events of the other. For example, the gift-bearing Magi arrive, King Herod kills children, and Jesus’ family flees to Egypt in Matthew, but Luke doesn’t bother mentioning any of it. Luke has the ludicrous census (everyone in the Roman Empire returning to the city of their ancestors, creating mass chaos, when the point of a census is to see where people live currently), the full inn, the shepherds, and the manger, but Matthew doesn’t.

These stories can be successfully jammed together. But it takes work. In Matthew 2:8-15, Joseph, Mary, and Jesus are in Bethlehem but escape to Egypt to avoid Herod’s slaughter. Before fleeing, the family seems settled in the town: they are in a “house” (2:11) beneath the fabled star, and Herod “gave orders to kill all the boys in Bethlehem and its vicinity who were two years old and under, in accordance with the time he had learned from the Magi” visitors concerning when the star appeared (2:16, 2:7). This is a bit confusing, as all boys from born-today to nearly three years old is a big range for someone who knows an “exact time” (2:7). But it suggests that Jesus may have been born a year or two earlier, the star had been over his home since his birth, and the Magi had a long journey to find him. Many Christian sites will tell you Jesus was about two when the wise men arrived. In any event, when Herod gives this order, the family travels to Egypt and remains there until he dies, then they go to Nazareth (2:23).

In Luke 2:16-39, after Jesus is born in Bethlehem the family goes to Jerusalem “when the time came for the purification rites required by the Law of Moses” (2:22). This references the rites outlined in Leviticus 12 (before going to Jerusalem, Jesus is circumcised after eight days in Luke 2:21, in accordance with Leviticus 12:3). At the temple they sacrifice two birds (Luke 2:24), following Leviticus 12:1-8 — when a woman has a son she does this after thirty-three days to be made “clean.” Then, “When Joseph and Mary had done everything required by the Law of the Lord, they returned to Galilee to their own town of Nazareth” (Luke 2:39). Here they simply go to Nazareth when Jesus is about a month old. No mention of a flight to Egypt, no fear for their lives — everything seems rather normal. “When the time came for the purification rites” certainly suggests they did not somehow occur early or late.

So the mystery is: when did the family move to Nazareth?

Both stories get the family to the town, which they must do because while a prophecy said the messiah would be born in Bethlehem, Jesus was a Nazarene. But the paths there are unique, and you have to either build a mega-narrative to make it work — a larger story that is not in the bible, one you must invent to make divergent stories fit together — or reinterpret the bible in a way different from the aforementioned sites.

In this case, Option 1 is to say that when Luke 2:39 says they headed for Nazareth, this is where the entire story in Matthew is left out. They actually go back to Bethlehem, have the grand adventure to Egypt, and then go to Nazareth much later. This is a serious twist of the author’s writing; you have to declare the gospel doesn’t mean what it says, that narrative time words like “when” are meaningless (in the aforementioned article I wrote of us having to imagine “the bible breaks out of chronological patterns at our convenience”).

Option 2 is that they go to Nazareth after the rites as stated. Then at some point they go back to Bethlehem, have the Matthew adventure, and end up back in Nazareth. Maybe they were visiting relatives. Maybe they moved back to Bethlehem — after Herod dies it seems as if the family’s first thought is to go back there. Matthew 2:22-23: “But when [Joseph] heard that Archelaus was reigning in Judea in place of his father Herod, he was afraid to go there. Having been warned in a dream, he withdrew to the district of Galilee, and he went and lived in a town called Nazareth.” So perhaps it’s best to suppose they went to Nazareth after the temple, moved back to Bethlehem, hid in Egypt, and went again to Nazareth. Luke of course doesn’t mention any of this either; the family heads to Nazareth after the temple rites and the narrative jumps to when Jesus is twelve (2:39-42).

Option 3 is that Jesus’ birth, the Magi visit, Herod’s killing spree, the family’s flight, Herod’s death, and the family’s return all occur in the space of a month. This of course disregards and reinterprets any hints that Jesus was about two years old. But it allows the family to have Matthew’s adventure and make it back to Jerusalem for the scheduled rites (which Matthew doesn’t mention), then go to Nazareth. One must also conclude either 1) that the Magi didn’t have to travel very far, if the star appeared when Jesus was born, or 2) that the star appeared to guide them long before Jesus was born (interpret Matthew 2:1-2 how you will). It’s still odd that the only thing Luke records between birth and the temple is a circumcision, but Option 3, as rushed as it is, may be the best bet. That’s up to each reader to decide, for it’s all a matter of imagination.

Luke’s silence is worth pausing to consider. The Bible is Rife with Contradictions and Changes outlined the ramifications of one gospel not including a major event of another:

Believers typically insist that when a gospel doesn’t mention a miracle, speech, or story it’s because it’s covered in another. (When the gospels tell the same stories it’s “evidence” of validity, when they don’t it’s no big deal.) This line only works from the perspective of a later gospel: Luke was written after Matthew, so it’s fine if Luke doesn’t mention the flight to Egypt to save baby Jesus from Herod. Matthew already covered that. But from the viewpoint of an earlier text this begins to break down. It becomes: “No need to mention this miracle, someone else will do that eventually.” So whoever wrote Mark [the first gospel] ignored one of the biggest miracles in the life of Jesus, proof of his divine origins [the virgin birth story]? Or did the author, supposedly a disciple, not know about it? Or did gospel writers conspire and coordinate: “You cover this, I’ll cover that later.” Is it just one big miracle, with God ensuring that what was unknown or ignored (for whatever reason, maybe the questionable “writing to different audiences” theory) by one author would eventually make it into a gospel? That will satisfy most believers, but an enormous possibility hasn’t been mentioned. Perhaps the story of Jesus was simply being embellished — expanding over time, like so many other tales and legends (see Why God Almost Certainly Does Not Exist).

In truth, it is debatable whether Matthew came before Luke. Both were written around AD 80-90, so scholars disagree over which came first. If Matthew came first, Luke could perhaps be excused for leaving out the hunt for Jesus and the journey to Egypt, as surprising as that might be. If Luke came first, it’s likely the author of Matthew concocted a new tale, making Jesus’ birth story far more dramatic and, happily, fulfilling a prophecy (Matthew 2:15: “And so was fulfilled what the Lord had said through the prophet: ‘Out of Egypt I called my son'”). If they were written about the same time and independently, with neither creator having read the other’s work, the result was likewise two very different stories.

Regardless of order and why the versions are different, one must decide how best to make the two tales fit — writers not meaning what they write, the holy family moving back and forth a bunch, or Jesus not being two when the Magi arrived with gold, frankincense, and myrrh.

For more from the author, subscribe and follow or read his books.

Like a Square Circle, Is God-Given Inherent Value a Contradiction?

Can human beings have inherent value without the existence of God? The religious often say no. God, in creating you, gives you value. Without him, you have no intrinsic worth. (Despite some inevitable objectors, this writing will use “inherent” and “intrinsic” value interchangeably, as that is fairly common with this topic. Both suggest some kind of immutable importance of a thing “in its own right,” “for its own sake,” “in and of itself,” completely independent of a valuer.) Without a creator, all that’s left is you assigning worth to yourself or others doing so; these sentiments are conditional: they can be revoked (you may commit suicide, seeing yourself of no further worth, for example); they may be instrumental: there may be some use for me in assigning you value, such as my own happiness; therefore, such value cannot be intrinsic — it is extrinsic. We only have inherent importance — unchangeable, for its own sake — if lovingly created by God in his own image.

The problem is perhaps already gnawing at your faculties. God giving a person inherent value appears contradictory. While one can argue that an imagined higher power has such divine love for an individual that his or her worth would never be revoked, and that God does not create us for any use for himself (somewhat debatable), the very idea that inherent value can be bestowed by another being doesn’t make sense. Inherent means it’s not bestowed. Worth caused by God is extrinsic by definition. God is a valuer, and intrinsic value must exist independently of valuers.

As a member of the Ethical Society of St. Louis put it:

+All human life has intrinsic value

-So we all [have] value even if God does not exist, right?

+No, God’s Love is what bestows value onto His creations. W/o God, everything is meaningless.

-So human life has *extrinsic* value then, right?

+No. All human life has intrinsic value.

That’s well phrased. If we think about what inherent value means (something worth something in and of itself), to have it humans would need to have it even if they were the only things to ever have existed.

If all this seems outrageous, it may be because God-given value is often thought of differently than self- or human-given value; it is seen as some magical force or aura or entity, the way believers view the soul or consciousness. It’s a feature of the body — if “removed [a person] would cease to be human life,” as a Christian blogger once wrote! When one considers one’s own value or that of a friend, family member, lover, home, money, or parrot, it’s typically not a fantastical property but rather a simple mark of importance, more in line with the actual definition of value. This human being has importance, she’s worth something. Yes, that’s the discussion on value: God giving you importance, others giving you importance, giving yourself importance. It’s not a physical or spiritual characteristic. A prerequisite to meaningful debate is agreeing on what you’re talking about, having some consistency and coherence. There’s no point in arguing “No person can have an inherent mystical trait without God!” That’s as obvious as it is circular, akin to saying you can’t have heaven without God. You’re not saying anything at all. If we instead use “importance,” there’s no circular reasoning and the meaning can simply be applied across the board. “No person can have inherent importance without God” is a statement that can be analyzed by all parties operating with the same language.

No discourse is possible without shared acceptance of meaning. One Christian writer showcased this, remarking:

Philosopher C. I. Lewis defines intrinsic value as “that which is good in itself or good for its own sake.” This category of value certainly elevates the worth of creation beyond its usefulness to humans, but it creates significant problems at the same time.

To have intrinsic value, an object would need to have value if nothing else existed. For example, if a tree has intrinsic value, then it would be valuable if it were floating in space before the creation of the world and—if this were possible—without the presence of God. Lewis, an atheist, argues that nothing has intrinsic value, because there must always be someone to ascribe value to an object. Christians, recognizing the eternal existence of the Triune God in perpetual communion[,] will recognize that God fills the category of intrinsic value quite well.

What happened here is baffling. The excerpt essentially ends with “And that ‘someone’ is God! God can ascribe us value! Intrinsic value does exist!” right after showing an understanding (at least, an understanding of the opposing argument) that for a tree or human being to possess inherent value it must do so if it were the only thing in existence, if neither God nor anything else existed! Intrinsic value, to be real, must exist even if God does not, the atheist posits, holding up a dictionary. “Intrinsic value exists because God does, he imbues it,” the believer says, either ignoring the meaning of intrinsic and the implied contradiction (as William Lane Craig once did), or not noticing or understanding them. Without reaching shared definitions, we just talk past each other.

In this case, it is hard to say whether the problem is lack of understanding or the construction of straw men. This is true on two levels. First, the quote doesn’t actually represent what Lewis wrote in the 1940s. He in fact believed human experiences had intrinsic value and that objects could have inherent value, sought to differentiate and define these terms in unique ways, and wasn’t making an argument about deities. However, in this quote Lewis is made to represent a typical atheist. What we’re seeing is how the believer sees an argument (not Lewis’) coming from the other side. This is helpful enough. Let’s therefore proceed as if the Lewis character (we’ll call him Louis to give more respect to the actual philosopher) is a typical atheist offering a typical atheist argument: nothing has intrinsic value. Now that we are pretending the Christian writer is addressing something someone (Louis) actually posited, probably something the writer has heard atheists say, let’s examine how the atheist position is misunderstood or twisted in the content itself.

The believer sees accurately, in Sentences 1/2, that the atheist thinks intrinsic value, to be true, must be true without the existence of a deity. So far so good. Then in Sentence 3 everything goes completely off the rails. Yes, Louis the Typical Atheist believes intrinsic value is impossible…because by definition it’s an importance that must exist independently of all valuers, including God. God’s exclusion was made clear in Sentences 1/2. It’s as if the Christian writer notices no connection between the ideas in Sentences 1/2 and Sentence 3. The first and second sentences are immediately forgotten, and therefore the atheist position is missed or misconstrued. It falsely becomes an argument that there simply isn’t “someone” around to “ascribe” intrinsic value! As if all Louis was saying was “God doesn’t exist, so there’s no one to ascribe inherent worth.” How easy to refute: all one has to say is “Actually, God does exist, so there is someone around!” (Sentence 4). That is not the atheist argument — it is that the phrase “intrinsic value” doesn’t make any coherent sense: it’s an importance that could only exist independently of all valuers, including God, and therefore cannot exist. Can a tree be important if it was the only thing that existed, with no one to consider it important? If your answer is no, you agree with skeptics that intrinsic value is impossible and a useless phrase. Let’s think more on this.

The reader is likely coming to see that importance vested by God is not inherent or intrinsic. Not unless one wants to throw out the meaning of words. A thing’s intrinsic value or importance cannot come from outside, by definition. It cannot be given or created or valued by another thing, otherwise it’s extrinsic. So what does this mean for the discussion? Well, as stated, it means we’re speaking nonsense. If God can’t by definition grant an individual intrinsic value, nor other outsiders like friends and family, nor even yourself (remember, you are a valuer, and your inherent value must exist independently of your judgement), then intrinsic value cannot exist. It’s like talking about a square circle. Inherent importance isn’t coherent in the same way inherent desirability isn’t coherent, as Matt Dillahunty once said. You need an agent to desire or value; these are not natural realities like color or gravity, they are mere concepts that cannot exist on their own.

To be fair, the religious are not alone in making this mistake. Not all atheists deny inherent value; they instead base it in human existence, uniqueness, rationality, etc. Most secular and religious belief systems base intrinsic value on something. Yet the point stands. Importance cannot be a natural characteristic, it must be connoted by an agent, a thinker. The two sides are on equal footing here. If the religious wish to continue to use — misuse — inherent value as something God imbues, then they should admit anyone can imbue inherent value. Anyone can decree a human being has natural, irrevocable importance in and of itself for whatever reason. But it would be less contradictory language, holding true to meaning, to say God assigns simple value, by creating and loving us, in the same way humans assign value, by creating and loving ourselves, because of our uniqueness, and so forth.

“But if there’s no inherent value then there’s no reason to be moral! We’ll all kill each other!” We need not waste much ink on this. If we don’t need imaginary objective moral standards to have rational, effective ethics, we certainly don’t need nonsensical inherent value. If gods aren’t necessary to explain the existence of morality; and if we’re bright enough to know we should believe something is true because there’s evidence for it, not because there would be bad consequences if we did not believe (the argumentum ad consequentiam fallacy); and if relativistic morality and objective morality in practice have shown themselves to be comparably awful and comparably good; then there is little reason to worry. Rational, functioning morality does not need “inherent” values created and imbued by supernatural beings. It just needs values, and humans can generate plenty of those on their own.


Purpose, Intersectionality, and History

This paper posits that primary sources meant for public consumption best allow the historian to understand how intersections between race and gender were used, consciously or not, to advocate for social attitudes and public policy in the United States and the English colonies before it. This is not to say such use can never be gleaned from sources meant to remain largely unseen, nor that public ones will always prove helpful; the nature of sources simply creates a general rule. Public sources like narratives and films typically offer arguments.[1] Diaries and letters to friends tend to lack them. A public creation had a unique purpose and audience, unlikely to exist in the first place without an intention to persuade, and with that intention came more attention to intersectionality, whether in a positive (liberatory) or negative (oppressive) manner.

An intersection between race and gender traditionally refers to an overlap in challenges: a woman of color, for instance, will face oppressive norms targeting both women and people of color, whereas a white woman will only face one of these. Here the meaning will include this but is expanded slightly to reflect how the term has grown beyond academic circles. In cultural and justice movement parlance, it has become near-synonymous with solidarity, in recognition of overlapping oppressions (“True feminism is intersectional,” “If we fight sexism we must fight racism too, as these work together against women of color,” and so on). Therefore “intersectionality” has a negative and positive connotation: multiple identities plagued by multiple societal assaults, but also the coming together of those who wish to address this, who declare the struggle of others to be their own. We will therefore consider intersectionality as oppressive and liberatory developments, intimately intertwined, relating to women of color.

Salt of the Earth, the 1954 film in which the wives of striking Mexican American workers ensure a victory over a zinc mining company by taking over the picket line, is intersectional at its core.[2] Meant for a public audience, it uses overlapping categorical challenges to argue for gender and racial (as well as class) liberation. The film was created by blacklisted Hollywood professionals alongside the strikers and picketers on which the story is based (those of the 1950-1951 labor struggle at Empire Zinc in Hanover, New Mexico) to push back against American dogma of the era: normalized sexism, racism, exploitation of workers, and the equation of any efforts to address such problems with communism.[3] Many scenes highlight the brutality or absurdity of these injustices, with workers dying in unsafe conditions, police beating Ramon Quintero for talking back “to a white man,” and women being laughed at when they declare they will cover the picket line, only to amaze when they ferociously battle police.[4]

Intersectionality is sometimes shown, not told, with the protagonist Esperanza Quintero facing the full brunt of both womanhood and miserable class conditions in the company-owned town (exploitation of workers includes that of their families). She does not receive racist abuse herself, but, as a Mexican American woman whose husband does, the implication is clear enough. She shares the burdens of racism with men, and those of exploitation — with women’s oppression a unique, additional yoke. In the most explicit expositional instance of intersectionality, Esperanza castigates Ramon for wanting to keep her in her place, arguing that is precisely like the “Anglos” wanting to put “dirty Mexicans” in theirs.[5] Sexism is as despicable as racism, the audience is told, and therefore if you fight the latter you must also fight the former. The creators of Salt of the Earth use intersectionality to argue for equality for women by strategically tapping into preexisting anti-racist sentiment: the men of the movie understand that bigotry against Mexican Americans is wrong from the start, and this is gradually extended to women. The audience — Americans in general, unions, the labor movement — must do the same.

A similar public source to consider is Toni Morrison’s 1987 novel Beloved. Like Salt of the Earth, Beloved is historical fiction. Characters and events are invented, but it is based on a historical happening: in 1850s Ohio, a formerly enslaved woman named Margaret Garner killed one of her children and attempted to kill the rest to prevent their enslavement.[6] One could perhaps argue Salt of the Earth, though fiction, is a primary source for the 1950-1951 Hanover strike, given its Hanover co-creators; it is clearly a primary source for 1954 and its hegemonic American values and activist counterculture — historians can examine a source as an event and what the source says about an earlier event.[7] Beloved cannot be considered a primary source of the Garner case, being written about 130 years later, but is a primary source of the late 1980s. Therefore, any overall argument or comments on intersectionality reflect and reveal the thinking of Morrison’s time.

In her later foreword, Morrison writes of another inspiration for her novel, her feeling of intense freedom after leaving her job to pursue her writing passions.[8] She explains:

I think now it was the shock of liberation that drew my thoughts to what “free” could possibly mean to women. In the eighties, the debate was still roiling: equal pay, equal treatment, access to professions, schools…and choice without stigma. To marry or not. To have children or not. Inevitably these thoughts led me to the different history of black women in this country—a history in which marriage was discouraged, impossible, or illegal; in which birthing children was required, but “having” them, being responsible for them—being, in other words, their parent—was as out of the question as freedom.[9]

This illuminates both Morrison’s purpose and how intersectionality forms its foundation. “Free” meant something different to women in 1987, she suggests, than to men. Men may have understood women’s true freedom as equal rights and access, but did they understand it also to mean, as women did, freedom from judgment, freedom not only to make choices but to live by them without shame? Morrison then turns to intersectionality: black women were forced to live by a different, harsher set of rules. This was a comment on slavery, but it is implied on the same page that the multiple challenges of multiple identities marked the 1980s as well: a black woman’s story, Garner’s case, must “relate…to contemporary issues about freedom, responsibility, and women’s ‘place.’”[10] In Beloved, Sethe (representing Garner) consistently saw the world differently than her lover Paul D, from what was on her back to whether killing Beloved was justified, love, resistance.[11] To a formerly enslaved black woman and mother, the act set Beloved free; to a formerly enslaved man, it was a horrific crime.[12] Sethe saw choice as freedom, and if Paul D saw the act as a choice that could not be made, if he offered only stigma, then freedom could not exist either. Recognizing the unique challenges and perspectives of black women and mothers, Morrison urges readers of the 1980s to do the same, to graft a conception of true freedom onto personal attitudes and public policy.

Moving beyond historical fiction, let us examine a nonfiction text from the era of the Salem witch trials to observe how Native American women were even more vulnerable to accusation than white women. Whereas Beloved and Salt of the Earth make conscious moves against intersectional oppression, the following work, wittingly or not, solidified it. Boston clergyman Cotton Mather’s A Brand Pluck’d Out of the Burning (1693) begins by recounting how Mercy Short, an allegedly possessed servant girl, was once captured by “cruel and Bloody Indians.”[13] This seemingly out-of-place opening establishes a tacit connection between indigenous people and the witchcraft plaguing Salem. This link is made more explicit later in the work, when Mather writes that someone executed at Salem testified “Indian sagamores” had been present at witch meetings to organize “the methods of ruining New England,” and that Mercy Short, in a possessed state, revealed the same, adding Native Americans at such meetings held a book of “Idolatrous Devotions.”[14] Mather, and others, believed indigenous peoples were involved in the Devil’s work. Moreover, several other afflicted women and girls had survived Native American attacks, further connecting the two terrors.[15]

This placed women like Tituba, a Native American slave, in peril. Women were the primary victims of the witch hunts.[16] Tituba’s race was an added vulnerability (as was, admittedly, a pre-hysteria association, deserved or not, of Tituba with magic).[17] She was accused and pressured into naming other women as witches, then imprisoned (she later recanted).[18] A Brand Pluck’d Out of the Burning was intended to describe Short’s tribulation, as well as offer some remedies,[19] but also to explain its cause. Native Americans, it told its Puritan readers, were heavily involved in the Devil’s work, likely helping create other cross-categorical consequences for native women who came after Tituba. The text both described and maintained a troubling intersection in the New England colonies.

A captivity narrative from the previous decade, Mary Rowlandson’s The Sovereignty and Goodness of God, likewise encouraged intersectional oppression. This source is a bit different than A Brand Pluck’d Out of the Burning because it is a first-hand account of one’s own experience; Mather’s work is largely a second-hand account of Short’s experience (compare “…shee still imagined herself in a desolate cellar” to the first-person language of Rowlandson[20]). Rowlandson was an Englishwoman from Massachusetts held captive for three months by the Narragansett, Nipmuc, and Wompanoag during King Philip’s War (1675-1676).[21] Her 1682 account of this event both characterized Native Americans as animals and carefully defined a woman’s proper place — encouraging racism against some, patriarchy against others, and the full weight of both for Native American women. To Rowlandson, native peoples were “dogs,” “beasts,” “merciless and cruel,” creatures of great “savageness and brutishness.”[22] They were “Heathens” of “foul looks,” whose land was unadulterated “wilderness.”[23] Native society was animalistic, a contrast to white Puritan civilization.[24]

Rowlandson reinforced ideas of true womanhood by downplaying the power of Weetamoo, the female Pocassett Wompanoag chief, whose community leadership, possession of vast land and servants, and engagement in diplomacy and war violated Rowlandson’s understanding of a woman’s proper role in society.[25] Weetamoo’s authority was well-known by the English.[26] Yet Rowlandson put her in a box, suggesting her authority was an act, never acknowledging her as a chief (unlike Native American men), and emphasizing her daily tasks to implicitly question her status.[27] Rowlandson ignored the fact that Weetamoo’s “work” was a key part of tribal diplomacy, attempted to portray her own servitude as service to a male chief rather than to Weetamoo (giving possessions first to him), and later labeled Weetamoo an arrogant, “proud gossip” — meaning, historian Lisa Brooks notes, “in English colonial idiom, a woman who does not adhere to her position as a wife.”[28] The signals to her English readers were clear: indigenous people were savages and a woman’s place was in the domestic, not the public, sphere. If Weetamoo’s power was common knowledge, the audience would be led to an inevitable conclusion: a Native American woman was inferior twofold, an animal divorced from true womanhood.

As we have seen, public documents make a case for or against norms of domination that impact women of color in unique, conjoining ways. But sources meant to remain private are often less useful for historians seeking to understand intersectionality — as mentioned in the introduction, with less intention to persuade come less bold, and rarer, pronouncements, whether oppressive or liberatory. Consider the diary of Martha Ballard, written 1785-1812. Ballard, a midwife who delivered over eight hundred infants in Hallowell, Maine, left a daily record of her work, home, and social life.[29] The diary does have some liberatory implications for women, subverting ideas of men being the exclusive important actors in the medical and economic spheres.[30] But its purpose was solely for Ballard — keeping track of payments, weather patterns, and so on.[31] There was little need to comment on a woman’s place, and even less was said about race. Though there do exist some laments over the burdens of her work, mentions of delivering black babies, and notice of a black female doctor, intersectionality is beyond Ballard’s gaze, or at least beyond the purpose of her text.[32]

Similarly, private letters often lack argument. True, an audience of one is more likely to involve persuasion than an audience of none, but still less likely than a mass audience. And without much of an audience, ideas need not be fully fleshed out nor, at times, addressed at all. Intersectional knowledge can be assumed, ignored as inappropriate given the context, and so on. For instance, take a letter abolitionist and women’s rights activist Sarah Grimké wrote to Sarah Douglass of the Philadelphia Female Anti-Slavery Society on February 22, 1837.[33] Grimké expressed sympathy for Douglass, a black activist, on account of race: “I feel deeply for thee in thy sufferings on account of the cruel and unchristian prejudice…”[34] But while patriarchal norms and restrictions lay near the surface, with Grimké describing the explicitly “female prayer meetings” and gatherings of “the ladies” where her early work was often contained, she made no comment on Douglass’ dual challenge of black womanhood.[35] The letter was a report of Grimké’s meetings, with no intention to persuade. Perhaps she felt it off-topic to broach womanhood and intersectionality. Perhaps she believed it too obvious to mention — or that it would undercut or distract from her extension of sympathy toward Douglass and the unique challenges of racism (“Yes, you alone face racial prejudice, but do we not both face gender oppression?”). On the one hand, the letter could seem surprising: how could Grimké, who along with her sister Angelina was pushing for both women’s equality and abolition for blacks at this time, not have discussed womanhood, race, and their interplays with a black female organizer like Douglass?[36] On the other, this is not surprising at all: this was a private letter with a limited purpose. It likely would have looked quite different had it been a public letter meant for a mass audience.

In sum, this paper offered a general view of how the historian can find and explore intersectionality, whether women of color facing overlapping challenges or the emancipatory mindsets and methods needed to address them. Purpose and audience categorized the most and least useful sources for such an endeavor. Public-intended sources like films, novels, secondary narratives, first-person narratives, and more (autobiographies, memoirs, public photographs and art, articles, public letters) show how intersectionality was utilized, advancing regressive or progressive attitudes and causes. Types of sources meant to remain private like diaries, personal letters, and so on (private photographs and art, some legal and government documents) often have no argument and are less helpful. From here, a future writing could explore the exceptions that of course exist. More ambitiously, another might attempt to examine the effectiveness of each type of source in producing oppressive or liberatory change: does the visual-auditory stimulation of film or the inner thoughts in memoirs evoke emotions and reactions that best facilitate attitudes and action? Is seeing the intimate perspectives of multiple characters in a novel of historical fiction most powerful, or that of one thinker in an autobiography, who was at least a real person? Or is a straightforward narrative, the writer detached, lurking in the background as far away as possible, just as effective as more personal sources in pushing readers to hold back or stand with women of color? The historian would require extensive knowledge of the historical reactions to the (many) sources considered (D.W. Griffith’s Birth of a Nation famously sparked riots — can such incidents be quantified? Was this more likely to occur due to films than photographs?) and perhaps a co-author from the field of psychology to test (admittedly present-day) human reactions to various types of sources scientifically to bolster the case.


[1] Mary Lynn Rampolla, A Pocket Guide to Writing in History, 10th ed. (Boston: Bedford/St. Martin’s, 2020), 14.

[2] Salt of the Earth, directed by Herbert Biberman (1954; Independent Productions Corporation).

[3] Carl R. Weinberg, “‘Salt of the Earth’: Labor, Film, and the Cold War,” Organization of American Historians Magazine of History 24, no. 4 (October 2010): 41-45.

  Benjamin Balthaser, “Cold War Re-Visions: Representation and Resistance in the Unseen Salt of the Earth,” American Quarterly 60, no. 2 (June 2008): 347-371.

[4] Salt of the Earth, Biberman.

[5] Ibid.

[6] Toni Morrison, Beloved (New York: Vintage Books, 2004), xvii.

[7] Kathleen Kennedy (lecture, Missouri State University, April 26, 2022).

[8] Morrison, Beloved, xvi.

[9] Ibid., xvi-xvii.

[10] Ibid., xvii.

[11] Ibid., 20, 25; 181, 193-195. To Sethe, her back was adorned with “her chokecherry tree”; Paul D noted “a revolting clump of scars.” This should be interpreted as Sethe distancing herself from the trauma of the whip, reframing and disempowering horrific mutilation through positive language. Paul D simply saw the terrors of slavery engraved on the body. Here Morrison subtly considers a former slave’s psychological self-preservation. When Sethe admitted to killing Beloved, she was unapologetic to Paul D — “I stopped him [the slavemaster]… I took and put my babies where they’d be safe” — but he was horrified, first denying the truth, then feeling a “roaring” in his head, then telling Sethe she loved her children too much. Then, like her sons and the townspeople at large, Paul D rejected Sethe, leaving her.

[12] Ibid., 193-195.

[13] Cotton Mather, A Brand Pluck’d Out of the Burning, in George Lincoln Burr, Narratives of the New England Witch Trials (Mineola, New York: Dover Publications, 2012), 259.

[14] Ibid., 281-282.

[15] Richard Godbeer, The Salem Witch Hunt: A Brief History with Documents (New York: Bedford/St. Martin’s, 2018), 83.

[16] Michael J. Salevouris and Conal Furay, The Methods and Skills of History (Hoboken, NJ: Wiley-Blackwell, 2015), 211.

[17] Godbeer, Salem, 83.

[18] Ibid., 83-84.

[19] Burr, Narratives, 255-258.

[20] Ibid., 262.

[21] Mary Rowlandson, The Sovereignty and Goodness of God by Mary Rowlandson with Related Documents, ed. Neal Salisbury (Boston: Bedford Books, 2018).

[22] Ibid., 76-77, 113-114.

[23] Ibid., 100, 76.

[24] This was the typical imperialist view. See Kirsten Fischer, “The Imperial Gaze: Native American, African American, and Colonial Women in European Eyes,” in A Companion to American Women’s History, ed. Nancy A. Hewitt (Malden MA: Blackwell Publishing, 2002), 3-11.

[25] Lisa Brooks, Our Beloved Kin: A New History of King Philip’s War (New Haven: Yale University Press, 2018), chapter one.

[26] Ibid., 264.

[27] Ibid.

   Rowlandson, Sovereignty, 81, 103.

[28] Brooks, Our Beloved Kin, 264, 270.

[29] Laurel Thatcher Ulrich, A Midwife’s Tale: The Life of Martha Ballard, Based on Her Diary, 1785-1812 (New York: Vintage Books, 1999).

[30] Ibid., 28-30.

[31] Ibid., 168, 262-263.

[32] Ibid., 225-226, 97, 53.

[33] Sarah Grimké, “Letter to Sarah Douglass,” in Kathryn Kish Sklar, Women’s Rights Emerges within the Antislavery Movement, 1830-1870 (New York: Bedford/St. Martin’s, 2019), 94-95.

[34] Ibid., 95.

[35] Ibid., 94.

[36] Ibid., 84-148.

U.S. Segregation Could Have Lasted into the 1990s — South Africa’s Did

The 1960s were not that long ago. Many blacks who endured Jim Crow are still alive — as are many of the whites who kept blacks out of the swimming pool. When we think about history, we often see developments as natural — segregation was always going to fall in 1968, wasn’t it? Humanity was evolving, and had finally reached the stage of shedding legal racial separation and discrimination. That never could have continued into the 1970s, 80s, and 90s. We were, finally, too civilized for that.

South Africa provides some perspective. It was brutally ruled by a small minority of white colonizers for centuries, first the Dutch (1652-1815) and then the British (1815-1910). Slavery was legal until 1834. White minority rule continued from 1910, when Britain made the nation a dominion (self-governing yet remaining part of the empire; whites voted in 1960 to become a fully independent republic), into the early 1990s. The era known as apartheid, when harsher discriminatory laws and strict “apartness” began, ran from 1948 to the early 1990s, but it is important to know how bad things were before this:

Scores of laws and regulations separated the population into distinct groups, ensuring white South Africans access to education, higher-paying jobs, natural resources, and property while denying such things to the black South African population, Indians, and people of mixed race. Between union in 1910 and 1948, a variety of whites-only political parties governed South Africa… The agreement that created the Union denied black South Africans the right to vote… Regulations set aside an increasing amount of the most fertile land for white farmers and forced most of the black South African population to live in areas known as reserves. Occupying the least fertile and least desirable land and lacking industries or other developments, the reserves were difficult places to make a living. The bad conditions on the reserves and policies such as a requirement that taxes be paid in cash drove many black South Africans—particularly men—to farms and cities in search of employment opportunities.

With blacks pushing into cities and for their civil rights, the government began “implementing the apartheid system to segregate the country’s races and guarantee the dominance of the white minority.” Apartheid was the solidification of segregation into law. Legislation segregated public facilities like buses, stores, restaurants, hospitals, parks, and beaches. Further, one of the

…most significant acts in terms of forming the basis of the apartheid system was the Group Areas Act of 1950. It established residential and business sections in urban areas for each race, and members of other races were barred from living, operating businesses, or owning land in them—which led to thousands of Coloureds, Blacks, and Indians being removed from areas classified for white occupation… [The government] set aside more than 80 percent of South Africa’s land for the white minority. To help enforce the segregation of the races and prevent Blacks from encroaching on white areas, the government strengthened the existing “pass” laws, which required nonwhites to carry documents authorizing their presence in restricted areas…

Separate educational standards were established for nonwhites. The Bantu Education Act (1953) provided for the creation of state-run schools, which Black children were required to attend, with the goal of training the children for the manual labour and menial jobs that the government deemed suitable for those of their race. The Extension of University Education Act (1959) largely prohibited established universities from accepting nonwhite students…

[In addition,] the Prohibition of Mixed Marriages Act (1949) and the Immorality Amendment Act (1950) prohibited interracial marriage or sex…

The created conditions were predictable: “While whites generally lived well, Indians, Coloureds, and especially Blacks suffered from widespread poverty, malnutrition, and disease.”

Then, in 1970, the Bantu Homelands Citizenship Act stripped blacks of their South African citizenship, making them citizens of the reserves instead.

Apartheid ended only in the early 1990s, after decades of organizing, protest, civil disobedience, riots, and violence. Lives were lost and laws were changed — through struggle and strife, most explosively in the 1970s and 80s, a better world was built. The same happened in the U.S. in the 1950s and 60s. But our civil rights struggle and final victory could easily have come later. The whites of South Africa who fought to maintain apartheid all the way into the 1990s were not fundamentally different human beings from American whites of the same era. They may have held more despicable views on average, been more stuck in the segregationist mindset, but they were not different creatures. Varying views come from unique national histories, different societal developments — different circumstances. Had the American civil rights battle unfolded differently, Jim Crow could have persisted past the fall of the Berlin Wall.

Such a statement feels like an attack on sanity for two reasons: history feels natural — surely it was impossible for events to unfold in other ways — and nationalism leads Americans to think themselves better, more fundamentally good and civilized, than people of other nations. Don’t tell them that other countries ended slavery, gave women the right to vote, and so on before the United States (and most, while rife with racism and exclusion, did not codify segregation into law as America did; black Americans migrated to France in the 19th and 20th centuries for refuge, with Richard Wright declaring there to be “more freedom in one square block of Paris than in the entire United States”). If one puts aside the glorification of country and myths of human difference and acknowledges that American history and circumstances could have gone differently, the disturbing images begin to appear: discos keeping out people of color, an invasion of Vietnam waged with a segregated army, Blockbusters with “Whites Only” signs.

For more from the author, subscribe and follow or read his books.

‘Beloved’ as History

In one sense, fiction can present (or represent, a better term) history as an autobiography might, exploring the inner thoughts and emotions of a survivor or witness. In another, fiction is more like a standard nonfiction work, its omniscient gaze shifting from person to person, revealing that which a single individual cannot know and experience, but not looking within, at the personal. Toni Morrison’s 1987 Beloved synthesizes these two modes: the true, unique power of fiction is the ability to explore the inner experiences of multiple persons. While only “historically true in essence,” as Morrison put it, the novel offers a history of slavery and its persistent trauma through the characters Sethe, Paul D, Denver, Beloved, and more.[1] It is posited here that Morrison believed the history of enslavement could be more fully understood through representations of the personal experiences of diverse impacted persons. This is the source of Beloved’s power.

One way to approach this is to consider different perspectives on the same or similar events. To Sethe, her back was adorned with “her chokecherry tree”; Paul D noted “a revolting clump of scars.”[2] This should be interpreted as Sethe distancing herself from the trauma of the whip, reframing and disempowering horrific mutilation through positive language. Paul D simply saw the terrors of slavery engraved on the body. Here Morrison subtly considers a former slave’s psychological self-preservation. As another example, both Sethe and Paul D experienced sexual assault: slaveowners forced milk from Sethe’s breasts, and guards forced Paul D to perform oral sex.[3] Out of fear, “Paul D retched — vomiting up nothing at all. An observing guard smashed his shoulder with the rifle…”[4] “They held me down and took it,” Sethe thought mournfully, “Milk that belonged to my baby.”[5] Slavery was a violation of personhood, an attack on motherhood and manhood alike. Morrison’s characters experienced intense pain and shame over these violations; here the author draws attention not only to the pervasive sexual abuse inherent to American slavery but also to how it could take different forms, with different meanings, for women and men.
Finally, consider how Sethe killed her infant to save the child from slavery.[6] Years later, Sethe was unapologetic to Paul D — “I stopped him [the slavemaster]… I took and put my babies where they’d be safe” — but he was horrified, first denying the truth, then feeling a “roaring” in his head, then telling Sethe she loved her children too much.[7] Then, like her sons and the townspeople at large, Paul D rejected Sethe, leaving her.[8] This suggests varying views on the meaning of freedom — whether death can be true freedom or only its absence, or whether true freedom lies in determining one’s own fate — as well as on ethics, resistance, and love; a formerly enslaved woman and mother may judge such things differently than a formerly enslaved man might.[9]

Through the use of fiction, Morrison can offer diverse intimate perspectives, emotions, and experiences of former slaves, allowing for a more holistic understanding of the history of enslavement. This is accomplished through both a standard literary narrative and, in several later chapters, streams of consciousness from Sethe, Denver, Beloved, and an amalgamation of the three.[10] Indeed, the divergent perceptions of Sethe and Paul D examined here are a small selection from an intensely complex work with several other prominent characters; there is much more to explore. It is also the case that in reimagining and representing experiences, Morrison attempts to make history personal and comprehensible for the reader, to transmit the emotions of slavery from page to body.[11] Can history be understood, she asks, if we do not experience it ourselves, in at least some sense? In other words, Beloved is history as “personal experience” — former slaves’ and the reader’s.[12]


[1] Toni Morrison, Beloved (New York: Vintage Books, 2004), xvii.

[2] Ibid., 20, 25.

[3] Ibid., 19-20, 127.

[4] Ibid., 127.

[5] Ibid., 236.

[6] Ibid., 174-177.

[7] Ibid., 181, 193-194.

[8] Ibid., 194-195.

[9] Morrison alludes, in her foreword, to wanting to explore what freedom meant to women: ibid., xvi-xvii.

[10] Ibid., 236-256.

[11] Morrison writes that in opening the book she wanted the reader to feel kidnapped, as captured and sold Africans did: ibid., xviii-xix.

[12] Ibid., xix.

The MAIN Reasons to Abolish Student Debt

Do you favor acronyms as much as you do a more decent society? Then here are the MAIN reasons to abolish student debt:

M – Most other wealthy democracies offer free (tax-funded) college, just like public schools; the U.S. should have done the same decades ago.

A – All positive social change and new government programs are “unfair” to those who came before and couldn’t enjoy them; that’s how time works.

I – Immense economic stimulus: money spent on debt repayment is money unspent in the market, so end the waste and boost the economy by trillions.

N – Neighbors are hurting, with skyrocketing costs of houses, rent, food, gas, and more, with no corresponding explosion of wages; what does Lincoln’s “government for the people” mean if not one that makes lives a little better?


‘Salt of the Earth’: Liberal or Leftist?

Labor historian Carl R. Weinberg argues that the Cold War was fought at a cultural level, films being one weapon to influence American perspectives on matters of class and labor, gender, and race.[1] He considers scenes from Salt of the Earth, the 1954 picture in which the wives of striking Mexican American workers ensure victory over a zinc mining company by taking over the picket line; these scenes, he argues, evidence a push against hierarchical gender relations, racial prejudice, and corporate-state power over unions and workers.[2] Cultural and literary scholar Benjamin Balthaser takes the same film and explores the scenes left on the cutting room floor, positing that the filmmakers desired a stronger assault on U.S. imperialism, anti-communism at home and abroad (such as McCarthyism and the Korean War), and white and gender supremacy, while the strikers on whom the film was based, despite their sympathetic views and militancy, felt such commentary would hurt their labor and civil rights organizing — or even bring retribution.[3] Balthaser sees a restrained version born of competing interests, and Weinberg, without exploring the causes, notices the same effect: there is nearly no “mention of the broader political context,” little commentary on communism or America’s anti-communist policies.[4] It is a bit odd to argue that Salt of the Earth was a cultural battleground of the Cold War while finding it had little to say about communism, but Weinberg lands roughly on the same page as Balthaser: the film boldly takes a stand for racial and gender equality, and of course for union and workers’ rights, but avoids the larger ideological battle of capitalism versus communism. They are correct: this is largely a liberal, not a leftist, film.

This does not mean communist sympathies made no appearance, of course: surviving the editing bay was a scene that introduced the character of Frank Barnes of “the International” (the International Union of Mine, Mill and Smelter Workers, a communist-led union), who strongly supported the strike and expressed a willingness to learn more of Mexican and Mexican American culture.[5] Later, “Reds” are blamed for causing the strike.[6] And as Weinberg notes, the Taft-Hartley Act, legislation laced with anti-communist clauses, is what forces the men to stop picketing.[7] Yet all this is as close as Salt comes to connecting labor, racial, and women’s struggles with a better world, to suggesting how greater rights and freedoms could create communism, or vice versa. As Balthaser argues, the original script attempted to draw a stronger connection between this local event and actual or potential political-economic systems.[8] The final film positions communists as supporters of positive social change for women, workers, and people of color, but at best only implies that patriarchy, workplace misery and class exploitation, and racism were toxins inherent to the capitalist system of which the United States was a part, toxins only communism could address. And, it might be noted, the case for such an implication is slightly weaker for patriarchy and racism, as terms such as “Reds” arise only in conversations centered on the strike and the men’s relationships to it.

True, Salt of the Earth is a direct attack on power structures. Women, living in a company town with poor conditions like a lack of hot water, want to picket even before the men decide to strike; they break an “unwritten rule” by joining the men’s picket line; they demand “equality”; they mock men; they demand to take over the picket line when the men are forced out, battling police and spending time in jail.[9] Esperanza Quintero, the film’s protagonist and narrator, dour at first, sparkles to life the more she ignores her husband Ramon’s demands and involves herself in the huelga.[10] By the end, women’s power at the picket line has transferred to the home: the “old way” is gone, Esperanza tells Ramon when he raises a hand to strike her.[11] “Have you learned nothing from the strike?” she asks. Likewise, racist company men (“They’re like children”) and police (“That’s no way to talk to a white man”) are the villains, as is the mining company that forces workers to labor alone, resulting in deaths, and offers miserable, discriminatory pay.[12] These struggles are often connected (intersectionality): when Esperanza denounces the “old way,” she compares being put in her place to the “Anglos” putting “dirty Mexicans” in theirs.[13] However, it could be that better working conditions, women’s rights, and racial justice can, as the happy ending suggests, be accomplished without communism. By never directly linking progress to the dismantling of capitalism, the film isolates itself from the wider Cold War debate.


[1] Carl R. Weinberg, “‘Salt of the Earth’: Labor, Film, and the Cold War,” Organization of American Historians Magazine of History 24, no. 4 (October 2010): 42.

[2] Ibid., 42-44.

[3] Benjamin Balthaser, “Cold War Re-Visions: Representation and Resistance in the Unseen Salt of the Earth,” American Quarterly 60, no. 2 (June 2008): 349.

[4] Weinberg, “Salt,” 43.

[5] Salt of the Earth, directed by Herbert Biberman (1954; Independent Productions Corporation).

[6] Ibid.

[7] Ibid.

[8] Balthaser, “Cold War,” 350-351. “[The cut scenes] connect the particular and local struggle of the Mexican American mine workers of Local 890 to the larger state, civic, and corporate apparatus of the international cold war; and they link the cold war to a longer U.S. history of imperial conquest, racism, and industrial violence. Together these omissions construct a map of cold war social relations…”

[9] Salt of the Earth, Biberman.

[10] Ibid.

[11] Ibid.

[12] Ibid.

[13] Ibid.