“Progress Caused All of the Black Schools to Close”: Desegregation and Community Erosion

This study uses oral histories from North Carolina to posit that it was not uncommon for former students and educators of color in the late 1990s and early 2000s to judge school desegregation as having wrought significant negative side effects. These interviews — conducted by others, not this author — reveal much nuance and complexity of opinion, as the subjects place positive changes alongside harmful ones. Interviewees affirm the value of racial integration, as one might expect, though this paper does not extensively cover those comments.[1] It instead seeks to build a more holistic understanding of the perceptions of African Americans and, to a lesser extent, Native Americans at the turn of the millennium. Of course, talk of integration’s harms and segregation’s benefits can be as discomforting as it is surprising, at least to those outside certain communities and age groups. But like the scientist whose results betray her hypothesis, the historian is only more fascinated, and follows the truth wherever it goes. And “integration as harmful” can become less counterintuitive rather quickly. After all, desegregation in the 1950s and ’60s suddenly placed children of color under white authority and in dangerous situations. Former students described disproportionate expulsions and punishments for black students.[2] Black youths faced all manner of racist abuse and nastiness, and some may have dropped out of school completely as a result.[3] Black communities faced a violent white backlash, causing some individuals to wonder whether integration was worth it.[4] Perceived downsides to desegregation, whether short- or long-term, should not be so surprising. The changes explored herein concern the relationships, institutions, close communities, and other positives lost to blacks and Native Americans during and after this process.

Historians have documented black views on the benefits of schooling under segregation, including teachers who were extremely demanding but also felt like part of the family.[5] James Atwater, a black student in Chapel Hill, North Carolina, from the 1930s to the 1950s, spoke of such connectivity in a 2001 interview. He was asked about Lincoln, the black school described by interviewer Jennifer Nardone as “a very tight knit community” with “a sense of family.”[6] Atwater explained that “it comes back to the physical”: teachers lived across the street from students, attended church with them, knew parents well, hosted gatherings at their homes, and so on. “So there was,” Atwater remembered, “the kind of relationship that one wouldn’t normally have with a teacher if the teacher had been living in another town, or been living in another part of town.” And as Lincoln was first through twelfth grade, students and educators would know each other long before and long after they shared a classroom. Integration would disrupt these patterns.

Atwater was out of school by the time of the 1954 Brown v. Board of Education ruling, but had “misgivings” about the desegregation order.[7] “Would that mean that all schools would simply be integrated or would it mean that black schools would disappear?” This was a question about who ran schools and colleges — a black school was operated by black leaders and educators; it did not merely have a majority black student body. “I think that that was the general fear among many people, many African Americans,” Atwater said. “Does this mean that [there] won’t be any more black university presidents? Does this mean there won’t be any more black principals? And any more black teachers[?]” Integration wrought “mixed feelings” due to concerns that power in and ownership of institutions would simply pass to whites. “Obviously, there’s going to be duplication” of roles when city schools are merged (and, we can add, of buildings, leading to closings). “How fair would the process be, in determining who goes and who stays[?]” Many black educators indeed lost their jobs.[8] Integration would mean black students had fewer black teachers; leaving behind or losing beloved educators was quite painful for children.[9]

Desegregation resulted in the closing of many black schools. Arthur Griffin, who grew up in Charlotte, believed Second Ward, his high school that shut down in 1969, was a “casualty” of a “desegregation lawsuit” and of the urban renewal that forced black families to move to other parts of town.[10] Significantly, the poorer, unequal condition of black schools like Second Ward — a result of the Jim Crow segregation that civil rights lawsuits aimed to address — spurred integration policies that favored the desires of white families. “You’d have white students from Myers Park coming to Second Ward, and students from Second Ward going to Myers Park,” Griffin remembered, speaking of the potential desegregation plan in Charlotte. “And I think, like in many other decisions back then, [white] folks just said, ‘No, we’re not going to a school that looks like this.’ Because [our] school was not in great repair, didn’t have nearly the things that Myers Park High School had.” Whites insisted that any children they transferred attend a better school, even if it was farther away than Second Ward. This transpired, and Second Ward closed down. Griffin felt “a sense of betrayal and loss.” Asked in 1999 why Second Ward had been so special to him, Griffin described the school as akin to “roots” — the older kids had attended, he had never envisioned going anywhere else, the sports tradition was important, he had a wealth of fond memories, etc. Second Ward “was a family”: people came together, cared for one another, and strove for self-improvement. The building itself was like a family member. It was a cornerstone of the community — integration broke many such cornerstones. “Progress,” Griffin said, “caused all of the black schools to close.”

Latrelle McAllister, another African American student in Charlotte, shared the concerns over school closures.[11] Alumni, she recalled, would still be fundraising for and involved with her high school decades after graduating. One would be involved long before attending as well, as children attended regular athletic events and band performances. Everyone went to West Charlotte High. “There is a rich heritage. There is a broad base of support for this institution.” McAllister’s “first civil rights protest” was to prevent the closing of her high school in the early 1970s; the “whole community…gathered” to save the school. This is again notable — a civil rights march was necessary to counter the (white-run and -favored) effects of civil rights and integration. Progress had its downsides, given who retained power and oversaw policy changes. “A lot of the historically black schools had been closed,” but activism preserved West Charlotte.

While McAllister saw the benefits of integration, such as equitable funding and greater cultural tolerance, she said in 1998 that “there is probably in the black community, and certainly in our household, an ongoing debate about the degree to which integration helps our children or hurts our children,” as students did not have the same caring, lasting, family-esque relationships with teachers that they used to.[12] The teacher-parent relationships were likewise lost, damaging efforts to keep students on the right path: “if I got in trouble…I could be sure that my mother would know about it or my father would know about it and that something would be done about it. There’s not that type of support [today]. There’s not that village that we talk about that’s important in raising and nurturing and shaping young minds.” (Another student recalled teachers coming to student homes for discussions, rather than waiting on parent-teacher conferences.[13]) Scholar Gloria Ladson-Billings of the National Academy of Education and the University of Wisconsin notes that the civil rights era freed better-off blacks to move to other parts of town, meaning disconnected, distant teachers regardless of color: low-income, urban students’ “Black teachers are driving in from the suburbs the same way their White teachers are.”[14] Integration, then, separated people.[15] Student from teacher, teacher from family. Busing students hours away from their neighborhoods was yet another component of the new divides and dynamics created by desegregation.

West Charlotte was “one big family,” recalled former student Alma Enloe.[16] Competition and fights between students were rare. Teachers knew everyone’s name, even in such a large school, and made time for students one-on-one. Enloe appears to see the school as an extension of a close, trusting, family-like community: “When we were growing up any parent in the neighborhood could say something to you if you had done something wrong, even spank you, and nothing was said by the parents because they knew if another parent chastised you that way…you had done something wrong.” Teachers and principals were part of this system. Historian of education Vanessa Siddle Walker writes that they operated with “parentlike authority” and “almost complete autonomy.”[17] Educators were seen as virtual mothers and fathers, students as their own children.[18] Enloe speaks of teacher strictness alongside their caring attitudes, implicitly linking the two.[19] Other interviewees did the same.[20]

Enloe regrets the fights, guns, and other troubles in the public schools of 1998, and notes that some blame integration: “Some people would look at us and say when we were just the blacks by ourselves, at least we knew who the enemy was, saying white people. But now, everybody is together and don’t nobody know who’s the enemy or what, so everybody is just fighting everybody.”[21] Enloe places more blame on the decline of “stronger,” sterner parenting, but perhaps leaves room for the notion that segregation fostered such parenting by creating a close-knit, self-reliant community, which was lost after integration and progress on civil rights: “[Now] we don’t have that togetherness. Everybody is just pulling apart…” Scholar James D. Anderson of the University of Illinois recently argued that when teachers no longer lived in the school neighborhoods and knew families intimately, students had less respect for them, less of a connection and desire to please.[22] We might add that white supremacy and violence surely created an immense pressure on black Americans to set high behavioral and academic standards for children (“Be twice as good” is a well-known mantra[23]), in order to survive. Teachers battled to create advancement opportunities for students and the community as a whole.[24] This is not to glorify segregation or ignore the well-established link between the disproportionate poverty wrought by slavery/Jim Crow and today’s troubled schools, children, or neighborhoods, but rather to consider an additional possible factor, one that the individuals in these sources take seriously: integration weakened school-family relationships, community cohesion or closeness, and certain standards and expectations, contributing to social problems. In a similar vein to Enloe, former student Stella Nickerson said that a black high school “was very important. It was a connection. It was something that the communities could say was definitely theirs.”[25] But, she argues, integration erased a sense of ownership (and generated bad experiences), leading to less parental involvement in the following decades.

Let us now briefly observe the parallels and intersections with the indigenous experience of integration. Native Americans could see influxes of black and white students into their schools as detrimental to a sense of community and, therefore, to optimal learning. This is revealed in an oral history offered by James A. Jones, a principal in Prospect, North Carolina, in the 1970s and ’80s.[26] In 2003, Jones recalled that the area was formerly “nearly a hundred percent” Native American, like “a little Indian reservation.” “We like to kind of keep it that way,” he admitted. It remained “a very close knit community. There were a lot of family relations, family connections in this Prospect community. And it’s very deep. It goes way back to probably the eighteenth century… Most of the land…has been inherited from our ancestors. It’s just been passed down, passed down, passed down…” Redistricting in the 1970s shuffled indigenous students to different schools and brought increasing but modest numbers of black students to Prospect School, sparking consternation and resistance. The principal before Jones resigned. Troopers accompanied black learners to ensure peaceful integration. More white students enrolled as well. Mergers in the 1980s continued to change where students went to school and how close administrators and educators were to students and their families.

Jones directly connects this familiarity with academic and behavioral success. Prospect produced doctors, lawyers, and managers — “I attribute this to the fact that our teachers, most of the teachers knew every parent… I could walk in the classrooms, and I could name ninety percent of those kids’ parents, because I taught…a lot of their parents. If a problem surfaced, I said, ‘Do you want me to talk to your mother and daddy about you?’ ‘No, Mr. Jones. No.’ That eliminated the [need for] discipline right there…”[27] But the changes of the 1970s and ’80s had altered that arrangement. Jones blames higher truancy and dropout rates on the new disconnect between teachers and families. Further, Native American students sent to other schools did not feel like they belonged, violence and fears of it grew more common, and so on. “Crossing the lines” and “racial issues” had taken a toll, and not “left [the] Prospect community happy.” Jones’ interviewer, Malinda Maynor, remarked that “all the ways that children have benefited from greater access and inclusion” occurred alongside the serious new problems Jones described. Clearly, blacks and Native Americans shared concerns that racial integration had caused teacher-family segregation, leading to negative outcomes for students.

To conclude, schools were of massive importance to communities of color, and integration could represent a serious threat. Vanessa Siddle Walker cites instances of direct opposition to desegregation among African Americans.[28] Four decades later, some former students and educators of color remembered desegregation as a mixed bag. There was discrimination and mistreatment. Schools were closed and jobs were lost. At formerly all-black schools, integration changed or pushed aside school traditions.[29] Institutions no longer felt like cornerstones of communities. Teachers no longer lived with and knew families intimately. That lack of familiarity and trust, and perhaps the end of white supremacy in general, changed discipline and how hard students were pushed to excel. The bonds of community were loosened. Social ills followed. Determining, in some empirical way, the full truth of these last perceptions is beyond the scope of this study. There are many factors that could explain changing relationships, behavioral norms, academic success, social problems, and so on, and broad changes have also occurred in predominantly white communities and the nation as a whole over the decades. But there is fertile ground for historians and sociologists to explore these observed effects of desegregation, and should causality be established, we need not be so surprised.



[1] See for instance Oral History Interview with Burnice Hackney, February 5, 2001, interview K-0547, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0547/K-0547.html. Hackney praises the better facilities black students had access to.

[2] Ibid.

Oral History Interview with Arthur Griffin, May 7, 1999, interview K-0168, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0168/K-0168.html.

[3] Oral History Interview with Sheila Florence, January 20, 2001, interview K-0544, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0544/K-0544.html.

[4] Shifting from North Carolina to Arkansas for a moment, Melba Pattillo Beals, Warriors Don’t Cry (New York: Simon Pulse, 2007) is a fine source for both the mistreatment of black students in integrating spaces and for the hesitancy about integration among blacks in the South. There was intense worry that the attempt at integration would cause a white backlash against African Americans in Little Rock, which indeed occurred, with worsening violence, vandalism, firings, and exclusion from public spaces. Melba Beals, one of the nine integrating students, is deserted by black friends and given the cold shoulder by some black neighbors — the Little Rock Nine are “meddling children” (p. 218) who should never have created such an explosive situation. Beals’ friend, expressing fear of coming to her house, which might be targeted, summed up the reaction: “You gotta get used to the fact that you’all are just not one of us anymore. You stuck your necks out, but we’re not willing to die with you” (p. 145). What the Nine did brought danger to everyone else, against their will; the Nine endured some criticism and exclusion in the black community as a result. Other black residents of course supported integration and were proud of the students and their bravery. The point is that there existed differing views on whether integrating Central High was a wise or worthwhile thing to do.

[5] Vanessa Siddle Walker, Their Highest Potential: An African American School Community in the Segregated South (Chapel Hill: University of North Carolina Press, 1996), 3.

[6] Oral History Interview with James Atwater, February 28, 2001, interview K-0201, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0201/K-0201.html.

[7] Ibid.

[8] Interview with Burnice Hackney.

[9] Gloria Ladson-Billings and James D. Anderson, “Policy Dialogue: Black Teachers of the Past, Present, and Future,” History of Education Quarterly 61, no. 1 (February 2021): 95-96.

[10] Interview with Arthur Griffin.

[11] Oral History Interview with Latrelle McAllister, June 25, 1998, interview K-0173, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0173/K-0173.html.

[12] Ibid.

[13] Oral History Interview with Nate Davis, February 6, 2001, interview K-0538, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0538/K-0538.html.

[14] Ladson-Billings and Anderson, 95.

[15] Interview with Latrelle McAllister.

[16] Oral History Interview with Alma Enloe, May 18, 1998, interview K-0167, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0167/K-0167.html.

[17] Siddle Walker, 3.

[18] Ibid., 134-135.

[19] Interview with Alma Enloe.

[20] Oral History Interview with Stella Nickerson, January 20, 2001, interview K-0554, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0554/K-0554.html.

[21] Interview with Alma Enloe.

[22] Ladson-Billings and Anderson, 95.

[23] Interview with Stella Nickerson. “We’ve always had to work harder and prove ourselves more…”

Interview with Burnice Hackney. “You still have got to work harder at whatever you do. You still might come in second.”

[24] Ladson-Billings and Anderson, 95, 97, 100.

[25] Interview with Stella Nickerson.

[26] Oral History Interview with James Arthur Jones, November 19, 2003, interview U-0005, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/U-0005/U-0005.html.

[27] Ibid.

[28] Siddle Walker, 4.

[29] Oral History Interview with John Love, February 17, 1999, interview K-0172, Southern Oral History Program Collection (#4007) in the Southern Oral History Program Collection, Southern Historical Collection, Wilson Library, University of North Carolina at Chapel Hill. Retrieved from https://docsouth.unc.edu/sohp/K-0172/K-0172.html.

Begrudgingly Acknowledged Country Bangers

When someone says they hate country music, they’re typically referring, whether they know it or not, to the neotraditionalist “young country” that arose in the late 1980s and reached full force in the 1990s and early 2000s. You know, the George Strait, Reba McEntire, Alan Jackson, Garth Brooks, Tim McGraw, Kenny Chesney, Brooks & Dunn, Toby Keith, Faith Hill, LeAnn Rimes, Shania Twain, and Brad Paisley era. Luke Combs and other younger artists help keep the style going, though it has evolved a bit under the influence of other genres.

As someone who consumes copious amounts of folk and hip-hop, and enjoys rock as well, I find this dominant brand of country fairly torturous. This despite enjoying it when I was a teenager (I was a religious conservative surrounded by many fans of country, and had somewhat limited exposure to certain genres). This also despite trying to appreciate art in all its forms. I try to take young country for what it is, but most songs can only be “enjoyed” ironically, hate-screaming I’d like to check you for ticks, she thinks my tractor’s sexy, my love affair with water, and we’ll put a boot up your ass, it’s the American way!

Of course, there’s plenty of dumb, low-brow nonsense in hip-hop, pop, and so on as well. For my personal tastes, country suffers from a content problem and a sound problem. Now, the content can be absurd, but the major themes aren’t all fundamentally bad. I reject blind patriotism and nationalism, but farms, trucks, beer, small towns, cowboy boots, dancing, that’s all good stuff to sing about. But the musical style is largely grating to my ear, like death metal or some pop. Christmas music is a good parallel — I like snow, decorations, presents, lights, and holiday feasts, but find the sound of such songs annoying and childish.

All that said, all genres have their bangers. Country is no exception. That’s why I’ve created a “Begrudgingly Acknowledged Country Bangers” playlist. Most songs, of course, don’t come from the young country branch. Just a couple. Others are more 1970s and ’80s — a far superior style — and some are newer songs whose influences from other genres have clearly taken them a step away from Kenny Chesney. Finally, these are predominantly mainstream songs you’d hear at a country bar. This was to prevent me from simply finding the “good country” outside popular, contemporary tastes. You won’t see many alternative country artists like Lyle Lovett or rockabilly legends like Johnny Cash, both of whom are of course incredible and represent my favorite artists of the genre.


Citizenship, Criticism, and Communism

In the 1940s and ’50s, Americans engaged in an intense debate over the content of school textbooks, particularly social studies texts. Fears of communism and socialism spurred a conservative backlash against anything that smacked of collectivism or unpatriotic criticism of the United States.[1] Dangerous books were poisoning the minds of schoolchildren, turning them into Reds, and had to be removed from classrooms.[2] Study of the controversy sparks an interesting question. How did contemporaries understand the relationship between citizenship and dissent? Could one remain a good citizen if engaging in critique of American society? The answers to this question diverged along political lines. Those inclined to answer no tended to be conservative, while yes was more associated with leftists. However, how most citizens — left, center, and right — felt remains open to interpretation.

Textbooks and series accused of “subversion” included American Government (Frank Magruder), Building America (National Education Association), and Man and His Changing Society (Harold Rugg).[3] Alongside George Counts, William H. Kilpatrick, John Dewey, and others, Rugg was a social reconstructionist and progressive reformer who believed education could build a better society. Reconstructionists engaged in leftwing critiques of capitalism and other social realities. Rugg’s widely used series included 1931’s An Introduction to Problems of American Culture. In the introduction, student attention was drawn to ideas like conflicting reports in newspapers, censorship, influence of environment and exposure on belief, job loss from automation, a rapidly changing society, poverty, who controls the government and the press, and more.[4] Youth were asked to “study the needs and try to learn how to improve the community in which you live.”[5] The book went on to explore the booms and busts of the capitalist economy and associated miseries: “even in times of prosperity millions are out of work.”[6] “Why,” Rugg asks, “should there be unemployment and starvation in the richest country in the world?… There are many reasons, but the most important ones can be summed up in one phrase — LACK OF PLANNING.”[7] The U.S. needed a “national plan of producing goods and providing jobs for all.”[8] Rugg uses multiple chapters to lay out his vision, shifting from facts concerning the state of American society to unabashed editorializing with a flavor ranging from New Dealist to socialistic. He advocates for public ownership of select industries, wealth redistribution, and other methods of State intervention to solve the “outstanding problems of American civilization and culture.”[9] While he does not call for the nationalization of all industries, which would have placed him more firmly in the communist or socialist camp, Rugg nods approvingly at the Soviet Union’s centrally planned production, and in sections on politics is sure to include Socialists alongside Republicans and Democrats as potential candidates for elected office.[10]

In a 1940 article in The New York Times, Rugg, perhaps feeling the pressure of a fierce attack on his books, vehemently denied that he was a communist or socialist, saying he supported “free discussion,” insisting education create “citizens who understand the forces at play in our own land and abroad, and who are concerned to do something about them. I feel sure that they can solve America’s problems and build a magnificent civilization…”[11] This was key to making “the American way work.”[12] A good citizen recognized national faults and worked to fix them. In the aforementioned textbook, Rugg connected his proposed right to a job with the American way. The Constitution vowed to “promote the general welfare” and the Declaration of Independence spoke of “Life, liberty, and the pursuit of happiness — these are inalienable rights! What can guarantee them more securely than to provide a job for everyone?”[13] George Shuster, president of Hunter College, wrote that Rugg’s textbooks pushed the idea that America has “changed and must necessarily go on changing…”[14] Reform was an American tradition. Progressive education made “young people interested in helping to make their world a better world,” which of course required understanding (That Men May Understand was Rugg’s book-length response to critics) and acknowledgement of social issues — like “economic problems” that might be addressed by government intervention.[15] Others agreed that introspection and criticism were healthy, such as a committee of school officials and citizens in Michigan that castigated textbook censorship, declaring, according to historian of education Jonathan Zimmerman, that “textbooks should examine ‘accomplishments and failures’ in American history, so that students would develop the analytical abilities that democratic citizenship demanded.”[16] 

To critics, Rugg’s content represented “Treason in the Textbooks,” to quote the headline of a 1940 article by journalist O. K. Armstrong.[17] Planning meant “strict government control of individual and group activities,” Armstrong wrote. Collectivism was the “bitter foe” of democracy, the birthmother of totalitarianism. The “insidious destruction of American ideals by way of the minds and hearts of American boys and girls” had to be stopped.[18] For many conservatives, good citizenship required an uncritical, flattering approach to American society and history. Zimmerman writes, “In the white-hot politics of the Cold War, the suggestion that America needed any reform was ‘subversive’” to some observers, aiding the communists and their cause.[19] Some California lawmakers explicitly called for “a constructive, positive approach” with a focus on the “good things” the United States had to offer.[20] The American Legion, a veterans’ organization and a leading crusader against subversive textbooks, published “A School Program for Positive Americanism” in its magazine, written by the superintendent of Chicago’s public schools in 1941.[21] “Sinister” ideas had to be purged from school materials and replaced with those that “inculcated” a “love of country” and a “loyal desire” to serve “the best interests of the community, State, and nation.”[22] (Note the contradictory tension between opposing collectivism and stressing that the individual must put community and country first, a common feature of this period, alongside condemnations of indoctrination by those intent on utilizing it for different purposes.[23]) The “teaching of patriotism” and “respect” for the American way of life was needed for “true American citizenship.”[24] Students must study and celebrate American institutions, heroes, founding documents, flags, patriotic songs, and so on.

This approach of course was already a big part of public education. The Chicago superintendent reported proudly that his commissions were finding no subversive materials in his schools, which somewhat undermined the threat and framed education as already serving this function (“We have found no material that…seeks to cast doubt upon the importance of the patriotism of our American heroes and their services to mankind”).[25] Indeed, widely used readers for young students produced in the 1930s featured patriotic songs, drawings of children carrying flags, and stories of dutiful and loyal Americans.[26] Using possession to indicate importance and stress obedience, one basic reader explained that a citizen is a “person who lives in a country and belongs to it.”[27] There were obligations of service in exchange for liberty. The United States “protects you and gives you many things to make you happy. But your country cannot be great and free and happy unless its boys and girls do their part,” do “what your country needs…”[28] Another book left the value and benefit of machines unquestioned, quite different from the concern over automation leaving workers unemployed,[29] though it should be noted these texts were for younger readers than Rugg’s Problems. The point is that patriotic texts were a major presence in public schools, alongside those that addressed the pain of the Depression and other social ills.

Clearly, the meaning of good citizenship tended to differ by political ideology, with staunch conservatives far less likely to tolerate questioning and dissent as a component of citizenship compared to staunch left-wingers, who saw no contradiction. But perhaps these are the extremes — whether most Americans of the period saw proper citizenship as incompatible with criticism is debatable. On the one hand, Rugg’s books were wiped off the face of American education.[30] Education historian Adam Laats notes that Rugg’s books sold 152,000 copies in 1940 but only 40,000 the next year, with many cities and school boards eliminating them.[31] The conservative “attacks took their toll.”[32] “Significant numbers of Americans” opposed subversive material and sought “to make schools and society more patriotic, more friendly to capitalism,” achieving real “success.”[33] However, Jonathan Zimmerman argues that while Rugg’s books were largely defeated, conservative activism failed to topple most other accused texts.[34] He writes that multitudes of citizens, veterans, school boards, businesses, and committees, as well as Congress and “almost every legislature” that took up the issue, refused to support content censorship.[35] Despite widespread opposition to communism and support for patriotism, a distinction was made, Zimmerman suggests, between the former and much critical material.[36] Perhaps a mix of nuanced thinking, disdain for censorship, fealty to familiar or beloved books, memories of the Depression, and other factors contributed. Laats suggests the Rugg controversy was more about the man than the books.[37] Indeed, far-left teachers may have had it worse than texts.[38] Zimmerman concludes: “By 1954, if not earlier, both the critics and the defenders of American textbooks declared that the campaign against the books had failed.”[39] Given the inconsistency of textbook fates, historians must continue to study the period and its controversy, seeking new ways to measure general American sentiment.



[1] Jonathan Zimmerman, Whose America?: Culture Wars in the Public Schools (Cambridge, MA: Harvard University Press, 2002). See chapter four.

[2] Ibid.

[3] Zimmerman, 83-84, and Adam Laats, The Other School Reformers: Conservative Activism in American Education (Cambridge, MA: Harvard University Press, 2015), chapter three.

[4] Harold Rugg, An Introduction to Problems of American Culture (Boston: Ginn and Co., 1931). See the introduction.

[5] Ibid., 14.

[6] Ibid., 181.

[7] Ibid., 185.

[8] Ibid., 195.

[9] Ibid., 217, 594, 595-598.

[10] Ibid., 3-4, 265, 596-597.

[11] “Rugg Defends His Textbooks, Long Attacked,” New York Times, April 5, 1940.

[12] Ibid.

[13] Rugg, 196.

[14] George Shuster, “Dr. Harold Rugg Replies to His Critics,” The New York Times Book Review, April 27, 1941.

[15] Ibid.

[16] Zimmerman, 95.

[17] O. K. Armstrong, “Treason in the Textbooks,” The American Legion Magazine 29 (September 1940): 8-9, 51, 70-72.

[18] Ibid., 72.

[19] Zimmerman, 85.

[20] Ibid.

[21] William Johnson, “A School Program for Positive Americanism,” The American Legion Magazine 31 (September 1941): 12-13, 50-52.

[22] Ibid., 12.

[23] Irene Corbally Kuhn, “Your Child is Their Target,” The American Legion Magazine 52 (June 1952): 18.

[24] Johnson, 13.

[25] Ibid., 13.

[26] “Little American Citizens,” William Elson and William Gray, Elson-Gray Basic Readers Book Four (Boston: Scott, Foresman, and Company, 1936).

[27] Ibid., 68.

[28] Ibid.

[29] “Workers and their Work,” William Elson and William Gray, Elson-Gray Basic Readers Book Five (Boston: Scott, Foresman, and Company, 1936), 276, 306.

[30] Zimmerman, 78-79.

[31] Laats, 75.

[32] Ibid., 75, 119-120.

[33] Ibid., 76, 121.

[34] Zimmerman, 79.

[35] Zimmerman, 101-103.

[36] Ibid., 101.

[37] Laats, 121.

[38] Zimmerman, 83.

[39] Ibid., 101.

Indoctrination and Knowledge: Native American Youth in the Federal Boarding Schools

In the late nineteenth century, the United States government funded and created boarding schools to purge Native American children of their tribal identities and cultures.[1] Such children, through enticement or force, were transported hundreds or even thousands of miles from their reservations and deposited at institutions like the Carlisle Indian Industrial School in Pennsylvania, the first of its kind (opened 1879). With the United States now spanning the continent, officials and advocates saw federal boarding schools as a method by which a rogue element within American borders – indigenous nations – could be eradicated, removed from their land, and absorbed into mainstream society through re-education. The mission, write historian Jacqueline Fear-Segal and sociologist Susan Rose, was “to impose ‘civilization’ through total immersion” and “prepare Native youth for assimilation and American citizenship.”[2] While this was the intention, it can be argued that students generally did not learn to be “Americans” as defined by white visionaries, but did see value in the skills and knowledge attained at these institutions, highlighting a tension between utility (intellectual and technological growth) and cultural preservation.

With Native American youth cleaved from their homes, families, and traditions, white educators implemented their curricula to “kill the Indian” and “save the man,” to quote Richard Henry Pratt, founder of the Carlisle school, in 1892.[3] We know from Pratt’s writings that teaching English, industriousness, and self-sufficiency was highly important. This would cure Native Americans’ “chronic condition of helplessness” and enable them to live and work alongside whites.[4] His school sent youth “out into our communities,” Pratt explained, “to show by their conduct and ability that the Indian is no different from the white or the colored.”[5] This is a reference to the “outing” program, which entailed sending students to work as farmhands, maids, and so on during the summers (children often did not go home for many years).[6] Students, Pratt declared, in “joining us and becoming part of the United States,” should also be made “loyal to the government… Carlisle has always planted treason to the tribe and loyalty to the nation at large.”[7] “Teaching American citizenship” was crucial.[8] White customs, habits, and life purposes beyond nationalism and the world of work should likewise be inculcated. Pratt uses the terms “civilize” and “assimilate,” processes he deemed necessary to end indigenous people’s (in his view environment-based, not innate) “savagery.”[9]

In 1890, the U.S. secretary of the interior handed down guidelines for indigenous boarding schools, stating Americanization required “training of the hand in useful industries; the development of the mind in independent and self-directing power of thought; the impartation of useful practical knowledge; the culture of the moral nature, and the formation of character.”[10] Students experienced a highly regimented environment — “military-style,” to quote Fear-Segal and Rose.[11] The secretary wrote that pupils should be forced to attend religious services and punished for using any language but English. “Grave violations of rules” resulted in “corporal punishment or imprisonment in the guardhouse.”[12] Donald Warren of Indiana University writes that the curriculum featured reading, writing, and speaking English, arithmetic, and U.S. history and government.[13] Schoolwork also included industrial training, farming, mechanics, housekeeping, singing, and mastery of instruments.[14] Proper personal care, hygiene, and dress (they wore uniforms) were important, as were manners and etiquette. “They should be taught the sports and games enjoyed by white youth,” the secretary continued, from baseball to marbles.[15] “The girls should be instructed in simple fancy work, knitting, netting, crocheting…”[16] Each boarding school was to display the American flag daily.

Primary sources left by Native Americans illuminate reactions, including what students rejected or took to heart. Zitkála-Šá of the Yankton Dakota, who as a child attended a missionary boarding school in Indiana and later taught at Carlisle, wrote of “unjustifiable frights and punishments” and “extreme indignities” like having her hair shorn, being tied down, teachers striking students, and incomprehensible rules.[17] Originally excited to journey east to attend school, once there her “spirit tore itself in struggling for its lost freedom,” and she “rebelled.”[18] She broke rules, ran and hid from teachers, did chores improperly, and tore frightening pictures of the devil in a Christian story book. When tuberculosis swept through the school, Zitkála-Šá grew suspicious of both the quality of care sick youths received and the religious rites (“superstitious ideas”) pressed upon the dying.[19] To Zitkála-Šá, the strict routine of this “civilizing machine” was a “harness” causing “pain.”[20] She was “actively testing the chains which tightly bound my individuality.”[21] After three years, she returned home to South Dakota and felt lost, in “chaos,” being “neither a wild Indian nor a tame one” after her experience.[22] She wept seeing young people on the reservation wearing white America’s clothing and speaking English, and wanted to burn her mother’s bible.

Resistance to assimilation, despair over cultural erosion, and crises of identity were common among Native American youth who attended federal boarding schools. Fear-Segal and Rose write that “some found [Carlisle] traumatic and begged to go home or ran away; others completed their Carlisle schooling but lived with stress and disturbance upon their return.”[23] Only seven percent of students graduated from this school; most were discharged. “The vast majority did not assimilate into mainstream society,” the scholars write, but returned to their nations, often feeling as Zitkála-Šá did.[24] The researchers cite the example of Plenty Horses of the Sicangu Lakota — “When I returned to my people, I was an outcast among them. I was no longer an Indian. I was not a white man…” — but also note that indoctrination could work, at least for a time, as with Sun Elk from the Taos Pueblo: “After a while we also began to say Indians were bad. We laughed at our own people and their blankets and cooking pots and sacred societies and dances.”[25] Native American children could be made ashamed of who they were. Nevertheless, with most returning to reservations, and with accounts of resistance and “feeling caught between two cultures,” one can posit that students largely did not learn to be so-called true Americans, who would leave their reservations and integrate with white society.[26]

Yet students recognized the advantages of skills and experience they gained at boarding schools. There was much use value in certain knowledge and practices – which may have made some lifestyle changes feel less threatening to the larger culture. Despite her painful experiences, Zitkála-Šá wrote that after graduating, “I was the proud owner of my first diploma.”[27] She then went to college, against her mother’s wishes. Zitkála-Šá, then, both lamented cultural erosion or transformation and continued participating in its mechanisms. Perhaps there was a distinction between entering the white world to be educated in new ways (more acceptable) and bringing white ways of living, speaking, and thinking back to the reservation (less acceptable). But Zitkála-Šá’s mother may reveal a different tension. She is deeply suspicious of whites, with their “lies” and violent conquests, but is “influenced…to take a farther step from her native way of living” by replacing her wigwam with a log home.[28] Perhaps the difference was in fact that certain things added to indigenous life were generally more palatable (knowledge, diplomas, forms of shelter) than others. Individuals would of course have different views on what was agreeable, with a mother pushing a bible and a daughter wanting to burn it. There was, Donald Warren writes, little “agreement on the need to choose between tribal and white cultures” – some were more open to mixing than others – but there was “growing acknowledgement that learning English and preparing for employment in the U.S. economy banked useful assets…”[29] Some pursuits and practices from white society were judged to be sensible, or even matters of survival — Zitkála-Šá “will need an education when she is grown,” her mother mused when deciding about Carlisle, “for then there will be fewer real Dakotas, and many more palefaces.”[30] The gun may be the best example of a novelty too practical to ignore. Ohíye S’a of the Santee Dakota wrote of hating the idea of wearing white America’s clothing, but this was long after he started using a gun![31] Usefulness may have impacted the degree to which practices and technology were seen as threats to tradition.

Letters, alumni surveys, and other primary documents suggest other students saw utility in education at white institutions. Vincent Natalish became a civil engineer in New York, took courses at Columbia and the Massachusetts Institute of Technology, and later sought to enroll his son at Carlisle.[32] Elizabeth Wind became a nurse in Wichita, Kansas, and also tried to send her boy to Carlisle.[33] Mary North, writing to the alumni association in 1912, praised “dear old Carlisle” which “taught us so many useful things [and] helped us so much in our living and working on our farm, which we love better than living any other place.”[34] Her family tried “to live like the good white people live.”[35] Martha Napawat reported to the school later in life that she wanted “to be a good example of Carlisle. You tell the white people that it does pay to educate the Indian… I am trying to keep a house like a white woman.”[36] “Great improvement in Indians” could be seen, for example the transition from teepees to houses.[37] Writings to school officials and former figures of power are open to questions of sincerity (did former students simply tell boarding schools what they wanted to hear?), but voluntarily sending one’s child to Carlisle is indicative of the perceived value of such an education, at minimum for survival in an increasingly white world, an idea scholars have touched upon.[38]

In sum, the experience of indigenous children in federal boarding schools was complex. Cultural erasure, oppression, trauma, resistance, interest in learning, and cultural adaptation all occurred together. Pratt’s mission to “release these people from their tribal relations,” “citizenizing and absorbing them” into the larger American society through education did not succeed. The most important thing students were to learn – that Native American societies had little value and should be abandoned – went largely unlearned. Still, white schools offered training and knowledge that students found useful and engaging; such learning was brought back to reservations, and to a lesser extent turned into careers beyond the reservations. Boarding schools, among much else, did change students and indigenous nations. The field should continue to refine its understanding of the degree of this change, and explore whether new ways of living were viewed with hostility in inverse correlation to utility, which could reveal a new layer of Native American agency.[39]



[1] Jacqueline Fear-Segal and Susan Rose, eds., Carlisle Indian Industrial School: Indigenous Histories, Memories, and Reclamations (Lincoln: University of Nebraska Press, 2016). See the introduction.

[2] Ibid.

[3] Richard Henry Pratt, Official Report of the Nineteenth Annual Conference of Charities and Correction (1892), 46–59. Reprinted in Richard H. Pratt, “The Advantages of Mingling Indians with Whites,” Americanizing the American Indians: Writings by the “Friends of the Indian” 1880–1900 (Cambridge, Mass: Harvard University Press, 1973), 260–271. Online at http://historymatters.gmu.edu/d/4929/.

[4] Ibid.

[5] Ibid.

[6] Fear-Segal and Rose, Carlisle.

[7] Pratt, Official Report.

[8] Ibid.

[9] Ibid.

[10] “Rules for Indian Schools,” U.S. Bureau of Indian Affairs, Annual Report of the Commissioner of Indian Affairs, 1890 (Washington, D.C., 1890), cxlvi, cl-clii. In Sol Cohen, ed., Education in the United States: A Documentary History (Westport, CT: Greenwood Publishing Group, 1977), 3:1756.

[11] Fear-Segal and Rose, Carlisle.

[12] “Rules for Indian Schools,” 1759.

[13] Donald Warren, “American Indian Histories as Education History,” History of Education Quarterly 54, no. 3 (2014): 263. http://www.jstor.org/stable/24482179.

[14] “Rules for Indian Schools,” 1757-1760.

[15] Ibid.

[16] Ibid.

[17] Zitkála-Šá, “The School Days of an Indian Girl,” American Indian Stories (Washington: Hayworth Publishing House, 1921), 47-80. Online at http://digital.library.upenn.edu/women/zitkala-sa/stories/school.html.

[18] Ibid.

[19] Ibid.

[20] Ibid.

[21] Ibid.

[22] Ibid.

[23] Fear-Segal and Rose, Carlisle.

[24] Ibid.

[25] Ibid.

[26] Fear-Segal and Rose, Carlisle; Pratt, Official Report.

[27] Zitkála-Šá, “School Days.”

[28] Zitkála-Šá, “Impressions of an Indian Childhood,” American Indian Stories (Washington: Hayworth Publishing House, 1921), 7-45. Online at http://digital.library.upenn.edu/women/zitkala-sa/stories/impressions.html.  

[29] Warren, 269.

[30] Zitkála-Šá, “Impressions.”

[31] Ohíye S’a (Charles Eastman), Indian Boyhood (New York: McClure, Philips, and Co., 1902). Online at http://www.gutenberg.org/files/337/337-h/337-h.htm#link2H_4_0031. See chapter 12.

[32] Superintendent to Vincent Natalish, December 17, 1915, and Vincent Natalish to Oscar Tipps, December 14, 1915, in “Vincent Natalish (Nah-tail-eh) Student File,” Carlisle Indian School Digital Resource Center, accessed March 1, 2024. https://carlisleindian.dickinson.edu/student_files/vincent-natalish-nah-tail-eh-student-file. See pages 21-25 of the PDF.

[33] Superintendent to Mrs. Paul B. Diven, January 3, 1911, and Betty W. Diven to Moses Friedman, January 26, 1914, in “Elizabeth Wind (Ro-nea-we-ia) Student File,” Carlisle Indian School Digital Resource Center, accessed March 1, 2024. https://carlisleindian.dickinson.edu/student_files/elizabeth-wind-ro-nea-we-ia-student-file. See pages 10 and 17 of the PDF.

[34] Mary L. N. Tasso to Officers of the Alumni Association, February 1, 1912, in “Mary North Student File,” Carlisle Indian School Digital Resource Center, accessed March 1, 2024. https://carlisleindian.dickinson.edu/student_files/mary-north-student-file. See pages 10-11 of the PDF.

[35] Ibid.

[36] Martha Napawat Thomas Returned Student Survey, in “Martha Napawat Student File,” Carlisle Indian School Digital Resource Center, accessed March 1, 2024. https://carlisleindian.dickinson.edu/student_files/martha-napawat-student-file. See pages 6-7 of the PDF.

[37] Ibid.

[38] For more on survival and education, see Bryan McKinley Jones Brayboy, “Culture, Place, and Power: Engaging the Histories and Possibilities of American Indian Education,” History of Education Quarterly 54, no. 3 (2014): 395–402. http://www.jstor.org/stable/24482187.

[39] Warren’s “confining binary” of “victims or agents” may be further eroded if perceived usefulness impacted decisions about encroaching white cultural elements. See Warren, 261.

Joe Biden, With Enthusiasm

In November I’ll be voting for Joe Biden with some enthusiasm. From the Leftist perspective, there are things to criticize (Israel, immigrant detention, typical disappointingly liberal stuff) but also moments of pleasant surprise (Biden’s push to forgive student debt, ending the war in Afghanistan, marijuana pardons, big money to families under the American Rescue Plan and Child Tax Credit, huge infrastructure and climate investments). Good policies — and despite thus far fruitless bribery investigations by Republicans, Biden seems like a decent enough person, minus the creepy uncle handsiness around women and occasional lie or embellishment.

I’ve been somewhat surprised at Biden’s low approval rating. (And somewhat pleased. The last thing you want is Democratic voters and officials comfortable, confident Biden will win. You want them in a panic, to ensure turnout.) To me he seems relatively inoffensive, a job done just fine. A conversation revolves around his age and faculties, but I can’t take it seriously. If he jumbles words and gets momentarily confused like a typical grandpa, that does not automatically mean the careful decisions he makes (with his team and advisors, mind you) are compromised or faulty, nor does it change the nature of his person or politics.

It would have been delightful if Biden had blown everything up and stepped aside for someone young, more progressive, a woman or person of color, to really excite the base. Something fresh, without question creating better odds of victory. But he’s our man, so very well. I cannot get worked up enough to reject or disapprove of someone so vanilla and “just fine” and solidly adequate.

Trump, of course, is an awful man with extremist policies, a demagogue whose pathological lying, authoritarian flair, and general imbecility threaten the democratic functioning of society. We’ve enjoyed seeing Trump and his followers arrested and tried for their crimes. All that goes away when a Republican returns to the White House. Trump will win a stay of prosecution, order the Justice Department to drop its charges, try to pardon himself, or do the Two-Step Shuffle (Trump resigns as president, his vice president ascends and pardons him and makes him the new VP and then resigns, returning Trump to the presidency). Pardons will be issued for accomplices and January 6 rioters. No one will be held accountable for anything. The rightwing extremism, madness, and undermining of the rule of law and democracy will resume. These are the stakes.

On November 5, do your fucking job.


Fascinating Moments in Early U.S. History (Part 2: A New Century and Andrew Jackson)

We return with a second installment of Fascinating Moments in Early U.S. History. See the first article here.

When two Hamiltons argued you shouldn’t be punished for saying true things (plus, you should have the right to a jury)

We must rewind a little for this one. In the John Peter Zenger trial of 1735, before the creation of the United States, Philadelphia attorney Andrew Hamilton argued that there needed to be a higher bar in British law to mark written speech as “seditious libel” — it should not solely stem from whether a public official’s reputation was damaged. Yes, you could get in trouble just for this! Zenger, a publisher, was charged with seditious libel after his newspaper criticized the royal governor of New York, who felt his reputation was impugned.

Hamilton argued that it mattered whether the statement was true or false, and what the author’s intent was. “The Words themselves must be libelous,” Hamilton said in court, “that is, false, scandalous, and seditious or else we are not guilty.” A guilty verdict, he continued, would imply that the words Zenger published were false, when they were in fact “notoriously known to be true.” After reaffirming one’s right to criticize the government, Hamilton said that “it is Truth alone which can excuse or justify any Man for complaining of a bad Administration” but “nothing ought to excuse a Man who raises a false Charge…” If the law treated these things the same, then it was the “bare Printing and Publishing a Paper” that was libelous, the mere release of “Informations.” People deserved the right to print true things, even if someone’s reputation took a hit.

Hamilton further argued that juries, not judges, should decide whether something was libelous, not just who published it, saying that “leaving it to the Judgment of the Court, whether the Words are libelous or not, in Effect renders Juries useless.” Yes, in this era the judges decided what was libelous in a given case and the juries merely decided who printed it! Hamilton questioned why juries were neutered for this particular crime. “I cannot see, why in our Case the Jury have not at least as good a Right to say, whether our News Papers are a Libel, or no Libel as another Jury has to say, whether killing of a Man is Murder or Manslaughter…” Decisionmaking power had to be taken from the judge and distributed to the people of the jury, for the sake of liberty and proper trials.

In the end, Hamilton essentially asked the jury to engage in jury nullification — even though a judge found Zenger’s published materials to be seditious libel, and Zenger confessed to publishing them, the jury found Zenger not guilty of publication and set him free.

But the law went unchanged, so everything had to happen again later, after the American founding. In the Harry Croswell case of 1804 in New York, the well-known Alexander Hamilton drew on Andrew Hamilton's arguments. When a jury found Croswell, another printer and journalist, guilty of publishing what a judge ruled to be seditious libel against President Jefferson and others, Alexander Hamilton argued on appeal that what had been written was true. Allowing truth to be published without punishment, regardless of reputational harm to the targeted politicians, was the only "way to preserve liberty, and bring down a tyrannical faction. If this right was not permitted to exist in vigor and in exercise, good men would become silent; corruption and tyranny would go on, step by step…" Of course, Hamilton added, it would not do to have "a press wholly without control" — those who published falsities should still face consequences.

Hamilton insisted that juries decide these cases, for the same reasons one would support political democracy. It spread out power. A "fluctuating body…selected by lot" was safer for liberty and justice than a "permanent body of magistrates" who were part of the governmental system, would be influenced by the opinions of politicians, and, though this is at best only implied, may even have personal incentives to keep seditious libel laws more extreme. After all, one could write against a judge as easily as a president or governor. Judges held positions of power, just like those being attacked in the papers. "Men are not to be implicitly trusted, in elevated stations," Hamilton said. "The experience of mankind teaches us, that persons have often arrived at power by means of flattery and hypocrisy; but instead of continuing humble lovers of the people, have changed into their most deadly persecutors." This line may have referred simply to politicians, but came directly after comments on judges. It cleverly justifies both free criticism in the press and removing decisionmaking power from judges. The court might "make a libel of any writing whatsoever" if the judicial system continued, in Andrew Hamilton's words from the 1730s, to "render nugatory the function of the jury."

Alexander Hamilton also praised the infamous Sedition Act of 1798, a federal seditious libel law that had expired in 1801. The act allowed Americans to be charged for defaming public officials or making rebellious, critical statements against the federal government, which Madison and the Republicans labeled a serious betrayal of the First Amendment (though the courts ruled it constitutional). But it also vested power in juries and allowed truth as a defense. If you could prove what you said was true, you could be declared not guilty. Therefore Hamilton called the act "honorable, a worthy and glorious effort in favor of public liberty." The noble principles of this federal law should thus be applied to New York state law.

The appellate judges deciding the case deadlocked; Croswell’s conviction stood. But in 1805, the New York legislature modified state seditious libel laws, building in the Hamiltons’ reforms.

When Napoleon’s defeat indirectly caused a depression in the United States

The Panic of 1819, a financial crisis that produced a massive economic downturn in the United States, had myriad causes. But the fall of Napoleon and the recovery of Europe played a role. After the Napoleonic wars, Europe's agricultural production increased and American farms had a more difficult time selling their goods. Prices fell, along with profits. Farmers therefore struggled to repay loans to financial institutions, which had been extending credit far and wide during the earlier economic boom. When farmers could not pay their debts, the larger banks turned to the institutions that owed them money: smaller banks. They demanded that these banks repay their loans, which the smaller banks could not do without payments from individual borrowers. Banks were under considerable strain, lacking individual repayments while their own debts were being called in — the national banking system, for instance, was still trying to pay off the Louisiana Purchase. Financial institutions began to go under, which continued the cascade. Fearful of losing their money if their bank was the next to go, Americans rushed to make withdrawals, cleaning out their banks and causing them to fail. The Panic was a significant event in early nineteenth-century America, putting an end to the Era of Good Feelings — a time of economic and imperial growth — and plunging the country into an economic depression that lasted several years.

When states bypassed the free market and supercharged the economy

In the early nineteenth century, Democrats at the state level were interested in using government to create more prosperous economies — "state mercantilism." This was in contrast to the federal level, where Democrats alongside Republicans were largely for free markets, free trade, and other anti-mercantilist policies. One aspect of state mercantilism was large construction projects that would increase and quicken the flow of goods within the United States, creating more profits for businesses, farmers, shipping companies, and so forth. When federal funding for such projects (such as those proposed by Henry Clay) failed, because the Democratic Congress believed such undertakings were unconstitutional for the national government, the states moved forward on their own.

One major state investment was in railroads. Maryland, pushed by business interests in Baltimore, chartered the first American railroad firm, the Baltimore and Ohio Railroad Company. The B&O went west from Baltimore to the Ohio River and later to St. Louis. To get railroad projects off the ground, states would give land to railroad companies, at times seizing it from residents using eminent domain. The companies would then construct the lines on free land and sell off parts of the land to American settlers and businesses. In addition to offering such sizable subsidies, states allowed and supported monopolies — one rail company would own the line being constructed across a state. The rail companies enjoyed no competition, set high rates, and raked in the revenue. They also sold company shares. With state aid, major railroads snaked from Baltimore, Philadelphia, and New York out across the U.S. and toward the Mississippi.

Canals were another major state investment. If connections could be made between water systems, goods could reach their destinations faster, generating higher company profits. So states funded the construction of canals; Pennsylvania, for instance, connected the Susquehanna River with the Ohio. The Erie Canal is possibly the most well-known and most successful project, with New York state, backed by New York business interests, linking the Mohawk River to Lake Erie. Goods could travel from the Atlantic Ocean to the Great Lakes; the Erie Canal created a major trade network between the Middle Region of the U.S. and the Old Northwest. As with railroads, canal projects hugely benefited the economies of American states, as well as that of the nation as a whole.

When the Shawnee leader castigated dependence, cultural corruption, and racial mixing

In the early nineteenth century, Shawnee leader Tenskwatawa, "The Prophet," asserted that whites had corrupted Native American societies. American and European goods and technology had replaced traditional practices and eroded self-sufficiency. "Our men forgot how to hunt without noisy guns. Our women don't want to make fire without steel…" Indigenous people, Tenskwatawa declared, had become dependent on the enemy: "now a People who never had to beg for anything must beg for everything!" Native Americans had once been "pure" and "strong." Now they were weak and defiled. Whiskey had consumed them, and women had married white men to create "half-breeds." It was not enough to protect indigenous land from further white encroachment (encroachment he in fact blamed on cultural exchange). Tribes had to return to the "old ways," their cultures purified and people made strong again. The Prophet demanded that Native women abandon white husbands and their children to preserve racial homogeneity, that alcohol and foods introduced or produced by whites be rejected, and that the people return to traditional clothing, hunting, tools, and so on (although guns were too useful for self-defense against whites, and should be kept). Commerce and interaction would cease entirely. Society could then be sacred and self-sufficient once more, as willed by the Creator.

When Andrew Jackson framed Indian Removal as a good thing for Native Americans, a humanitarian act

The Indian Removal Act of 1830 seized Native American land in existing states east of the Mississippi and forcibly moved indigenous people westward into American territories. It was proposed by President Andrew Jackson and passed by his party, the Democrats. Tribes that were rounded up and marched into what would become Oklahoma and Kansas (usually worse land than what they were leaving) included the Seminoles, Cherokee, Creek, Choctaw, Chickasaw, Sac, and Fox. All this represented a serious betrayal of American promises and treaty commitments, and a major crime against humanity. Native peoples had been told they could remain in their lands if they engaged in agriculture and in other ways adapted to American society. The forced marches, which came after plenty of violence, were a humanitarian disaster. Do you recall the infamous "Trail of Tears"? Native Americans perished from the cold and from starvation. The Indian Removal Act is therefore remembered by most as a devastating and immoral policy. But at the time, it was lifted up as a wonderful thing for Native Americans. Such framing was necessary to national identity and self-perception.

The framing of these events is different today, because there is a stronger need to justify oppression when it's taking place; two centuries later, the narrative can shift a bit. When you set about to crush someone, you justify what you are doing as morally right. So Jackson, in his December 6, 1830 speech to Congress, declared that removal "will separate the Indians from immediate contact with settlements of whites; free them from the power of the States; enable them to pursue happiness in their own way and under their own rude institutions; will retard the progress of decay, which is lessening their numbers, and perhaps" even help them become "civilized." Expulsion from their homes was the best thing for Native Americans. African Americans received similar ideological treatment in the United States: slavery was good for blacks (who were too stupid and uncivilized to care for themselves), giving them shelter, food, clothing, Christianity, and so on. Oppressors have to rationalize, to frame their deeds as humanitarian, not destructive. The descendants of oppressors, looking at the past, judge things slightly differently. While you can still find some who try to stress the supposed benefits of slavery or Indian Removal to their victims, in order to protect the image and moral character of the United States, most Americans would probably call these things wrong, tragic, and so on. At the least, they were bad for Native Americans and others. That is a different framing than Jackson's.

However, the full modern framing is still problematic and has not left behind the nationalistic ideology of the 1820s and earlier. Americans may reflexively call the mistreatment of native nations wrong or harmful, but there are two large caveats to this, which impede any further distancing from Jackson. First, the wrongs of Indian Removal, slavery, and so on are likely to be considered “mistakes.” The ideological position remains that the United States — its presidents, government, and people — is fundamentally good. In the past mistakes were made, but America always had good intentions (highly Jacksonian). This is rather different than acknowledging the self-interest, greed, conscious betrayal, racism, cruelty, violence, and so on necessary to devise and carry out Indian Removal. The idea that the United States might have had bad intentions is unthinkable for many people. Messing up while still being inherently good is a more palatable, patriotic image than committing premeditated crimes as a result of being inherently flawed, and many cling to it. This may be changing as leftwing sentiments and criticism grow more popular, but it holds true.

The second asterisk is that while Indian Removal would likely be labeled wrong or hurtful by most Americans today, it may have more of the flavor of a "necessary evil." It is very difficult for citizens to imagine a United States that doesn't look precisely as it does today. The U.S. was meant to have its present borders and power — Manifest Destiny is still very much alive. The Trail of Tears and indigenous expulsion may have been bad, but they had to happen for America to fully possess the lands east of the Mississippi. If explorers and colonists hadn't beaten back native populations, the U.S. may not have existed; if Americans hadn't done the same, the U.S. would not have fulfilled its destiny. History had to unfold as it did, with all its awfulness, so the U.S. could grow larger, more powerful, and be the greatest nation in the world. It would be highly interesting to conduct a poll asking whether one would prefer a smaller, less influential, less powerful United States that hadn't conducted Indian Removal, slavery, or other acts…or prefer reality, to have everything play out as it did to ensure American continental and global dominance. Many would choose the latter, making one question how wrong such things are actually judged to be, and highlighting who and what really matters.


“You Don’t Believe Women Should Have the Right to Vote?”

It was this June that I first learned I had a friend — of nearly twenty years — who no longer believed American women should have the right to vote. Nor should they be tolerated as pilots, pastors, or other professionals. Such arrangements were against the Law of God and women’s nature. After all, “the head of every man is Christ and the head of the woman is man” (1 Cor. 11:3), women are not to speak in church (1 Cor. 14:34-35), wives must submit to husbands (Col. 3:18), women are too emotional for some tasks, and so on.

Any hope that he was simply joking to rile me — we always debate politics and religion, a sparring between an atheist on the Left and a religious conservative — drained, like the blood from my face, when he called a waitress over to explain his views to her. I watched as Stage 3 was reached: from a private belief one would never admit, to something you'd perhaps whisper to a friend, to something you say freely to a stranger, directly to the face of a person you would oppress. I would take away your equal rights if given the chance.

There’s a flashback in the third episode of The Handmaid’s Tale — to before rightwing fundamentalists take over the United States, establish biblical law, and obliterate women’s rights — where the female protagonist is in a coffee shop and is startled when a man eyes her and says the quiet part out loud. Horrific thoughts became horrific words, which later became horrific actions, the final stages. I thought about that scene for a long time after leaving that Kansas City bar, having suddenly lived in some version of it.

Equality, freedom, decency, and democracy, I tried to explain, require extending to others the rights you want for yourself. If a man wants to vote, let him favor the same for women. If a Christian or straight person wants to marry or adopt or be served at establishments or not be fired for who they are, extend this to gays. This is a big, diverse society where not everyone is Christian, I tried to explain. There are people of other faiths and nonbelievers. Laws should not be based on Christian doctrine because this country should be for all people, not just Christians. Principles so morally obvious, yet completely impotent in the face of fundamentalist faith.

Equality, freedom, decency, and democracy must simply be sacrificed on the altar of God. His decree is more important than such things. Who cares who's crushed, if God wills it? Islamic extremists operate under the same rules. Not until 2015 could women in Saudi Arabia vote and run for office in local elections. When the Taliban retook Afghanistan in 2021, they barred women from most jobs and schooling, and established an all-male government. "Men are in charge of women," after all, says Qur'an 4:34. Fundamentalist Islam and fundamentalist Christianity have other obvious similarities as well, such as the oppression of gays and restriction of free, blasphemous speech (think of the Christians pushing for book bans of anything LGBTQ- or witchcraft-related).

Islamic theocracies, the Jewish state of Israel, Christian Europe for fifteen hundred years… Oppression is the natural outcome of religious states, because texts from Iron Age desert tribes call for much oppression. One wonders if slavery will be permitted as well. The New Testament also demands slaves submit to their masters, even harsh ones (Ephesians 6:5, 1 Peter 2:18, Titus 2:9-10). In Luke 12:47-48, Jesus uses the "lashing" and "flogging" of a "slave" (NASB language) to make a point in one of his parables. Why would restoring women's subservience be ideal in a Christian nation, but not slavery? What's the difference? Clearly, God wills it. (Whether because he did not want to be accused of picking and choosing what to follow in the New Testament or because he sincerely believed an even more horrific thing, my friend told me that a gentle form of slavery would be acceptable, to replace the welfare state. Again, enslaving Christians or taking away their right to vote would be, one assumes, immoral and unacceptable.)

The encounter shook me in that surreal way that has grown familiar in recent times. A few years ago, an acquaintance of mine, seemingly a normal human being, turned out to be a QAnon nut. Remember how the Democrats were running a global pedophile ring out of a pizza shop? As with conspiracy theorists, you know people who oppose women's political rights exist, vaguely, out there somewhere: Ann Coulter, Candace Owens, the #RepealThe19th Twitter posters, and so on. Then the moment of horror: No, they're your friends and family.

I felt a rare pang of despair. That such poison would spread on the Right. That the excesses of the Left may bear some responsibility, extremes stoking and worsening each other, an ideological Newtonian Third Law. Yet most Americans — and most Christians — would be aghast at the idea of abolishing women's voting or professional rights, if not other things. And despite many recent setbacks, this is an increasingly liberal, secular society. That in itself may evoke a backlash, but it is a trend likely to continue.

We’ve seen recently how democracy survives only if people care more about democracy than remaining in office, than power. Equality and freedom survive only if we care more about them than things like the awful edicts of ancient holy books.


Fascinating Moments in Early U.S. History (Part 1: The Revolutionary Era)

Surprising ideas and events abound when studying the American war for independence and the early republic. Let’s take a look!

When Britain’s moves against slavery pushed American colonists to support independence

In the mid-eighteenth century, abolitionism stirred in the American colonies among religious sects such as the pacifistic Quakers. In Britain, activists and politicians were at work as well — to end the slave trade and what little slavery there was in Britain itself — and significant developments unfolded that impacted America's coming revolution and later political development. In 1772, the British courts handed down a ruling that changed the practice of slavery in the motherland and worried Americans invested in slavery. It was determined that James Somerset, a black slave who had been brought to Britain and escaped, was free, as Britain itself had no positive laws establishing and protecting slavery. Lord Mansfield, issuing the decision, threw out the old practice of respecting colonial laws when it came to this issue. He also called slavery "odious."

To American slave-owners, it appeared Britain, now essentially free soil, was turning away from slavery. This caused much concern. The Somerset Case signaled that British courts took upon themselves the power to end slavery — if it could be ended in the motherland, it could be ended throughout the empire. Colonial law didn't matter, British law did — and British law did not uphold slavery. More evidence of this appeared when the Earl of Dunmore declared during the American Revolution that any American slave in Virginia who escaped and came to fight for Britain would be freed. Britain made this official policy throughout the colonies. Large numbers of slaves fled their American captors. South Carolina and Georgia lost an estimated one-third of their slaves.

Concerns over protecting slavery played a role, therefore, in American views toward the revolution and the new government they would establish. Southern states like Virginia, Maryland, South Carolina, and Georgia saw increasing support for the fight for independence. Slavery had to be preserved. America would seek compensation from Britain for lost slaves after the war — unsuccessfully. The Patriots were sure, however, to work relevant safeguards into constitutional law. The Somerset ruling had established for Britain that a slave going from one part of the empire to another could find freedom. So the U.S. Constitution blocked this. Article IV, Section 2 reads: “No Person held to Service or Labor in one State, under the Laws thereof, escaping into another, shall, in Consequence of any Law or Regulation therein, be discharged from such Service or Labor…” Other elements solidified slavery as established law: the slave trade was preserved for decades, to allow slave-owners time to import more slaves after losing so many during the war (and in response to abolitionist efforts to end the slave trade), and congressional representation would include the counting of slaves as three-fifths of a person. Slave-owners would not make the same mistake in U.S. law that had been made in British law.

When Thomas Jefferson and James Madison plagiarized George Mason

The Declaration of Independence and the Constitution contain language and principles that echo George Mason’s 1776 Virginia Declaration of Rights. The Virginia document begins by stating “all men are by nature equally free” and possess “inherent rights”: “the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness…” Only slight changes — some to add flourish — would be adopted for the opening lines of the Declaration of Independence. Power, Mason continues, is derived from the people; politicians are to be the “servants” of the people, “at all times amenable to them,” a slightly more radical statement than the “consent of the governed” line employed by Jefferson, but in the same spirit. Section 3 makes clear that proper government is to secure the safety and happiness of its citizens, who have the right to alter or abolish it for failing to do so. 

Section 8 establishes for Virginians the right to a speedy trial before an impartial jury, similar to the later Amendments V and VI of the Constitution. Further, no accused individual will be forced to testify. The Bill of Rights declares that property cannot be seized without just compensation, whereas the earlier Virginia Declaration makes no mention of compensation, only requiring an act by the legislature. Section 9 — "That excessive bail ought not to be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted" — is copied directly in Amendment VIII. Both documents condemn searches and seizures without specific warrants and firm evidence. Freedom of the press and religion, and the right to a well-regulated militia, are codified in both. Finally, the Virginia Declaration insists that the executive and legislative bodies must "be separate and distinct from the judiciary," touching lightly upon the separation of powers that is implied, but not declared, in the first few articles of the Constitution.

When the states saw themselves as thirteen sovereign republics

Article II of the Articles of Confederation (America’s first try at a constitution) stressed that “Each state retains its sovereignty, freedom, and independence…” Any “power” or “right” not explicitly granted to the federal government — and there were few in this short document — belonged to the states. Outside of war, treaties, coinage, trade, and a few other purviews, Congress could do little in terms of national policy; state legislatures had the power to do what they liked. In later articles, the union was framed as a “league of friendship” for mutual defense and benefit; they were “binding themselves to assist each other” in the event of an attack from foreign powers. Further, citizens were assured free entry and exit from each state, something you might see in treaties between sovereign nations (the European Union comes to mind). In Article VI, each state is instructed to maintain a militia — rather than the central government operating a military force (similarly, states would levy taxes; Congress could not). Finally, note that in this document the “United States” are plural, rather than the modern singular; i.e. “each of the United States…” Clearly, this new system of government was viewed as a virtual alliance of independent powers.

When Anti-Federalists were idiots and boycotted the Constitutional Convention

In 1787, Federalists arranged a convention in Philadelphia to reform the Articles of Confederation, which they saw as giving too much power to individual states, leading to harmful policies of various sorts: the continued confiscation of Loyalist property, blocking Loyalists from seeking reparations in court, inflating the money supply, and so on. Anti-Federalists, responsible for these sorts of policies and composed of the more radical Patriots of the revolutionary era such as George Clinton, Sam Adams, John Hancock, and Patrick Henry, opposed the types of reforms that Federalists envisioned, which would force the states to submit to the authority of a national legislature — the states would no longer be able to do as they pleased. The Anti-Federalists, seeing a strong central government as a betrayal of the revolution, chose to boycott the Philadelphia convention. Regardless of what the convention decided to do to the Articles, the changes would need to be approved by state legislatures, and the Anti-Federalists were confident this would not come to pass.

The boycott would mean an increasing loss of control for the Anti-Federalists. Their majority could have blocked the convention, or could have attended the convention and steered the course of events. They likely could have saved the Articles and tightly limited the scope of reforms. Instead, they gambled on the state legislatures and lost. The Federalists were able to design a new government without interference, and were better organized to begin working in the state legislatures for ratification. The Anti-Federalists played catch-up and made various additional mistakes. The Federalists were able to push their new Constitution through the state legislatures — the line of defense the Anti-Federalists had relied on failed.

When the Founding Fathers saw Big Government as a vital check on state injustices

With states continuing to seize Loyalist property, block British creditors from collecting American debts, and mishandle the money supply, Madison sought, with a new constitution, a federal check on state power.

In Vices of the Political System of the United States (1787), he pointed to the dangers of majority rule, arguing that representatives were more often driven by “ambition” and “personal interest” than the “public good.” Such officials banded together, at times fooling voters and honest politicians by framing their own interests as the common good, resulting in the passage of unjust laws. However, “a still more fatal” flaw of democracy was that clashing interests were rarely balanced affairs. The poor vastly outnumbered the rich, for instance, a major problem (see How the Founding Fathers Protected Their Own Wealth and Power). “All civilized societies are divided into different interests and factions, as they happen to be creditors or debtors — Rich or poor — husbandmen, merchants or manufacturers — members of different religious sects — followers of different political leaders — inhabitants of different districts — owners of different kinds of property &c &c. In republican Government the majority however composed, ultimately give the law.” The minority could thus be crushed. Here Madison’s class concerns over property confiscation, breaking contracts with creditors, and so on are made clear, alongside his traditional advocacy for freedom from religion. 

His solution was “an enlargement of the [decisionmaking] sphere.” Taking power up to the federal level would mean more public officials involved in policy. A misguided “passion is less apt to be felt and the requisite combinations less easy to be formed by a great than by a small number.” A United States Congress with real power would have many more members than a state legislature, and its members would be more ideologically and geographically diverse. There would exist “a greater variety of interests, of pursuits, of passions, which check each other.” When states stepped out of line — when Anti-Federalists passed injustices — representatives from other states in the central Congress could restrain them. A federal government could “control one part of the Society from invading the rights of another.” Madison framed this as the establishment of neutrality that would protect private rights and minority rights. He acknowledged ambition and special interests would be just as powerful a force in a United States Congress, but establishing such a legislative body was the only way to prevent the abuses of the states. The states would have to regulate each other at a higher level of government.

The federal government, Madison noted, would at the same time be “sufficiently controlled itself,” as it was unlikely enough states would establish “an interest adverse to that of the whole Society.”

Later, the Virginia Plan, Madison's draft for a new constitution at the Philadelphia Convention in 1787, was predictably antithetical to the Articles of Confederation and Anti-Federalist ideology. Under the Articles, states had broad power to do as they pleased. Congress had supremacy in only a handful of policy areas, and could raise no taxes to support its legislation. Further, this was a government without a chief executive or federal judges. The Virginia Plan, which was not enacted in full but served as a foundation to commence design of a new government, greatly expanded the power of Congress. Congress would be able to pass laws that the states were bound to follow; it would be able to veto state legislation: "Resolved that each branch ought to possess the right of originating Acts… to legislate in all cases to which the separate States are incompetent, or in which the harmony of the United States may be interrupted by the exercise of individual Legislation; to negative all laws passed by the several States…" Representation in Congress would be proportional to population, rather than each state receiving the same number of members — another idea distasteful to Anti-Federalists. The plan further established executive and judicial branches, other bodies of power over the states. Such top-down designs would cause much consternation among the Anti-Federalists and other supporters of the Articles.

When George Clinton insisted the U.S. was too big and diverse for democracy to work

Founding Father George Clinton, Anti-Federalist New York governor and future vice president, writing as “Cato” in the New York Journal on October 25, 1787, argued that the states were too different for a federal government to properly function. Given the “dissimilitude of interest, morals, and politics” inherent across such a wide geographical area, federalism “can never form a perfect union, establish justice, insure domestic tranquility, promote the general welfare, and secure the blessings of liberty…” Citing Montesquieu, Clinton insists that the “public good” is incomprehensible in a larger republic, with many competing interests — what’s good for some is disastrous for others. Further, a national legislature would invest too much power in each member, the power over too many ordinary citizens and too vast a region, which would go to members’ heads: “there are too great deposits to trust in the hands of a single subject, an ambitious person soon becomes sensible that he may be happy, great, and glorious by oppressing his fellow citizens…” And bigger nations create richer men, who are more self-serving leaders. 

Clinton goes on to posit that some of the states themselves are already too big for ideal self-government. If state legislatures and governors were having trouble holding their states together, what hope did a federal government have keeping the states unified? Massachusetts was experiencing insurgency and threats of secession from its District of Maine. In a similar manner, the law under federalism would be "too feeble" to actually work; there would need to be a standing national army, an old fear of the American Patriots after their experience with Britain. Force would be needed to enact and enforce federal law and quell rebellions and secessions against it.

Clinton hits upon several truths and keen insights, but offers a theory of democracy that is not fully fleshed out. Smaller areas may indeed feature more individuals of similar backgrounds, lifestyles, and ideologies. When he writes that “the strongest principle of union resides within our domestic walls,” we’re in the realm of truism. Of course more similar people will be more united. But to seek the greatest unity of interests is to slowly abandon the concepts of democracy and nationhood altogether. Clinton insists that federalism would feature too much division, and then sees that states, rife with division themselves, should be broken up into smaller political bodies as well: “The extent of many of the states of the Union, is at this time almost too great for the superintendence of a republican form of government, and must one day or other revolve into more vigorous ones, or by separation be reduced into smaller and more useful, as well as moderate ones.” More states, smaller states. But this dissection could continue. A town or city may be more united than the entirety of a state. Did not New York City threaten to leave New York if the state did not ratify the Constitution? But even then the quest for likemindedness doesn’t stop. Clinton brought up Athens as an example of democracy working best small-scale. But Athens had its rich and poor, its many contradictory interests. Should democracy only be tolerated on a scale smaller than a city? Like in a poor neighborhood? The point is that at any level of governance, divergent interests, morals, and lifestyles exist. There may be more cohesion and similarities on many fronts, but division is unavoidable. Clinton attempts to justify a rejection of federalism on the grounds of regional and constituent dissimilitude, but that could justify the termination of democracy anywhere, at any level. It makes one wonder how nations, states, cities, and more can be justified — must they all be broken up?

Alternatively, if one accepts that democracy entails division (with every vote, between a minority and majority) and factionalism and competing visions of the common good, then it's easier to notice that democracies at higher levels, such as the "consolidated republican form of government" proposed by the Constitution, can be safeguards of liberties as much as dangers to them. By seeking difference and an "unkindred legislature," by expanding the sphere of contradictory interests, one has the chance to root out tyranny in every state, not just your own. Madison and the Federalists understood this — letting states do whatever they liked was a recipe for oppression by itself. Clinton brings up the South, where "wealth is rapidly acquired" and there existed a "passion for aristocratic distinction," where "slavery is encouraged, and liberty of course less respected and protected…" He compares this to the North, "where freedom, independence, industry, equality and frugality are natural…" This feels prescient, coming right after Clinton's discussion of insurrection and secession. The United States is too diverse and different; it will tear itself apart. Nevertheless, Clinton would rather leave a place "where slavery is encouraged, and liberty of course less respected" to its own designs than use federalism to ensure higher principles are followed in all states. Clinton complains of oppression, but won't do anything about it — thinking only of how, in a powerful Congress, the South could infect the North, not how the North could have a positive influence on the South. Democracy is a messy business. It can bring abuses and tyranny — or their opposites.

Clinton’s other points — that decisionmaking power over an entire nation is more corrupting than decisionmaking power over a state; that larger nations create richer men; that richer men are more corrupt — go unsupported. They may be true, they may be fictions. But his suggestion that a national army would be necessary to put down insurrections and violations of federal law again rings true to the modern ear. The states themselves can be bound to enforce federal law, through militias or police and guardsmen; a national military can be banned from deploying on U.S. soil under most circumstances. But at some point all that can fail — states can rebel, and the forces of other states or a national army must step in. Higher-level militaries are then like higher-level democracies. They create the potential for tyranny for all, but also the potential to preserve and expand liberty for all. Again Clinton only acknowledges the potential for harm — more nuance, a holistic view, is needed.

When people wanted to ratify the Constitution before finishing it, to Patrick Henry’s horror

The Constitution was pushed through the states only with a promise. If you pass this, Federalists assured the Anti-Federalists, a bill of rights will come later. Patrick Henry insisted, in a speech in Richmond on June 24, 1788, that the Constitution be amended before Virginia ratified it, not after. He saw approval on condition of amendment as a dangerous idea: "Evils admitted, in order to be removed subsequently, and tyranny submitted to, in order to be excluded by a subsequent alteration…" Why submit to tyranny and then try to get out from under it? Why not avoid tyranny in the first place? It was all quite backward: "Do you enter into a compact of government first, and afterwards settle the terms of the government?" Henry had a good point, given what the compromise entailed. After the Constitution was established as the law of the land, a bill of rights would go through the amendment process outlined in Article V. Three-fourths of the states would need to approve it — there was no guarantee of passage. Regardless of the popularity of certain freedoms, regardless of Anti-Federalist power or the general political makeup, there was a nonzero chance the Constitution would be ratified but a bill of rights would fail. Understandably, Henry was unwilling to take that chance, calling instead for amendments first. The convention ignored him, narrowly ratifying the Constitution the next day.

(George Clinton and Patrick Henry were both concerned about risks to liberties. One could frame Clinton’s thinking in a similar way to Henry’s. Why submit to the potential tyranny of a national legislature or national army? Why risk it? The difference here is what justifies the risk. The potential reward of establishing and protecting liberty in all states for all people justifies it. But in the Henry case, there is little to be gained by making the gamble. Passing a law that may or may not be amended later? There’s no inherent reward. The smarter play is amending the law first and then passing it.)


Anti-Semitism Remains, Statistically, Worse on the Right Than the Left

Many terrible ideas have run amok since the Hamas attack on Israel on October 7 and the ensuing Israeli bombardment of Gaza.

First, there’s what appears to be an ideological consensus — that Israel could never have oppressed the Palestinians over the years, that Israel’s policies have nothing to do with the terror and hatred against it, that all Palestinians and pro-Palestinian activists support terrorism and Hamas and anti-Semitism, and that the killing of many thousands of innocent Palestinians is an acceptable or moral response to the deaths of one thousand innocent Israelis. None of this can be judged true after a little education (start here), nuanced thinking, and ethical reasoning. But this conservative framework is so powerful, many Republicans and Democrats sound indistinguishable right now. It is telling that not even Bernie Sanders, who often speaks up for Palestinian rights, will call for a ceasefire.

Second, there’s the response by some leftists, the refusal to condemn — or even celebration of — terror against civilians as a response to oppression. (Nor do they acknowledge the role or perils of Islamic fanaticism, focusing solely on those of a religious state explicitly for Jews.) Some socialists, communists, anarchists, and so on saw the attack as justified (though Israeli children hardly have a say over Israeli policies), possessing little interest in the philosophy of nonviolent resistance embraced by other leftists and the liberals (others may accept violence only against non-civilian targets). Like anywhere else on the political spectrum, there is indeed nastiness, hyperbole, callousness, narrow-mindedness, violence, and bigotry on the Left. This reaction put them all center stage and garnered special attention. But in a way, the Left simply joined the Right in the gutter. For instance, hate crimes against both Jews and Muslims/Arabs are out of control, some conservatives don’t care, vocally, about Palestinian corpses but gasped at the insensitive response of the other side to October 7, and so on.

Writers for center, liberal, and leftwing publications condemned the response. The Right was as overjoyed as it was aghast, issuing countless articles declaring “Democrats Have an Anti-Semitism Problem,” “Liberals Need a Reckoning with Anti-Semitism,” “Pro-Hamas Protesters Are the Movement, Not Outliers,” “The Left Owns Anti-Semitism, While the Right Stands With Israel,” and so forth. The Left, it seems, is infected with hatred of Jews, perhaps even defined by it, unlike the Right, which is loyal to Israel and therefore innocent.

But if you look at the recent research on the topic — putting aside the childish idea that opposition to Israeli policies or religious states is automatically disdain for Jewish people or their faith — you will notice that anti-Semitic attitudes are actually more prevalent on the Right than the Left, and that Jews themselves generally understand this. There is of course cause for concern over both “‘traditional’ antisemitism (long-standing anti-Jewish stereotypes) emanating from the political right” and “‘Israel-related’ antisemitism (blaming individual Jews for the actions of Israel) associated with the political left.” In the literature, these are referred to as the “old” and “new” prejudices. But not all problems are created equal.

A June 2022 study in Political Research Quarterly examined the views of conservatives and liberals, for instance how much they agreed with statements such as “Jews in the United States have too much power.” Anti-Semitism has long entailed the conspiracy theory that Jews control the media, the political sphere, law and business and banking, and so on. When you hear about a New World Order, cabals, globalists, and illuminati, this is frequently what is being referenced. The Jewish societal domination idea has led to horrific violence against the Jews, playing a major role in Nazi Germany, as I mentioned in a recent piece. The study found that people on the Right are much more likely to believe that Jews have too much power in the U.S. They are also more likely to believe Jews are more loyal to Israel than the United States — bigotry often involves the question of who is or isn’t a “real American.” Across all such questions asked, problematic “agreement is higher (2–3 times higher) on the far right than on the far left.” Such beliefs are not the norm, of course. Only about 6% of far Left respondents and about 17% of far Right respondents think Jews have too much power in society, for instance (see unprimed findings). But one side is clearly worse. And things get darker still if you look only at the data for young people, with about 5% of leftists in agreement vs. 45% of rightwingers.

This makes a good deal of sense. Conservatives are noticeably more likely to believe in hate-based conspiracy theories. Half of QAnon types believe Jews want to take over the world; these are closely tied conspiracy theories that share themes of secret puppet masters and global cabals.

Interestingly, another statement placed before respondents was "It is appropriate for opponents of Israel's policies and actions to boycott Jewish American owned businesses in their communities." Given that the Left is highly critical of Israel's policies towards the Palestinians, one might expect it to be more guilty here, agreeing more than the Right. Isn't the Left into the Boycott, Divestment, and Sanctions (BDS) movement against Israel, launched by Palestinians in 2005? If the above question was old prejudice themed, this one is all new prejudice, perfect for trapping leftwingers. Well, about 10% of far Left respondents approved of boycotting Jewish American businesses over Israeli policies, but for the far Right it was about 22%. Just looking at younger people, it's just over 10% versus just over 50%. The Left understands better that American Jews don't really have anything to do with Israel or its policies — boycotting their businesses doesn't make sense. It's a punishment built on guilt by association; the American Right is twice as likely to accept punishing Jews for something they had no control over. (In a separate question having nothing to do with boycotts, the researchers found that "the far left is least likely to say that U.S. Jews should be held to account for Israel [only 4% agree]. In contrast, among the young far right 28% agree with the statement, seven times higher…") Quite differently, the BDS movement is action against Israel, its government and economy: boycotting and withdrawing investments from "all Israeli and international companies engaged in violations of Palestinian human rights" and encouraging sanctions against Israel by national governments and international bodies.

It of course must be emphasized that the scapegoating and bigotry and insane ideas that do exist on the Left are unacceptable, even if they are more limited. (The paper found, it’s worth noting, that leftists are more likely to demand Jews denounce Israel than to demand Muslims denounce Islamic states. For conservatives it was the opposite double standard.) Criticism of Israel and anti-Semitism can at times overlap, as documented in the study, which must be watched for closely. But in general, “people on the hard left hold significantly more anti-Israel views than other Americans, whereas those on the hard right are significantly more antisemitic,” to quote the University of Chicago’s National Opinion Research Center, highlighting the difference.

The Political Research Quarterly piece cited a couple studies conducted in other countries that had similar findings: “In a UK study, Staetsky (2020) finds higher rates of antisemitic views among British respondents who identify as far right. In Europe, Cohen (2018) finds lower support for Jewish immigration on the right than on the center or left.” But we will mostly focus on Americans.

A survey in summer 2023 from the Anti-Defamation League and Chicago’s NORC found anti-Semitism going hand-in-hand with enthusiasm for both leftwing and rightwing political violence. Violent leftists were 1.4 to 2 times more likely to be anti-Jewish; violent rightwingers were 2.8 to 3 times more likely to be anti-Jewish. Here again, the problem is serious everywhere but worse on the Right. (American political violence is consistently worse on the Right as well.)

Beyond conspiracy theories of world domination, beyond reactions to Israel's policies against the Palestinians, other factors can breed disdain for Jews (though they tend to all mix toxically together). There were also racial and religious concerns highlighted by the survey: "Highly antisemitic Americans are twice as likely to support dangerous antidemocratic conspiracies, such as those declaring the U.S. is a 'Christian nation,' believing that white Christians are oppressed or that white people will have less rights than minorities in the future (i.e., the 'Great Replacement' idea)." One may recall the rightwingers chanting "Jews will not replace us" at the infamous Charlottesville rally of 2017. White nationalists, in addition to demanding whites run society, believe that the Jews and other people of color are trying to subjugate and wipe out white people. Racism is central to much anti-Semitism. And racism tends to be a bigger problem on the Right (this has been much more thoroughly researched in the social sciences; see Conservatives Are More Likely to be Racist). Race and faith are often intertwined, with the dominant race expected to follow the dominant faith, thus the need for not only white supremacy but white Christian supremacy. Today many Christians feel an affinity for followers of Judaism, due to the intimate relationship and history between the two faiths and of course, like others, the memory of the Holocaust. It is easy to forget the historical hostility, the Christian persecution of the devilish "Christ killers" over the centuries (see When Christianity Was as Violent as Islam). Religious animosity may yet explain some anti-Jewish sentiments. In this case, Judaism, like other faiths, is a threat to Christian supremacy. An early 2023 study in Social Science Quarterly found that Christian nationalism, a mostly rightwing phenomenon, correlates with anti-Semitism. The desire to dominate others is key to this connection. Beyond Christian nationalists, the authors note, Republicans in general "have the highest average level of antisemitism," followed by Democrats (relatively closely compared to the evidence examined thus far) and then independents.

On that note, it must be said that some findings are more mixed. Research from 2018 concluded that from 1964 to 2016 strong Democrats and strong Republicans had essentially the same levels of warmth and agreeability towards Jews, with Democrats a hair better on the issue. If their attitudes were equally positive, by this odd metric a "warmth" of about "71 degrees" by 2016, this suggests equal coldness, equal anti-Jewish sentiment, of about 29 degrees. Things don't get much deeper or more detailed than that, leaving the meaning of the finding somewhat obscure. What are the actual beliefs of the people in those minorities? Do they vary in levels of hostility? Perhaps along partisan lines? At minimum, the finding supports the notion that liberals are less anti-Semitic than conservatives, even if only fractionally rather than significantly.

An April 2023 study in the UK found that individuals, Right or Left, who believed in authoritarianism, conspiracy theories, or smashing apart the social order were more likely to exhibit bigotry against Jews. These three beliefs were checked for associations with the old ("Judeophobic Antisemitism") and new ("Antizionist Antisemitism") prejudices. While old and new are not exclusive to a single side, extrapolations can be made based on aforementioned trends. Totalitarianism earned a .26 coefficient of correlation with the old prejudice, less than zero with the new. In other words, this suggests that totalitarianism on the Right is tied with disdain for Jews; leftwing authoritarianism may not have that problem. There was a .21 correlation between global conspiracy beliefs ("GCB") and the old prejudice, versus .12 for the new. Meaning a closer connection between the prejudice associated with the Right and belief in conspiracy theories; leftwing conspiracy theorists can be problematic but not as often. However, there is a .29 correlation between those who want revolution against the social order ("Anti-Hierarchical Aggression") and the new prejudice, compared to only a .21 for the old prejudice. This suggests leftists who support revolution are more bigoted against Jews than rightwingers who support revolution. As .29 is the highest among all the numbers in this paragraph, it is true that, as a recent article put it, "Left-wing Anti-Hierarchical Aggression Emerges as the Strongest Predictor of Antisemitism in Recent Study." But this could probably be labeled a mixed result, as the Right was worse on two of the three categories of belief. For their part, the authors wish to shift focus from Left and Right to those beliefs — in authoritarianism and absurdities and revolution — that they share.

Next, a scholar at the University of Massachusetts at Lowell wrote an article soon after the October 7 attack entitled “Antisemitism Has Moved from the Right to the Left in the United States,” outlining his recent, unpublished findings. The headline should not be misinterpreted to mean anti-Semitism no longer infects the Right, however, or is worse on the Left. Rather, it now simply infects both: “Our study, which will be published soon, found a startling new phenomenon: The ideology underlying antisemitism in the U.S. now encompasses both sides of the political spectrum.” Anti-Israel sentiment, the article notes, has increased anti-Semitism and imaginings by leftists that Jews are more loyal to Israel than the United States. None of this is surprising. We will have to wait until the study is published to know if it concludes one side is worse than the other, or how they compare (but if the Left was found to be worse, rather than simply worsening, that would probably have taken center stage in the piece). Overall, the study sounds valuable and novel because it links anti-Jewish incidents, as opposed to attitudes, to leftists. Even in 2020, American scholars could say “There is little evidence…of far-left violence being directed or inspired by antisemitism, something which…cannot be said for jihadist or far-right attacks,” but more research and increasing leftwing violence is changing this.

However, in Europe, a 2018 report indicated that victims perceived anti-Semitic harassment as coming from leftists 21% of the time, versus only 13% from conservatives. This is one of the few tools a conservative could use to argue things are worse on the Left, at least when it comes to violent acts (attitudes are a different question).

On that note, it is important to take Jewish perceptions and affiliations into account, even if this is a bit less scientific. American Jews are three times as likely to identify as liberal compared to conservative, and seven out of ten vote Democratic. Jewish Republicans are noticeably less likely to say anti-Semitism is increasing and more likely to say it's decreasing, a bit strange if such bigotry is pouring in from the Left. (Conservative Jews do, however, see more prejudice in the Democratic Party, the opposite of what most Jews conclude.) Naturally, politics determines blame. Seven out of ten Jews believe there is much anti-Semitism in the Republican Party; under four in ten say the same about the Democratic Party. Jews who live in more liberal areas perceive less anti-Semitism than those in more conservative areas. More Jews trust Democrats to fight anti-Semitism, according to a poll after the October 7 attack. (Relatedly: Americans in general who identify as Democrats are typically twice as likely to say anti-Jewish sentiment is a problem in the United States, compared to Republicans. Who would you trust to fight it, those who don't believe in it?) While the far Left is viewed as a serious threat, the far Right is judged to be far worse, a gap of 30-35 percentage points. True, this could change after recent events, given the response of some leftists. Concern over the new anti-Semitism has indeed grown in recent years. New research will let us know.

For now, weighing all the available evidence, the Right, with its crazed attempts to present the Left as anti-Semitic and itself as saintly, seems to be living in a fantasy, per usual. That upside-down world where you needn’t worry about the log in your own eye, for the Left has a smaller obstruction.

For more from the author, subscribe and follow or read his books.

‘Savages’: Perceptions of the Ozark Settlers

In his first volume of A History of the Ozarks, Brooks Blevins explores the antebellum history of the Ozarks region, arguing that past and contemporary depictions of white nineteenth-century Ozarkers as distinct from other Americans — primitive, isolated, ignorant — do not withstand scrutiny.[1] The Old Ozarks (2020) is intended to provide a more nuanced portrayal of settlers and frontiersmen, to capture the complexities of local history and the diversity of its people. Rather than a region defined by the stereotypical “barefooted hillbillies” and “hicks,” Blevins posits “that the Ozarks, when shorn of the mythology…comes closer to being a regional microcosm of the American experience than to being a place and people of unique qualities.”[2] Importantly, Blevins sees such stereotypes, coming to full power after the Civil War and in the twentieth century, as coloring historians’ views of the earliest Ozark communities.[3] Like the explorers and novelists before them, historians placed too great an emphasis on Ozarkers’ particularities, masking their rather unexceptional American-ness. Blevins’ contribution, alongside other works of the past few decades in his own field and that of historical anthropology, helps break the spell.[4]

To get a sense of the “exaggerations and oversimplifications” Blevins is working with, one might turn to the nineteenth-century American geographer and explorer Henry R. Schoolcraft, who makes many appearances in The Old Ozarks.[5] Schoolcraft documented his observations of the Ozarks in his influential Journal of a Tour into the Interior of Missouri and Arkansaw: From Potosi, or Mine á Burton, in Missouri Territory, in a South-West Direction, toward the Rocky Mountains, Performed in the Years 1818 and 1819.[6] He wrote of dirt-floor log houses “beyond the pale of the civilized world,” devoid of “comfort,” “cleanliness,” and modern conveniences. They were full of horns, skins, and other hunting trophies — few items of value. Noticing the dried meats kept indoors, Schoolcraft compared an Ozark home to a smokehouse. Children were dirty and dressed in buckskin, the girls ugly from a poor diet. Schoolcraft was dismayed to see women “doing in many instances the man’s work,” and to hear that many infants perished in the region due to a lack of basic medicine. These were people divorced from “refined society.” They were of the remote wilderness, battling native tribes, thieves, and nature.

Schoolcraft writes that he tried to engage the Ozarkers in “small-talk, such as passes current in every social corner; but, for the first time, found I should not recommend myself in that way. They could only talk of bears, hunting, and the like. The rude pursuits, and the coarse enjoyments of the hunter state, were all they knew.” This positions Ozarkers as different from other Americans — proper discourse occurred in all other corners; he had never needed to refrain from it elsewhere. Schoolcraft further complained of a greedy and dishonest guide and his sons, who abruptly abandoned Schoolcraft and his fellow explorers. Again, the exceptionalism of the Ozarkers is highlighted: the group “bore no comparison” to anything “we had ever before witnessed, but was rather characterized in partaking of whatever was disgusting, terrific, [and] rude.” Proud displays of skins outside homes, and other eccentricities, were likewise “novel.”

The geographer reported that settlers hunted and farmed a limited number of crops only to sustain themselves; there were no exports. They were too isolated and remote for that. Life revolved around simple subsistence, when more could in fact be produced, and tolerating the associated deprivations and hardships; the people, therefore, were both “lazy” and “hardy.” They were inferior to Americans back east in every conceivable way. “In manners, morals, customs, dress, contempt of labor and hospitality, the state of society is not essentially different from that which exists among the savages. Schools, religion, and learning are alike unknown.” Ozarkers, Schoolcraft writes, did not pray or observe the Sabbath. There was no reading or books, only “ignorance.” Residents knew nothing of the political happenings of the nation — not even who the president was — and did not wish to learn. Such “indifference” set them apart. Ignorance and faithlessness led to moral decay. The Ozarks were a place of not only sloth but vigilante justice and drunken brawls. Even young boys settled their disagreements with violence, “the act being rather looked upon as a promising trait of character.”

Clearly, Ozarkers were seen as backward and primitive. Schoolcraft compared them to indigenous people, but even went so far as to position them as, at least in some ways, inferior. Native Americans did the same tasks with “half the labour” — implying more intelligent methods — and fewer resources. The settlers had no interest in preservation or frugality, but carelessly killed more game than they needed, felled more trees than they could use, and so on. “The white…destroys all before him…” Sources like Schoolcraft’s Journal not only influenced how early nineteenth-century Americans back east regarded this region, but they further informed the writing of history during the twentieth century. Carl O. Sauer, Robert Flanders, David Thelen, Jeff Bremer, and others marked the early Ozarks as cut off and stuck in the past, an island of uncivilized, ignorant frontierism.[7] 

Blevins of course points out that many observations by explorers and later historians were “not whole-cloth fabrications.”[8] The Ozarks had hunters, material deprivation and poverty, violence and vigilantism, a dearth of modernity, and so on. But it had much else — it was too diverse to be characterized by those elements alone. For example, ironworks developed even before Schoolcraft’s journey through the region.[9] Iron was mined and forged into wagon boxes, ovens, kettles, cannonballs, and all manner of other objects to be sold at market. Pig iron was shipped to St. Louis and other cities. Beginning in the 1820s, Maramec Iron Works was a major “iron plantation with modern technology in a place still lightly settled” that quickly “dominated the local economy…”[10] After arriving in Missouri, wealthy entrepreneurs Thomas James and Samuel Massey brought workers and slaves from Ohio to dig up ore and run the Maramec furnaces. Manual laborers often lived in company housing and were paid in credit to company stores. This booming industry determined where many roads and rails were constructed, which helped ship raw material to surrounding states and territories. “With hundreds of employees, modern technology and equipment, and access to shiny new railroads,” Blevins writes, ironworks ensured “the region’s integration into a broader national and international marketplace… Travelers like [journalist] Albert D. Richardson were surprised to find such modern industrial activities in the far western reaches of the nation.”[11] When serious study of a broad range of Ozarker experience is conducted, the region starts to look less backward and isolated.

Clearly, not all who settled in the Ozarks were hunters. As partially noted, despite his emphasis on the “hunter state,” Schoolcraft acknowledged that Ozarkers grew corn, possessed livestock like pigs and cows, and engaged in trade by river. Blevins writes: “The marketing of grains, hides, and livestock connected farmers and herders of the rural antebellum Ozarks to a wider world of regional and national commerce and trade.”[12] The historian again documents how other settlers lived and how this tied them to the rest of the nation. They grew corn, wheat, cotton, tobacco, sweet potatoes, cabbage, peas, oats, and much else.[13] These were at times brought to market locally: “Wiley Britton recalled that his father…sold corn and other surplus crops to Cherokees or to merchants in Neosho.”[14] More significantly, however, “by 1819 the region already produced surplus beef and pork for the New Orleans market,” and soon became a leading open-range livestock producer nationally.[15] Cattle drives left the Ozarks and marched all over the United States, even as far as New York.[16] Beyond ranchers, farmers, ironworkers, miners, railmen, and hunters, there were artisans, merchants, shopkeepers, mechanics, millers, distillers, company lumberjacks, attorneys, and so on.[17] Like the rest of the country, the Ozarks attracted and produced a wide range of laborers, especially as its towns and cities developed.

After the workday was through, many Ozarkers would return home to their log cabins, some with dirt floors and others wood. But, especially in the decades after Schoolcraft’s visit and before the Civil War, some more affluent residents had frame houses painted white with crushed limestone, or even brick houses.[18] Women and girls would make quilts and clothing; those from more prosperous families purchased the latest fashions from cities like Philadelphia, and owned English glassware.[19] “Don’t think for an instant that I am among semi-wild people,” German doctor George Engelmann wrote in the 1830s as he traveled through the Ozarks. “On the contrary, these people have a good deal of culture…”[20] Contrary to claims concerning a lack of religion, Ozarkers were mostly Methodists and Baptists, plus some Presbyterians and others.[21] Bethel Baptist Church was founded near Jackson, Missouri, in 1806, and by 1818 had half a dozen churches in the area it could claim as descendants.[22] Methodist preachers like William Stevenson were at work in 1814.[23] There were churches, camp meetings, and religious societies and organizations. Missionaries came to and emerged from the Ozarks. Religion was a major feature of life, as it was in other parts of the United States in this era.[24] Education was slow to develop, with most children not attending school until after the 1850s, but an academy appeared in Potosi in 1816, and more were established in other towns.[25] In areas without formal schools, children would at times be taught reading, writing, and arithmetic for a fee by a private individual, a “subscription” model.[26] Ozarkers would often remember their hard times and difficulty receiving an education with, to quote Blevins, the same kind of “bootstraps, self-congratulatory memory that had your grandfather trudging five miles uphill in a perpetual blizzard” to get to school.[27]

On that note, the writing itself in The Old Ozarks is generally engaging and dynamic. This elevates both interesting and more tedious content. While the line “Given the myriad uses of corn, it is not surprising that Ozark farms went through it like Henry VIII went through wives” may induce a wince, there are far less lively discussions of agriculture in historical scholarship.[28] A few moments approaching rhetorical beauty occur as well: the “Ozark plateau is rendered, by our rather myopic and mortal outlook, a fixed and everlasting entity, a place as solid and unchanging as the age-old igneous rocks of the St. Francois Mountains, the ancient core of the region. But you and I are human, and history is preoccupied with our kind.”[29] The author’s exposition is interlaced with quotations from letters, diaries, published books, and more by early Ozarkers and visitors, which keeps the history grounded and personified, while secondary sources from other scholars are usually cited without quotation, serving largely the same function. Beyond creativity and variety, the writing is clear and largely dispassionate, though Blevins is an Ozarker and may have a vested interest in confronting images of backwardness, an interest suggested in comments such as: “Whether our peculiarities are perceived or real, in the Ozarks we are no strangers to stereotype. We’re accustomed to being labeled by outsiders.”[30] This does not appear to affect the validity of his case, however, given the nature of the thesis.

The Old Ozarks is a heavily detailed text with the simplest of theses. Dispelling stereotypes is perhaps the most straightforward task a historian can undertake — even a few primary sources can quickly qualify or even blow up an improper, oversimplified representation of a people or place. (Blevins understands this well, offering the somewhat sheepish “If…this book contains a central premise, it is that…”[31]) The author accomplishes this detonation, revealing the complexity, diversity, and normality of the early Ozarks using an avalanche of documentation from archives across the region, leaving little doubt that its populace, while including such elements at certain times, should not be defined by isolation, backwardness, or exceptionalism.[32] The “backwoods hunter-herder,” Blevins writes, “represented only a temporary stage in the development of society in the Ozark uplift” and existed alongside “more progressive settlers”; the backwoodsman simply “captured the attention of travelers more…”[33] Explorers and later folklorists and novelists wrote for audiences that loved the exotic — “‘They’re really not that different from you and me,’” Blevins explains, would hardly sell copies.[34] Blevins deserves credit for bringing so many sources together to address myths and capture local history, expanding significantly upon the work of other modern historians and qualifying or correcting that of twentieth-century academics. However, the comprehensive and meticulous nature of the text — recall that this is only the first of three books — makes it a book for scholars rather than a general audience. With its scope, this is a seminal work for the field.

For more from the author, subscribe and follow or read his books.


[1] Brooks Blevins, A History of the Ozarks, Volume I: The Old Ozarks (Urbana: University of Illinois Press, 2020), 2-9.

[2] Ibid., 2, 8.

[3] Ibid., 2, 5, 7.

[4] Ibid., 122.

[5] Ibid., 9, 293.

[6] Henry R. Schoolcraft, Journal of a Tour into the Interior of Missouri and Arkansaw: From Potosi, or Mine á Burton, in Missouri Territory, in a South-West Direction, toward the Rocky Mountains, Performed in the Years 1818 and 1819 (London: Richard Phillips and Company, 1821).

[7] Blevins, Ozarks, 121-122.

[8] Ibid., 9.

[9] Ibid., 192.

[10] Ibid., 193.

[11] Ibid., 196. See 192-196.

[12] Ibid., 153.

[13] Ibid., 140.

[14] Ibid., 151-153.

[15] Ibid., 142-143.

[16] Ibid., 147.

[17] Ibid., 153, 175, 182.

[18] Ibid., 134.

[19] Ibid., 122, 137-138.

[20] Ibid., 84.

[21] Ibid., 200.

[22] Ibid., 201.

[23] Ibid., 204.

[24] Ibid., 197-217.

[25] Ibid., 230-231.

[26] Ibid., 232.

[27] Ibid., 230.

[28] Ibid., 152.

[29] Ibid., 5.

[30] Ibid., 2.

[31] Ibid., 8.

[32] Ibid., ix-x.

[33] Ibid., 82.

[34] Ibid., 8.

‘Israel’s 9/11’ Is Apt Phrasing, with Root Causes Ignored and War Worsening Terrorism

Hamas’ horrific attack in Israel on October 7 was quickly labeled “Israel’s 9/11” — it was a surprise strike that killed a large number of innocent people and traumatized a nation. Yet the parallels do not end there. They are not difficult to see. Let us consider them, while keeping in mind that in the same way one recognizes terrorism as reprehensible, one should, through careful study of history and current geopolitics, recognize where terrorism comes from and how to prevent it from occurring in the future. As Sarah Schulman writes in New York Magazine, “Explanations are not excuses” — to understand why the Hamas assault occurred is not to say it was right. To understand the world as it actually is, such as how harmful state policies can inspire terrorism, is not to condone terrorism; it is simply to oppose both. “But the problem with understanding how we got to where we are,” Schulman notes, “is that we could then be implicated.” She writes:

My parents raised me with the idea that Jews were people who sided with the oppressed and worked their way into helping professions. They could not adjust the worldview born of this experience to a new reality: that in Israel, we Jews had acquired state power and built a highly funded militarized society, and were now subordinating others. No one wants to think about themselves that way… Humans want to be innocent. Better than innocent is the innocent victim. The innocent victim is eligible for compassion and does not have to carry the burden of self-criticism.

The situation is too well-documented to be controversial. For a long time, Israel has seized Palestinian land, blockaded the rest, and subjugated Palestinians in Israel itself. The United Nations (including its Human Rights Commission), Amnesty International, Human Rights Watch, and other bodies have condemned Israeli policies as illegal and crimes against humanity. Palestinians and Israelis alike oppose Israel’s occupation, military violence, and apartheid system. I wrote of all this at length in Is Standing with Israel Standing with a Violent Oppressor? Predictably, oppression breeds extremism and terror. Hamas declared its attack a response to “the crimes of the occupation.” There were both long-term and short-term causes linked to the sorry conditions of the Palestinian people.

But in much news and commentary, no actual explanation is given for Hamas terror. There is no serious look at the realities of Israeli-Palestinian relations, no history or context. Israel is the good guy, its enemies seek its eradication, The End. “They hate Jews and want to destroy Israel” is an empty statement, explaining nothing, but is quite popular. True, those are real sentiments, especially among Islamic extremists (other Palestinians, including Muslims, Christians, Druze, atheists, and so on, want to peacefully coexist with Israelis through a unified one-state solution or even a two-state solution), but they’re missing the major why. All of this is virtually indistinguishable from the American experience of 9/11. The noble United States was divorced from the terrible thing that happened to it. There were no causal ties between our activities and the 3,000 people massacred, save perhaps one: Al Qaeda, vaguely, hated our freedom! Americans had little interest in root causes, in pondering Al Qaeda’s rage over bloody U.S. military interventions and wars in Muslim lands in the 1980s and ’90s, America’s devastating sanctions against Iraq and its support of Israel against the Palestinians, our close relationship with Saudi Arabia and our military bases near Islam’s holy cities, and so on. Extremism comes from somewhere — somewhere concrete like the bodies of Muslim children, not somewhere vague like the First Amendment of a nation 7,000 miles away. This is not to say that all the religious extremists and fundamentalists among the Palestinians would tolerate a Jewish state with friendly policies and equal rights for all, nor a secular one-state solution that’s likewise for everyone regardless of faith or race (we should all favor the latter). For some it’s a Muslim nation for a Muslim holy land or nothing, similar to Zionist Jewish thought. But by addressing the grievances and needs of the Palestinian people, you can reduce radicalization and violence, ensure there are fewer extremists and plots against innocents. To prevent terrorism, you have to change policy.

But that is unthinkable. It would suggest you’ve done something wrong, and it would curb your own dominance and self-interest. Lessening American military power in the Middle East was unacceptable, and Israel probably won’t be giving back Palestinian land, withdrawing its military and citizens from the settlements in the West Bank. It won’t end its decades-long blockade and stranglehold of Gaza, which has caused a massive humanitarian crisis. And Palestinians in Israel will not enjoy equal rights and real protection from discrimination any time soon. Instead, the policies that caused Hamas’ terrible attack will be supercharged. This is the second way Israel’s experience of 2023 is like America’s of 2001. Not only are root causes ignored to maintain your patriotic, pure-as-snow self-image and your national power, but the response to the violence doubles down on the policies that caused it in the first place. The United States launched a War on Terror — more military intervention in the Middle East. Predictably, our invasions and bombings spawned many new terror groups, increased recruitment to established ones like Al Qaeda, spread cells to new countries, and led to far more plots and acts of violence (see A History of Violence: How the War on Terror Breeds More Terror). Israel is now laying waste to Gaza and, as it has in the past, has cut off water, electricity, food, medicine, and so on from the region. The suffering of ordinary people is greater than ever before. This will of course encourage radicalization against Israel and breed more terror attacks. Even if Hamas is destroyed, which is unlikely, another group will take its place. The problem will not be solved through war; it will be magnified, worsened. That endless cycle of violence and revenge.

Finally, and briefly, a third way in which the current crisis echoes 9/11. “War Worsening Terrorism” of course refers to how bombings and invasions as a response to terrorism simply encourage more terrorism. But it also refers to the terror rained down upon the innocent people who had nothing to do with Hamas or Al Qaeda. War is State terrorism, expanding the violence to an unprecedented scale of which extremist groups could only dream. Some 3,000 Americans died on 9/11, but our War on Terror killed about a million people. In Hamas’ attack, 1,400 Israelis died; Israel has thus far killed 4,200 people in Gaza. Many more innocent Palestinians will perish before the end.

None of this sounds like justice or reason, but it certainly sounds familiar.

For more from the author, subscribe and follow or read his books.

The Next MSU President Must Commit to Three Goals

Under President Clif Smart’s valiant leadership, Missouri State University has grown in many ways. Fundraising smashed records, and renovations and new buildings beautified campus. Our profile rose alongside school pride. But there are three huge tasks ahead for the next university president. If accomplished, they will bring in more students, income, donors — what any institution needs to improve its degree programs, keep tuition down, pay professors better, and more. Our next leader must have an absolute commitment to the following aims. 

1) Joining an FBS conference. Smart, his athletics director, our new football coach, and the fan base have all started speaking the same language. It’s time to move onward and upward, out of the Missouri Valley Conference and FCS football. We’re a major institution. If there’s a higher tier, we’re gunning for it. Bobby Petrino’s football program revealed the possible, giving Arkansas and Oklahoma State real scares. Joining an FBS conference brings wider national exposure, richer TV contracts, a chance at bowl games, and a more excited, proud fan base. The next president must craft a plan, including facilities improvements, to attract an invite. [2024 Update: MSU has joined CUSA and risen to FBS.]

2) Men’s basketball’s consistent appearance in the NCAA Tournament, and football’s consistent appearance in the FCS playoffs until FBS is reached. Imagine for a moment that Missouri State was the team in Missouri to reliably enter the NCAA Tournament. It would be transformative. Multitudes of young people would want to be Bears. The national exposure would be invaluable. Remember the 1980s and ’90s, when we’d make it into March Madness, even the Sweet 16? It’s time to return to glory. Likewise, football must continue the success Petrino created to remain attractive to FBS conferences and engage fans, battling in the FCS playoffs. Our new president must better fund these two critical sports, and never be afraid to cut ties with a failing coach and find one with playoff or championship experience.

3) Helping free MSU to offer PhDs. It’s illegal for MSU and eight other public universities in the state to offer PhDs and first-professional degrees in law, medicine, engineering, and so on. The University of Missouri system holds a monopoly on these degrees. There’s a reason MSU offers fewer than a dozen doctorates — it isn’t allowed to do much more. In 2023, bills were filed in the Missouri Legislature to end the monopoly (SB 473, HB 1189), but they died in committee. New bills are coming in 2024. The next president must fight for fairness and work with the Legislature to lift the ban. MSU will then be able to offer many more doctoral programs, attracting more graduate students.

Achieving these goals will send a university that has long been on an upward trajectory into the stratosphere. I urge the search committee to find a candidate who will commit with the utmost passion, and I encourage Bears everywhere to call for this as well at MSU’s October 2 input forum and through the community survey at missouristate.edu/president/search.

This article originally appeared in the Springfield News-Leader.

For more from the author, subscribe and follow or read his books.

A Real Writer


A real writer writes for pay.
A real writer writes for nothing.
A real writer makes a living off the craft.
Or enjoys the grocery money.
A true writer publishes with traditional houses.
A true writer self-publishes.
A real writer writes for others to see.
A real writer writes for himself alone.
A real writer has done all of these things.
Or maybe one.

For more from the author, subscribe and follow or read his books.

9 Gay Films to Watch Immediately

As with any genre, gay romance has its duds (looking at you, Happiest Season), its merely all-right entries (Boy Erased), and its overhypes (Call Me by Your Name). But many of its films are immensely powerful. After all, the most compelling romance writing involves forbidden, secret love and associated dangers, and — tragically for the real human beings who experienced and experience this — these elements are inherent to many gay stories. The following is a selection of movies that you will not soon forget.

Supernova — Love in the time of dementia. Stars Colin Firth and Stanley Tucci.

Disobedience — Rachel McAdams and Rachel Weisz find love in an Orthodox Jewish community.

The World to Come — Farmers’ wives fall for each other on the American frontier. With Vanessa Kirby, Katherine Waterston, and Casey Affleck.

Carol — Cate Blanchett’s character meets a younger woman (Rooney Mara) in 1950s New York.

Ammonite — Searching for fossils and companionship in the early 1800s. Stars Kate Winslet and Saoirse Ronan.

Brokeback Mountain — The undisputed classic. With Heath Ledger, Jake Gyllenhaal, and Anne Hathaway.

Moonlight — The Best Picture winner, a journey from black boyhood to manhood. With Mahershala Ali, Ashton Sanders, and Naomie Harris.

The Power of the Dog — Benedict Cumberbatch’s character abuses his brother’s wife and son (Kirsten Dunst, Kodi Smit-McPhee) while struggling with his feelings for the latter.

Professor Marston and the Wonder Women — A throuple in the 1940s evades discovery, while inspiring the creation of Wonder Woman. With Luke Evans, Rebecca Hall, and Bella Heathcote.

For more from the author, subscribe and follow or read his books.

The Founding Fathers Were (Accidentally?) Right About the Senate

I noticed something interesting during the Trump era. As the nation completely lost its mind, I saw incidents here and there of Republican senators seeming to keep their heads a little better than House Republicans.

For example, after Trump’s lies about voter fraud led to the January 6 riot, 14% of Republican senators (seven individuals) voted to convict him, whereas in the House only 5% of Republicans voted to impeach (ten individuals). Or look at who still voted against Arizona and Pennsylvania’s 2020 election results two months after election day, after (Republican) states had recounted and certified their results and Trump’s own administration officials and the federal courts had rejected the myth of voter fraud. 66% of House Republicans (139 politicians) voted to object to the validity of these states’ elections, with no actual evidence for their position. Only 16% of GOP senators (eight officials) did the same. And sure, the Senate has its Josh Hawleys, Lindsey Grahams, and Ted Cruzes, but doesn’t it usually feel like the most insane people are in the House? Like Marjorie Taylor Greene (QAnon, space lasers owned by Jews causing wildfires, 9/11 was an inside job) or George Santos (pathologically lying about his career, relatives experiencing the Holocaust or 9/11, and founding an animal charity)? Why does the Senate at times seem like a slightly more sober place? Perhaps it’s nothing, but such things reminded me a bit of what the Constitutional framers wrote about the Senate and House.

For the Founding Fathers, the Senate, which would not be elected by voters but by state legislatures (this was true until 1913), would be composed of more serious, intelligent people. A nation must, James Madison wrote in 1787, “protect the people agst. the transient impressions into which they themselves might be led.” The foolishness of the citizenry had to be tempered. Because the House of Representatives would be elected by the people, it would also be infected: the voters, “as well as a numerous body of Representatives, were liable to err also, from fickleness and passion.” Thus, “a necessary fence agst. this danger would be to select a portion of enlightened citizens, whose limited number, and firmness might seasonably interpose agst. impetuous counsels.” This was the Senate, the upper chamber, closely modeled on Britain’s House of Lords, which operated beside the House of Commons, the lower chamber.

Madison positioned the Senate as a check on the “temporary errors” of the masses-representing House, whereas the masses-representing House would be a guard against the abuses of the Senate, small and unelected by the citizenry. (He then went on to stress that one had to keep power away from the people, whose sheer numbers would threaten the interests of the rich. So the president, senators, justices, and so on would not be elected by ordinary voters — and only men with property could vote for House reps. See How the Founding Fathers Protected Their Own Wealth and Power.)

“The main design of the convention, in forming the senate,” the New York publisher Francis Childs wrote in 1788, “was to prevent fluctuations and cabals: With this view, they made that body small, and to exist for a considerable period.” Indeed, “There are few positions more demonstrable than that there should be in every republic, some permanent body to correct the prejudices, check the intemperate passions, and regulate the fluctuations of a popular assembly.” Childs was railing against the idea of senators not serving for life.

Alexander Hamilton’s plan was for life-term senators. “Gentlemen differ in their opinions concerning the necessary checks, from the different estimates they form of the human passions. They suppose seven years a sufficient period to give the Senate an adequate firmness, from not duly considering the amazing violence and turbulence of the democratic spirit.” Senators would “hold their places for life,” to achieve “stability.”

The story of George Washington calling the Senate the cooling saucer for the hot coffee of House legislation is probably untrue, but captures the general mindset of the framers.

Of course, the idea of senators being significantly more “enlightened” and level-headed than House reps from 1789 to 1913 deserves skepticism, but it would take lengthy historical study to form a coherent position. The modern observations that opened this writing can’t really support the opinions of the Founders, for modern senators are elected by the voters, not state legislatures. The 17th Amendment gave us different rules for the game. What this means is I can only ponder whether the framers were accidentally right: perhaps they theorized that senators would be more serious people on average, but this only became so after 1913. It is true that they could simply have been right, with this phenomenon defining the Senate no matter how senators were elected, but this cannot be answered without careful analysis of the political realm from the early republic era to World War I. Not that my musings can at present be fully answered either, as they are merely based on a few random observations, not careful, systematic analysis of modern behavioral differences between senators and representatives. All this is highly speculative.

However, it seems obvious it would be a little easier for crazy people to enter the House than the Senate. You simply don’t have to convince as many voters to support you. In 2022, there were 98 House districts (out of 435) where turnout was less than 200,000 people. The lowest districts had 90,000 to 140,000 total voters. If you’re a dunce who can get 50,000, 75,000, or 100,000 people to vote for you, you can make it to Congress. Districts are small, less diverse, sometimes gerrymandered. More people within them think and vote the same way — the average margin of victory among U.S. House races is 29%, versus 18-19% for Senate races — meaning it’s a bit easier to beat your rival candidate from the other party, if you live in the right district. If you’re running in a safe district — a blue candidate in an extremely blue area or a red one in an extremely red area — all you must truly worry about is beating your primary challengers from your own party, meaning you can secure a seat in Congress with even fewer votes.

Candidates for Senate, while naturally still courting voters on their side of the political spectrum as well as moderates, seek supporters across entire states, in wilderness and small towns and suburbs and big cities. Potential voters are more diverse geographically, racially, economically, ideologically (the poor rightwing farmer is not precisely the same as the rich rightwing business tycoon). To make it to the Senate, you’ll need more votes. 100,000 supporters might be enough in sparsely populated states like Alaska, Vermont, Wyoming, or the Dakotas. But beyond that you’ll need hundreds of thousands or millions of voters to beat the candidate from the other party. This is true regardless of the fact that you could win a primary with a relatively low number of supporters and would have a much better chance of winning a safe state.

Entering the House also requires far less money, which may be a benefit to crazy people who lose funders when they do and say crazy things. (Admittedly, you may see the opposite effect these days.) It also opens the door to more self-funded candidates. Overall, it’s five to seven times more expensive to win a Senate race than a House race.

All this is to say it may be more difficult for the worst clowns to enter the Senate. There are more opportunities with the House; you need fewer voters and less cash. This may sound ludicrous in a world where Donald Trump could dominate the Republican primaries (indeed, it is frightening when extremists like Trump or Greene beat normal conservatives), but more voters may nevertheless function — imperfectly — as a bulwark against irrationality, a check on dangerous candidates. (Recall that Trump lost one popular vote by 3 million and the next by 7 million, once the decision was placed before even more voters.) Someone like Marjorie Taylor Greene can garner 170,000 votes, and George Santos 145,000, but it may be more difficult for them to be taken seriously by their entire states, by the millions necessary to beat rival candidates. It’s not impossible, as Trump has shown, and enthusiasm among the rightwing masses for lunacy (authoritarianism, conspiracy theories, demagoguery) is only encouraging lunatics to run and helping them win, but “more voters, fewer clowns” may nevertheless be a general principle of democracy that held true before the Trump era and may yet hold true today. (Enough popular extremism, of course, will dismantle this principle entirely.)

If the Senate is in fact a more serious place, it’s possibly a product of the system established in 1913. You have those factors making it difficult for loons to get there. Consider the setup before this. A propertied resident of, say, Virginia would vote for state legislators to go to Richmond to represent his local district. The legislators in Richmond would then elect two senators to serve in Congress. (Meanwhile, House reps were elected as they are today; that Virginia resident would vote for one directly.) Now, perhaps state legislators somewhat paralleled the sobering function of voters today, in that they came from all over a state. Between this and being elected officials themselves, perhaps legislators really did ensure more serious people were generally sent to the Senate compared to the House. The Founders could have understood this; perhaps it played into their visions of enlightened politicians. (Perhaps the vision itself, the mere idea of a more serious Senate, partly made and makes it so, changing behavior, a self-fulfilling prophecy.) But maybe there was no difference whatsoever — if state legislators were elected by the stupid herd, why would they be serious, enlightened enough people to send serious, enlightened people to the Senate? And is convincing a few score legislators — fewer people — of your suitability actually easier than convincing thousands of voters? Creating just as big a door for nincompoops? We saw earlier how fewer voters might be beneficial to such candidates. An answer is elusive, but if the Founders were wrong in the beginning, perhaps they were made right with the reforms of the early twentieth century.

For more from the author, subscribe and follow or read his books.

A 6/10 for ‘Guardians of the Galaxy 3’

The third Guardians of the Galaxy film fits neatly into the post-Endgame tradition of mediocre Marvel products. A 6/10 is not a bad movie in my rating system (that’s fives and below), but it is decidedly meh. It’s fairly surprising that the IMDb average — usually a reliable metric of quality — currently stands at 8/10. That’s what I would give the original Guardians (perhaps even higher), the best Marvel film there is. Guardians 2 was about a 7, a good, solid movie (though it always irked me that Peter and Gamora switched positions, abruptly, on whether Peter should get to know his father, just to manufacture some cheap tension). Many viewers have praised the third installment, but I was not impressed — despite its lovable characters, good humor, and some genuinely emotional moments, something just felt off.

The first thing I noticed was that a couple of characters had lost their edge. Nebula seemed far less hostile and brooding than normal. Rocket was of course a child (in flashbacks) for most of the film, so he wasn’t sarcastic, nasty, or argumentative either, but he didn’t return to form in his adult scenes. I tried to let this slide, as the Guardians have become friends over time and in finding such a family have been able to let go of some bitterness. It makes some sense; they’ve grown. Still, part of what made the characters memorable and interesting was that they had dark sides, would bicker to the point of dysfunction, and so on. The happy family vibe takes some protagonists out of (original) character and is a bit dull. Thank goodness alternate-timeline Gamora was there to add back in some selfishness, conflict, spice.

It should be noted also that Groot felt somewhat absent. Sure, he was there, got his line in, but left no real impression in the way that Drax, Peter, Rocket, and others did. You’ll never forget Baby Groot dancing in Guardians 2, nor Groot sacrificing himself with a “We are Groot” at the end of the original film. Here he’s in the background, forgettable, forgotten. Was there even an emotional scene between him and Rocket, who’s on his deathbed? Aren’t they best friends and the OG pair?

To me, everything in this movie feels unnatural or forced. What would actually make sense is ignored in favor of achieving certain goals, whether plot or style goals (this mistake often turns sequels into ridiculous caricatures of original ideas). Consider, for instance:

  • Why are Peter’s mask and rocket boots erased from this tale? So he can be saved in space at the end?
  • Why are we jamming as many pop songs as humanly possible into this thing, even when it ruins emotional, dark moments? Because that’s what a GOTG movie must have, like a factory quota must be met? I kept thinking to myself that I was witnessing a formerly fresh, exciting world gone pure parody — Hey, earlier outings had tunes, jokes, bizarre creatures, let’s multiply all that by ten thousand, trust me, it’ll be ten thousand times better.
  • Why does Peter go home, Mantis go find herself, and Nebula want to lead a new society, all coming nearly out of nowhere at the end? Because the Guardians need to break up, it’s the last movie?
  • Why do we go to the goo planet? To not find what we need, so we can go to the next location, the Arthur planet. Gotta get the code, then the man who took the code. It’s a bit Mandalorian / Rise of Skywalker side questy, only not nearly as protracted. It’s as if we’re going to these places just to fill runtime or to simply see weird GOTG designs one by one like a parade or zoo. The meandering video game quest just isn’t compelling storytelling to me. There’s a way to take characters on adventures through many different worlds that feels natural (think of the original Star Wars or Lord of the Rings trilogies), where you’re not going from spot to spot because each one is a dead end or has a tiny clue that leads to the next destination. Real life involves such things at times, and it’s not as if all this should be off-limits for entertainment, but it often does feel contrived — forced and unnatural, the audience being jerked around and dragged along for two and a half hours, childish writing, location porn.
  • Why does Warlock feel so shoehorned into this film? He shows up briefly in the beginning, gets to do a little something at the end, and is mostly pointless and forgotten about in the middle, the majority of the story. He has so little purpose it almost feels like inserting him was a mere obligation after the tease at the end of Guardians 2, rather than an excited, thoughtful addition to the lore.
  • And of course you have the Bad Guy who’s a complete empty suit. A cackling, cartoonish Disney villain without any depth or room for us to sympathize — the things that made Thanos, Killmonger, and so on good antagonists. Here what’s forced is simply a bad guy in general. It’s part of the old, tired formula. How can you have a superhero movie without a baddie? I think this prescription, this dull necessity, leads to a lack of effort. The goodies have to have someone to fight, that’s all that really matters — why bother fleshing out a villain? The box is checked, move on.

And so forth. There is more that makes little sense (why is the final scene the Guardians charging off to kill wildlife when the climax of the film saw them valiantly saving wildlife?), but one gets the idea.

As a final, unrelated gripe, as creative as this world has been in many ways, this particular production felt like a strange mix of too-familiar IPs to me — a Power Rangers villain, Arthur, The Rats of NIMH, Willy Wonka, the monsters from Maze Runner, and GOTG / Marvel all put in a box and shaken as hard as you can.

For more from the author, subscribe and follow or read his books.

Is Altering Offensive Art Whitewashing?

Roald Dahl’s books — James and the Giant Peach, Charlie and the Chocolate Factory — were recently rewritten to excise terms like “fat” and “ugly” (“enormous” and “brute” are apparently more palatable). Ian Fleming’s James Bond novels got the same treatment for racism, as did Agatha Christie’s works. Disney has edited everything from Aladdin to Toy Story 2 to remove offensive content, with as much care as it devotes to wiping out LGBTQ stories from films in production and finished films streamed in the Middle East. Movies and shows for adults — The Office, The French Connection — have been altered. And while no one is picking up the paintbrush just yet, the names of old art pieces in museums are being revised as well.

These practices are not fully new, of course. Movies shown on television have long been edited for language, sexual content, length, and so on. The radio has traditionally muted vulgar lyrics. It wasn’t exactly the Left pushing for such things. (In general, conservatives and the religious have a long history, and present, of cancellations and censorship, from book bans to moral panics over films and music, but this piece aims to focus specifically on changes to previously published works.) But people of all stripes and times have participated. In 1988, The Story of Dr. Dolittle (1920) was scrubbed of racist elements long after the author’s death. Residents of past centuries did pick up tools and modify paintings and sculptures featuring nudity — even a Michelangelo or two. And so on. Yet the modern age has brought a new, perhaps unprecedented intensity to the alteration of past works of art. Because this push is driven by the Left, it is the responsibility of the leftist to consider its ramifications.

Publishers, studios, and streaming services want to offer people classic, beloved works, but recognize their racist, homophobic, sexist, nonconsensual elements are wrong. There is no doubt that the decision to act can stem from a sincere desire to address harm, but some institutions lack any real principle or spine, modifying art and then reversing course immediately after the inevitable backlash, racing in this direction to avoid one mob and then in the opposite direction to avoid another, whatever can be done to protect image and profits. Capitalism at work.

As for individuals, while the independent thinker will always find institutional overreactions, things that really weren’t that bad, she will likewise be unable to deny the horrific nature of some scenes and terminology in older media. That something should be done to curtail the impact of bigoted ideas and portrayals is right and reasonable.

Alteration is not the only option available, of course. New introductions, content warnings, serious discussions before or after a film, and so on have been and can be utilized, offering context and critique rather than cuts. Then there’s the nuclear option, which is a removal but one that preserves the work: no longer publishing texts (auf Wiedersehen, Dr. Seuss), removing a creation from your streaming platform, etc.

These may be comparatively beneficial — even the last one — because they avoid certain problems. Despite the noble motives behind changing past art, there is something a bit bothersome about it: doesn’t this make past artists out to be better people than they were? If Roald Dahl or Hugh Lofting employed harmful language or stereotypes, why would they deserve a more polished, progressive image for today’s readers and those of the long future? The awful caricatures of Native Americans, unabashedly called “injuns,” in Peter Pan (1953) should quite frankly be a mark upon Disney forever. What interest have I in making Walt Disney of all people, or his studio, or the film’s many directors and writers look better? This isn’t precisely the same as whitewashing. In history, or the present, whitewashing is intended to glorify individuals or events by ignoring crimes and horrors. The Founding Fathers need to be heroes, so their enslavement of human beings and vile racism can be downplayed and swept under the rug. Here the motive is entirely different: awfulness will be surgically removed so that bigoted ideas and behavior are better contained, an attempt to avoid infection of children and adults alike while still letting them enjoy beloved works. Nevertheless, the effect is rather similar. Sanitization may have a clear benefit, but it inherently creates ahistorical representations of past artists. They are positioned as fundamentally different, more moral people. This does not seem deserved, and it is troubling to voluntarily create any false view of history, whether of its cultural creations, its artists, or anything else. This may not be a big deal for those of us who know edits have been made — but children and future generations may not have such a firm understanding, resulting, to some degree, in a rosier view of authors, filmmakers, and studios of the 1950s and other decades.

One must further wrestle with the larger question. Is it right to change someone’s art without his consent? What if she wouldn’t want her piece altered? This provokes a couple answers. If it’s a work judged to be benign, everyone would be outraged at the suggestion of tinkering — don’t change Frank Capra’s It’s a Wonderful Life or Mr. Smith Goes to Washington, leave alone the works of Beverly Cleary, A.A. Milne, and Beatrix Potter! They may not approve and are not around to object; let their creations exist as they intended. (Studios and publishers have the legal right to tamper, of course, but that does not mean they should.) It’s easy to say that problematic art and artists have forfeited that right to preservation and respect of intention. “It’s racist, he’s racist, who gives a shit?” But as a writer, I’m horrified at the thought of someone changing my books or articles when I’m gone, even to make them better, less offensive, more moral. Beyond constructing a false view of who I was, it would be without my consent and against my strong-felt wishes and beliefs (verbalized here, with any luck forever). I imagine that most artists, whenever they lived, did not want other people meddling with their creations. There is too much obsession, care, and satisfaction involved in the creative process. So, if this is a treatment and right I want for myself — respect for my consent and control over my own material offspring — I have to extend it to others. No matter how innocent or flawed their pieces. Only those who would sincerely have no issue with a song, book, article, painting, film, sketch, photograph, or other work they made being changed a century from now in an attempt to purify it can support editing Dahl or Disney (one cannot say “that would never happen” or “there’d be nothing offensive to cut” because it is likely that few of the impacted creators of today could have imagined any of this happening to their work either). The rest of us must begrudgingly respect the consent of artists (though not the content of their art) or else fall into hypocrisy.

It seems worth adding that not only do our views on preservation and the artist’s consent shift between benign and offensive art (a questionable shift in itself), but the form of art also appears to matter. The idea of brushing over an offensive painting in a museum is far less comfortable, and still nearly unthinkable, compared to tinkering with entertainment and books. How about altering old photographs? Or imagine Spotify offering new versions of old, beloved songs and simply wiping out the originals. Surely one is a bit slower to defend such things. But why? Why would the form matter? Similar feelings have lurked in the back of my mind as this writing has progressed. Perhaps understandably, I find the alterations of books more troubling than films and shows. I also find tampering with films and shows for impressionable children less irksome than doing the same to entertainment for adults. Yet those distinctions and biases don’t seem to matter much. Art is art, no?

The other strategies noted above avoid all of these challenges completely. Artists may not deserve (in more than one sense) to have their work modified by others, but people have the right to discuss, condemn, or ignore art. All of that is to be expected. Content labels, new introductions, serious discussions, and cancellations are fair game. Of course, people will disagree over which works should be pulled from platforms or publication and which should be offered with commentary and criticism. I have little hope of solving that. The intention here is simply to highlight these possibilities as more acceptable choices, and to encourage some skepticism of changing past art of any form.

For more from the author, subscribe and follow or read his books.

Old Maids at the Close of a More Sexually Liberal America, 1780-1830

The genesis of this paper was rooted, like much historical work, in a question: what place did the “old maid” have in the early American republic, when a more sexually permissive culture was being wrestled under control? This was intriguing because the old maid, as an individual and as a concept, stood outside the realm of commonplace premarital sexual activity in urban areas. “Old maid” sequentially referenced a woman’s age and virginity. “Spinster,” used synonymously, derived from older unmarried women in a household spinning wool, the traditional domestic task of younger women and girls.[1] These labels marked women as both virginal and unmarried, and tended to be applied by the mid-twenties, or even as early as twenty.[2] This rhetorical othering accompanied the rather different life of the old maid. As historian Mary Beth Norton wrote, in the late eighteenth century “a white spinster’s lot was unenviable: single women usually resided as perpetual dependents in the homes of relatives, helping out with housework, nursing, and childcare in exchange for room and board. Even when a woman’s skills were sufficient to enable her to earn an independent living, her anomalous position in a society in which marriage was almost universal placed her near the bottom of the social scale.”[3] Single women were anomalies and publicly labeled as such, a dual burden.

Attitudes toward spinsters reflect societal developments and ideologies of gender, race, and more. Susan Matthews of the University of Roehampton, studying old maids in eighteenth-century Britain, “suggest[s] that there is a relationship between a culture’s attitude to fertility and its representation of single women as writers.”[4] As concern over overpopulation spread, Matthews found, old maids became a bit more tolerable. In her dissertation on “Old Maids and Reproductive Anxiety in U.S. Southern Fiction, 1923-1946,” Alison Arant argued that old maids in the twentieth-century South threatened, through their childlessness, the future of the white race and its culture.[5] English scholar Rita Kranidis has argued that in Victorian Britain, spinsters were an affront to the ideal of true womanhood.[6] To be a woman was to be a wife and mother. Old maids were thus regarded as unnecessary to society, cultural excesses that must, some argued, be redistributed to the empire’s colonies.[7] Similarly, this paper concerns how societal realities and ideologies of women’s nature impacted perspectives on spinsters, and how all these elements changed over time. The work argues that old maids were more tolerated in the last decades of eighteenth-century America due, in part, to a more sexually permissive culture. It further argues that the harsher social attitudes toward old maids that solidify as the U.S. approaches the 1830s can likewise be partially explained by a crackdown on sexual excess. As we will see, scholars have more or less agreed that spinsters were relatively tolerable in this earlier period and less so in the later, but this paper adds a layer of nuance, exploring an unconsidered factor and making our understanding more comprehensive. What follows, then, is a look at the old maid’s place in a time of changing social constructions of womanhood, from sexual beings to sexually reserved Victorians, from mothers of little national importance to mothers as critical moral guides to the helmsmen of the new nation.

We begin with sexual norms. In Sexual Revolution in Early America, historian Richard Godbeer reveals a more permissive era in the eighteenth century, as the American colonies diversified and Puritan influence weakened.[8] While church authorities and others continued to insist upon strict sexual rules, such as no sexual activity until marriage, many ordinary people and local governments left them behind. It was in “the middle of the eighteenth century that county courts ceased to prosecute married couples for having engaged in premarital sex.”[9] Sex during courtship or otherwise outside marriage grew more common. The number of pregnant brides, low in the 1600s, rose dramatically by the time of the American Revolution: 30-40% of brides were already with child in some towns.[10] Another scholar notes that 1701-1760 saw one in five first births out of wedlock; from 1761-1800 it was one in three.[11] Some women married the father after they became pregnant, but others did not, either due to choice, abandonment, or not knowing who the father was.[12] Parents of sexually active young women often allowed the dalliances to take place in their homes, as it was much better to know who the young man was so he could be held accountable for any offspring and pressured to move forward with marriage.[13] This is a different culture than many modern Americans expect to find — did not Puritan religiosity and Victorian propriety define the American past, one leading directly to the other? On the contrary, in between these two distinct historical eras were rather different practices and beliefs. According to historian Jack Larkin, at this time long periods of abstinence were thought to be hazardous to one’s health.[14]

Further, rather than these norms representing a fall from grace, a new post-Puritan culture of moral corruption, they were in fact a return, according to Godbeer, to “English popular tradition.”[15] Puritans left behind a more permissive sexual culture in Europe, but as immigration to the colonies continued and as Puritan control loosened over growing populations, such a culture developed in America as well. This is not to say that the Puritans were wholly well-behaved. Court records reveal instances of fornication or adultery, punished with fines or whippings, and sodomy, punished with whippings, brandings, or banishment.[16] As noted above, there were pregnancies outside of marriage. Historian Francis Bremer of Millersville University points out that Puritan colonists could be quite erotic, rather than prudish, and that “some people in early New England [were] censured by the church because they…deprived their married partner of sex.”[17] Nevertheless, it is clear that in the eighteenth century unmarried sexual behavior grew more common and societal rules around it grew less punitive. Godbeer suggests that the revolutionary spirit that emphasized independence and liberty further loosened Americans from the moorings of the church, parents, and so on.[18] This also had an effect on attitudes toward spinsters, as we will see. The beating heart of the Revolution, Philadelphia, played an interesting role in this story.

Philadelphia, the capital of the United States from 1790 to 1800, also seemed to be a hub of sexual activity. Puritans and parents could regulate sex more easily in small settlement towns where everyone knew everyone and the church had more power over policy. Urbanization changed that. Young men and women migrated alone to cities like Philadelphia to find work — they were living independently in the birthplace of Independence. “The sexual climate in Philadelphia was remarkable for its lack of restraint,” Godbeer writes. “Casual sex, unmarried relationships, and adulterous affairs were commonplace,” as was prostitution.[19] Gay and lesbian couplings have also been documented.[20] “Maids are become mistresses,” the Philadelphia diarist Elizabeth Drinker complained at the time.[21] In 2006, four years after Godbeer’s text, historian Clare Lyons produced Sex among the Rabble: An Intimate History of Gender and Power in the Age of Revolution, Philadelphia, 1730-1830, an even deeper look at the licentious city. Philadelphians experienced an “era when the independent sexuality of their women was left unpoliced and their community openly engaged in struggles over the patriarchal prerogatives of husbands, embodied in the actions of eloping wives, adulterous women, and women who established sexual liaisons outside marriage.”[22] There occurred “debates over the nature of female sexuality and the extent of female agency…”[23] According to Lyons, free love challenged the gender order (as well as racial and class hierarchies, as sex across racial and class lines occurred).[24] The backlash to this, driven by the upper class and elements of the emerging middle class, slowly unfolded from the 1780s to the 1830s, redefining true womanhood as characterized by chastity and limited sexual interest.[25]

Christian Europe and America had for many centuries considered women more lustful than men, more sinful by nature, as evidenced by Eve.[26] This changed during the eighteenth century — by its end, men were the ones with uncontrollable sexual appetites.[27] Women were transformed: American historian Nancy F. Cott called the “passionlessness” of women the “central tenet of Victorian sexual ideology.”[28] Women were, Samuel Worchester of Vermont wrote in 1809, “formed for exalted purity.”[29] A cultural and legal crackdown on loose lower- and middle-class Philadelphians accompanied the redefinition of woman at the end of the eighteenth century and beginning of the nineteenth.[30] For example, arrests and prosecutions of prostitutes increased, medical texts explained the “Morbid State of the Sexual Appetite” causing everything from vision loss to vertigo to death, children born out of wedlock took center stage in true crime literature, and public relief for mothers of illegitimate children was slashed.[31] Such regulation occurred elsewhere as well, such as in Massachusetts.[32] Jack Larkin points to the 1830s as when sexually restrictive, Victorian norms solidified in the United States as a whole.[33]

As with the Puritans, of course, one must be careful not to overlook the complexities of Victorians. While societal rules and ideologies grew more repressive in cities like New York, Americans were not passionless beings, and a subculture continued to enjoy non-conjugal sex, gay relationships, prostitution, and pornography.[34] It must be understood that different eras may have different ideologies, rules from the powerful, and patterns of behavior, but there are always those who do not abide by common expectations. Concerning Victorian virtue, historian Carl Degler differentiates between “What Ought to Be and What Was.”[35] Most relevant to this work, however, are the major doctrines and norms: for instance, the reframing of premarital sex in Philadelphia’s newspapers, magazines, and pamphlets as prostitution, and the Americans who adopted such views.[36] Such changing norms may have had a significant effect, though other causal factors are possible: from a peak in the Revolutionary period, premarital pregnancies fell steadily from about 30% before 1800, as noted, to about 10% after 1850.[37]

Of course, historians have lifted up factors other than sexual excess to explain the reconstruction of women’s nature and place in this era. Women were not just made chaste, after all; they were made content and dutiful in the home. Sex was to be for a husband alone, and its result, children, were to be women’s central concern in life. Rosemarie Zagarri, in Revolutionary Backlash: Women and Politics in the Early American Republic, offers as a causal factor women’s increased involvement in politics during this age (which built on women’s leap into political activity — boycotts, protests, writings, debates — that began during the American Revolution, charted by Mary Beth Norton in Liberty’s Daughters: The Revolutionary Experience of American Women, 1750-1800).[38] With women now engaging in party organizing and advocacy, speaking enthusiastically of “women’s rights” after the 1792 publication of Mary Wollstonecraft’s A Vindication of the Rights of Woman, and even voting in New Jersey, the gender hierarchy was under further threat.[39] By the 1830s, Zagarri argues, a backlash successfully drove women away from the parties and the ballot box (though women’s activism continued in other ways), fueled by a demand that arose in the 1780s: that women instead serve as “republican wives” and “republican mothers” at home, giving moral guidance to husbands and sons, the men on whom the new nation’s success was thought to depend.[40]

The redefinition of true womanhood should be seen, then, as a development that served more than one function in the early republic era. Sexual permissiveness and women’s political involvement alike were deemed damaging to society and, Lyons and Zagarri argue, its gender system. Thus, the new woman was not just sexually reserved and concerned with purity far more than pleasure, but also a wife and mother who avoided politics. How well these concepts fit together: an emphasis on housewifery rejected sexual freedom, and demands for chastity drove women toward married life. Sex was for husbands and wives.

In such a climate, what was the social attitude toward unmarried, virginal women? The old maid’s place is interesting. She stands at the intersection of changing sexual norms and changing familial ideologies. On one axis, the old maid was out of place in a more sexually permissive age (or at least aligned with church authorities and the most pious Americans rather than the cultural trend and her sexually active, unmarried peers), but then fit rather better under the more restrictive regime that followed, as she already followed the calls for chastity, willingly or not. On the other axis, the old maid may have been more tolerated before the onset of the demand for republican wives and mothers, after which she became antithetical to the perceived needs of the young nation. Before the Revolution, marriage and motherhood were of course central to women’s lives, but they were not of any importance to the larger society, to politics and economics and national success.[41] What did it really matter if a woman remained single for life? All of that changed with the call for republican motherhood — marriage and childbearing were now critical, patriotic. The spinster was both out of line and in line before 1780, and then, in different ways, out of line and in line after 1830. In such a complex and changing world, how did Americans speak of old maids? By “Americans” here is meant literate, generally white Americans in urban areas, per the available evidence.

It is reasonable to predict that spinsters would be more and more castigated the more strongly republican motherhood took hold of the United States. (And they would not be alone — in her dissertation, advised by Clare Lyons, Kelly Ryan argues that bachelors were seen as deviant and selfish, betraying republican virtue and the common good by not taking wives.[42]) Chastity was increasingly stressed, but it was not supposed to last long with marriage and childrearing on the urgent agenda. But it is not such a given that spinsters would be more tolerated in the eighteenth century. One could hypothesize that old maids would be looked upon with greater, or similar, scorn in a permissive period. If it was more common for unmarried American women to be sexually active, would old maids be considered odd, or even failures, due to the inability to find a lover (rather than strictly the inability to find a husband and have children)? Prudish and old-fashioned for aligning with church authority in a time of liberty and independence? Or would there instead be more sympathy for spinsters, for the lonely in a time of free love? What of the health concerns? Virgin women in their late teens and early twenties with chlorosis, in reality caused by an iron deficiency, were thought to be ill due to lack of sexual intercourse.[43] Precise motivations behind sentiments cannot always be known, but the sentiments themselves can be revealing.

Let us consider how residents and the press of Philadelphia, New York, Boston, and other cities spoke of and represented old maids during the last decades of the 1700s. First, a look at expressions of undesirability.[44] In 1765, a Boston paper featured a woman who “would choose rather to be an Old Maid, than that the operation of the Stamp Act should commence in the colonies,” which frames spinsterhood as the lesser of two evils but an evil nonetheless.[45] “I often Run over in my mind, the many Disadvantages that Accrues to our Sex from an Alliance with another,” a New Yorker said in 1762, yet “the thought of being Domed to live alone I Cant yet Reconcile… [T]he Appellation of old Made…I don’t believe one of our Sex wou’d voluntarily Bare.”[46] A forty-nine-year-old Massachusetts woman in 1787 was deeply depressed, her home “dark and lonesome”; she “walked the rooms and cryed myself Sick.”[47] Dying an old maid was especially unfavorable, according to a New York paper in 1791.[48] Marrying an old maid was not always desirable, either. “An old ALMANAC-MAKER” wrote of the heavens in a 1793 National Gazette (Philadelphia) piece, personifying the moon and asking “Whether she be a maid? (if so, she must be a very old one indeed, and I’ll have no thing to do with her)…”[49] Some suspicion existed in 1796 Boston toward “old Maids and Bachelors, who alone, are opposed to Matrimony,” harboring “prejudices” against it.[50]

Yet while the old maid was disadvantaged, lonely, out of step, and perhaps not an ideal partner in some men’s eyes, she was not the object of disgust and vilification seen later. Further, there are in fact positive connotations applied to spinsterhood, as well as sincere extensions of sympathies. In 1792, the National Gazette reprinted a plan published in Ireland for a college for old maids. “It may at once amuse the curious,” the Gazette commented in a short introduction, “and afford a hint to the benevolent on this side of the Atlantic to attempt something upon a similar idea.”[51] The paper clearly favored the notion; its republication is significant, for the Irish writing expressed deep sympathy for unmarried women: “solitary seclusion is never the object of our voluntary choice… we require the mutual aid of each other. How deplorable then is the condition of an OLD MAID!” It presents the spinster as “stripped” of her relatives and friends; she “pines in solitude,” “cheerless” with no children underfoot, “denied the pleasures of society,” an “evil” state of affairs. Death “advances to her relief.” But a college would “relieve the miseries” and bring women into a “sisterhood” of great “comfort.” Here old maids are worthy of empathy and aid, not scorn.

Take a similar example. In matters of finding a spouse, money could impact desirability, one writer asserted. “Let an old maid, nine winters past the corner…come into the possession of a fortune: Though she was before neglected, and passed by with contempt; she all at once becomes the bon ton [fashionable, desirable].”[52] Suddenly “her youth is renewed — the wrinkles are all fled, and she is surrounded” by interested men. This was part of a critique of the harmful effects of the love of money — men would do anything for wealth, even court an old maid. Her suitors discover “beauties, which would never have had an existence, had she remained in her former indigent circumstances.” But “the world should be ashamed, that it can discover no merit but what is annexed to money.” Here is a small defense of spinsters, an implication that they have merits even if they do not come into riches when a relative passes away.

Stronger still, in Milcah Martha Moore’s commonplace book, a semi-private collection of women’s writings assembled during the American Revolution and later converted into a classroom text, poet Hannah Griffiths of Philadelphia defended her spinsterhood.[53] She was unbothered by the “Sneers thrown on the single Life.” A poem of hers read: “The Men, (as a Friend) I prefer, I esteem / And love them as well as I ought / But to fix all my Happiness, solely on Him / Was never my Wish or my Thought.” Vermont and Philadelphia papers ran a short verse in 1799 called “OLD MAIDS OF WINTER”: “But earlier happy is the rose distill’d / Than that, which withering on the virgin thorn, / Grows, lives and dies in single blessedness!”[54] This could be interpreted in different ways. “Single blessedness” may refer merely to the unmarried state, rather than stressing that singlehood is a blessing. In other words, it would be better to get married instead of dying alone. A rose will wither and die on its stem, but if it is chosen and plucked and distilled it will be “happy.” The use of the flower is rather sexual. But as we have seen, a woman need not be married to engage in intercourse in this era. We could just as easily interpret the work to mean one should have sex rather than remain a virgin until old age — intimacy as the key to happiness, not necessarily marital intimacy. The use of “virgin” could be seen as evidence of a focus on sex, rather than marriage. Though again, these still often went together for many Americans, so it is difficult to say for certain. (The use of “earlier” is also intriguing. Some roses will be wanted and plucked; they will be happier earlier. Does this not imply that the roses that are not, those left on the stem, will be happy at some point? If they are happy as “old maids of winter,” perhaps it is not because they wed or had sex, but because they came to peace with single life.)

As Mary Beth Norton shows, the 1780s and 1790s saw women speaking of “the honourable appellation of old maid,” a situation of “great dignity.”[55] “It is not marriage or celibacy [that] gives merit or demerit to a person,” Anne Emlen wrote.[56] Unmarried women were “as well of[f]” as wives; some “young ladies are…very willing to be old maids” if “worthy” men were nowhere to be found.[57] Elizabeth Parker felt a bond with other spinsters, disappointed at “one of the sisterhood’s falling off” (getting married).[58] A girl from Maine said, “I do not esteem marriage absolutely essential to happiness… [W]hich is the most despicable — she who marries a man she scarcely thinks well of — to avoid the reputation of an old maid — or she, who with more delicacy, than marry one she could not highly esteem, preferred to live single all her life?”[59] Of course, old maidism was not always about rejecting undesirable men; for some women it was about having no sexual interest in men at all. Some gay women refused to marry, despite any social disadvantages, instead enjoying flings, long-term relationships, and cohabitation with other women.[60] Asexuality is also part of the human condition and cannot be discounted.

A powerful declaration of independence from this age was “Lines Written by a Lady, who was questioned respecting her inclination to marry,” published anonymously in Massachusetts Magazine in 1794. English scholar Paul Lewis suspects it was written by Judith Sargent Murray, author of the 1790 “On the Equality of the Sexes,” which defended women’s intelligence and called for more educational opportunities.[61] He calls “Lines Written by a Lady” possibly “the most joyfully and radically feminist work published in an American magazine during the early national period.” The astounding poem read:

With an heart light as cork, and mind free as air
Unshackled I’ll live, and I’ll die, I declare;
No ties shall perplex me, no fetters shall bind,
That innocent freedom that dwells in my mind.
At liberty’s spring, such draughts I’ve imbibed,
That I hate all the doctrines by wedlock prescribed.
Its law of obedience could never suit me,
My spirit’s too lofty, my thoughts are too free.
Like an haughty republic my heart with disdain
Views the edicts of Hymen, and laughs at his chain,
Abhors his tyrannical systems and modes,
His bastiles, his shackles, his maxims, and codes,
Inquires why women consent to be tools
And calmly conform to such rigorous rules;
Inquires in vain, for no reasons appear
Why matrons should live in subjection and fear.
But round freedom’s fair standard I’ve rallied and paid
A vow of allegiance to die an old maid.
Long live the Republic of freedom and ease,
May its subjects live happy and do as they please.[62]

Here a powerless, miserable marriage is deemed far worse than spinsterhood. Interestingly, one of Paul Lewis’ students discovered a 1798 poem in a Boston newspaper, the Independent Chronicle and the Universal Advertiser, that echoed and even directly quoted “Lines Written by a Lady.”[63] This later piece was penned under the pseudonym “Betty Broadface.” It is entitled: “Occasioned by reading a piece in the Chronicle, written by a disappointed Old Bachelor” — in other words, it is a response to a previously printed poem in the Chronicle that castigated wives and marriage from a man’s perspective. The response read:

The greatest of evils (you say) is a wife,
That happens to man in the course of his life!
Yet, for a woman to wish for a Husband, tis plain,
Is wishing for something as foolish as vain!
A husband! oh, think of setting up late,
While at tavern, he’s gaming away your estate!
In getting a husband, how much do you gain?
Why, a husband and children perhaps to maintain.
A husband! consider tyrannical rule.
A husband! don’t get one, unless you’re a fool.
A husband! (oh think what a life of delight)
All day in a passion, in liquor all night;
All husbands I do not thus charge with disgrace,
But you know my good reader, ’tis often the case,
There a’nt (we can prove it by tracing their lives)
Not one honest husband, to two honest wives.
There’s such a great chance, such a risk to be run,
So few that succeed, and so many undone;
Round the standard of freedom, I’ve rallied and paid
A vow of allegiance, to die an old maid!
Ye girls for the future like me be resolv’d,
Let all your connections with men be dissolv’d!
Tho’ the crying of children, perhaps now appears
As charming as music, to delicate ears,
This music you’d find, would be soon out of tone,
And you’d sigh for the time, when you once slept alone.[64]

Old maids also carried connotations of wisdom. The Connecticut Courant in 1795 referenced the “nine old maids,” the muses consulted in ancient poetry.[65] This was reprinted in Philadelphia’s Gazette of the United States and Daily Evening Advertiser. New York’s Gazette of the United States mentioned the nine old maids and their prophetic dance as well.[66]

Many mentions of old maids have no negative or positive connotations.[67] The term was often used as a simple descriptor, as one would call a man a “farmer” or “doctor,” but this is notable — spinsterhood defined one’s entire identity. In any case, though there were “sneers” and “contempt,” it is clear that a certain degree of tolerance existed at the end of the eighteenth century. Not only were single women speaking up in their own defense, but men were publishing such writings in their papers, not only to entertain readers but to express some sympathies as well. As we turn to sources after 1800, there is still some empathy for spinsters, especially from women,[68] but other expressions grow harsher in tone. Remember, there is no hard line between the more sexually liberal age and the more restrictive Victorian period, just as there is no clear demarcation between the times of unimportant, traditional motherhood and crucial, republican motherhood. While 1800 serves as a convenient marker, the ideological changes began before that date and slowly evolved until coming to dominance in the 1830s.

Without treatment, a girl with reddened skin in the year 1800 would be undesirable, and experience the “remorses and miseries of a despised old maid.”[69] In 1815, old maids were “withr’d.”[70] They could grow “ugly and ill-natured,” complaining of hard times, circumstances that made potential husbands more difficult to find and remaining with “her father, mother, uncle, or aunt” more appealing.[71] Women who rejected suitors were “scornful” and “cold,” having only themselves to blame for singlehood.[72] In the 1830s, an “old maid” of the Winnebago was described by Caleb Atwater, a white politician and historian, as a “miserable human being,” “snarling, hissing.”[73] Her unpleasant character was tied to her lack of interest from men: “the only distinguishing mark of attention she had ever received from any man, was a smart blow, with a flat hand, on her right ear!” A New York paper wrote of “a little withered old maid residing at the village of Aldbury, with cold, unwinning manners, and grey, dark eyes, in which sadness and suspicion seem ever striving for mastery.”[74] One old maid was described as “snuffy,” meaning contemptuous — castigated for her abolitionism, which was tied to her singlehood (she “supposes a strapping runaway negro rascal a very Adonis”) and possibly for her sexuality, which would also relate to her unmarried status (“she is a great he-woman, who wears breeches under her petticoats”).[75] In 1838, a writer compared New York’s winter months to “wretched spinsters over the age of twenty.”[76] The next year, the same paper wrote of “senseless, heartless, shrivelled old maids” in expensive boarding schools.[77] The attitudes did not appear in white papers alone. The Cherokee Phoenix and Indians’ Advocate, a paper from New Echota, Georgia (a capital of the Cherokee), reprinted a piece from a Scottish journal in 1829 stating that “would-be-young old maid[s]” could be “monster[s],” smooth-tongued and on the surface gentle but in reality “the most peevish, hypocritical, greedy, selfish, and tyrannical being in existence.”[78] She is all “stings” under a “coat of honey,” doing “more mischief, in her own officious, sneaking, underhand way than a hundred bold down-right murderers, who kill their men, and are hanged for it.” American society, it seems, was turning against old maids.

What afforded more tolerant views of spinsters in the last decades of the eighteenth century? Historians have offered persuasive theories, pointing to several developments capable of reshaping ideologies. Mary Beth Norton argued that a questioning of marriage and more favorable attitudes towards old maids were driven by the struggle for national independence. All the talk of freedom and change seeped into the foundations of culture.[79] Note, as Norton did, the language of the Revolution in “Lines Written by a Lady” above.[80] But demographics also have causal power. “By the late 1700s,” sociologist Laura Carpenter writes, “men in America no longer outnumbered women, as they had in the early colonial period, making it increasingly difficult for women to marry.”[81] With fewer possibilities of marriage, spinsterhood would last longer and more women would experience it. We would expect this to ease social attitudes towards old maids — what is more common is far less mockable. Norton engages with this demographic change, writing that women came to outnumber men in parts of New England by 1790, which “in part” helps explain more positivity toward old maidism, but argues that revolutionary ideology must be considered a significant factor, given that such positivity existed in areas of the U.S. with a more even sex ratio.[82] It should be noted that scholars have determined that in other periods of U.S. history, such as the twentieth century, views of old maids grew harsher as their numbers decreased — the trend running in the opposite direction from the early republic era, but following the same underlying pattern.[83] There is an inverse relationship between numbers and negativity.

But what the field has not yet considered is the role of sexual excess — how it could impact social attitudes toward the spinster. Before elaborating, note again that “old maid” was both a comment on sex and a comment on marriage — here is a virginal, unmarried woman — but their interconnectedness could be broken. For instance, a woman could, from one perspective, cease to be an old maid upon becoming sexually active, no marriage required (likewise, she could, from one perspective, remain an old maid between the wedding and consummation). Just bear in mind that there were two senses to the label “old maid.”

In a more sexually permissive age, this paper argues, the celibate was not such a reviled oddity because she had the potential, at any time, to abandon her maiden state. Being an old maid, in the sexual rather than matrimonial sense, was therefore more a matter of personal choice than of personal failure. Sex and marriage were, for a century or so, pulled somewhat apart. If a woman was unmarried, it could not be safely assumed that she was in fact a maid — many unmarried women were having sex. “Maids” had “become mistresses.” A writer in 1800 declared that “those who marry will have husbands, and those who marry not, by Fate’s unalterable decrees, must live old maids, or else no maids at all.”[84] Despite the mention of fate, the last thought highlights women’s choice in this period — to be unmarried and celibate or unmarried and sexually active. “Celibacy,” after all, as we saw above, did not give “merit or demerit to a person,” so many chose to abandon it. As for those who were old maids (and as for the old maid as a concept in the American imagination), they were unmarried and virginal, but the latter could be addressed so easily, and often was, that “old maid” as a degradation held little power. You could still mock someone for being unmarried and thus undesirable, but such a barb would not have as much sting if marriage was not a prerequisite for love and sexual pleasure. Observers simply did not know who was or was not an old maid in the sexual sense, only in the marital sense, and that did not carry much weight — an unmarried woman could be greatly desired and acting upon it. A sex life was private, not publicized by marital status. But when the concepts of sex and marriage were pushed back together, when it was more understood that singleness and chastity went hand-in-hand, there was a stronger foundation for denigration — to be unmarried was more safely assumed to be virginal, to be wholly undesired and defective, to be alone and miserable. Contempt for spinsters suddenly made more sense.

Interestingly, an examination of sources from the Library of Congress digital archive suggests that definitional or redundant elements grew substantially more prevalent in the early nineteenth century. Like the reminder in “OLD MAIDS OF WINTER” (1799) that old maids were “virgin[s],” later publications were more likely to draw attention to meaning. In a Philadelphia paper in 1800, “old maids” were “antiquated desponding virgins.”[85] The old maid, an 1833 book noted, was a “virgin charmer.”[86] The Madisonian, printed in Washington, D.C., made sure to mark a “spinster” as a “maiden” in 1837.[87] The Morning Herald of New York did the same.[88] A few months later, the Herald included a true redundancy: “old maiden spinsters.”[89] In 1838, a “rigid featured old maid” and a friend in the same predicament were emphasized as “chaste.”[90] One writer, “tired of celibacy,” was included among the “bachelors and spinsters.”[91] A new stress on explicit definition may evidence conceptual change — abstinence and singlehood being drawn closer together.

Of course, the increasing disdain for unmarried women was, like the prior tolerance, a product of multiple factors. As Zagarri argued, one was the need to drive women away from politics; the call for “republican motherhood” put spinsters at odds with societal needs and norms. Demographic change, however, was not likely a factor in the increasing contempt, for it continued the prior trend. Many counties in New England had female-heavy or even sex ratios from the 1820s and ’30s through the rest of the century.[92] White women’s average age of first marriage rose from 1800 onward (per available data; the trend likely began before this).[93] Demographics again made space for increasing positivity toward old maids, but such conditions were counteracted by powerful cultural forces, to which Zagarri’s work and this paper have drawn attention.

The crackdown on sexual excess repositioned the old maid and opened the door to harsher criticism. Once shielded by the culturally condoned ability to make love, a disassociation between marriage and sex, the unmarried woman was now assumed to be virginal and unwanted. She was thus a failure in two ways. The old maid was not only failing to carry out her social duty to become a wife and mother; she was also marked as undesirable, a failure of personality, character, appearance, and so on, due to the increasingly sexually restrictive world around her. This world lifted up the virgin, but there were limits — this could not continue when she was in her late twenties and thirties, when she was violating true womanhood and patriotism by failing to find a husband and have children, when society found it harder to imagine she would have sex, due to her new lustless nature and society’s new rules, and find fulfillment and love outside marriage. Recall the fact, cited earlier, that premarital pregnancies declined from before 1800 to mid-century, which may evidence less premarital sex as a result of Victorian ideology and norms (though other possible factors, such as increased contraceptive use, must be considered as well).

The factors behind tolerance for celibate or single women in a given human society may be too diverse to allow for any broader theory. In American society over the span of several decades alone we have a sexually permissive culture, demographic shifts, and revolutionary ideology at play. The idea that sexually liberal societies tend to have higher tolerance for celibate women cannot yet be asserted with confidence, nor can the corollary that more restrictive societies tend to disdain them, despite a strong start to cross-cultural analyses of celibates in texts such as Celibacy, Culture, and Society: The Anthropology of Sexual Abstinence (editors Elisa Janine Sobo and Sandra Bell).[94] It remains plausible that sexually conservative cultures without a powerful emphasis on motherhood, for instance, would glorify the older, unwed, virginal woman. In medieval Christian Europe, chaste marriages and lifelong virginity were celebrated, as they signaled true purity and the deepest commitment to God.[95] Yet the eighteenth century may not be the only period in the American story where tolerance for celibate women and a sexually free culture went hand-in-hand. In the modern U.S., where as much as 95% of the population has sex before marriage, there is increasing recognition of celibacy as a sexual orientation.[96] Though some argue the “cat lady” has replaced the “old maid” and “spinster,” tolerance for and understanding of asexual individuals (not all of whom are virginal) is found in many corners.[97] While no one would argue that mockery of older virgins has disappeared, the increasing acceptance of “aces” should be seen as undermining the power of denigration. As in the eighteenth century, it should not be posited that a more sexually open society is the only factor that brought this about, but it is likely a contributing one. At the least, it is further evidence that less restrictive cultures and greater acceptance of celibates are not incompatible.

Overall, this paper sought to explore how changing societal realities and views of women’s nature affected attitudes toward old maids. Other scholars have considered this in the context of other nations, American regions, and eras; historians like Norton have observed the phenomenon in the setting and time considered here. Like Norton’s acknowledgement that an unbalanced sex ratio played a role in more tolerant views of old maids, this argument is vulnerable to criticism for being too correlative or speculative. Demographic change and perspective change may occur at the same time, but it is difficult to link them with primary sources; changes in the sexual culture and changes in perspective may likewise occur simultaneously, with causal bonds challenging to show. This thesis may be uncomfortably theoretical, and could benefit from future documentary discoveries, but, when laid out in its entirety, has a rational foundation and explanatory value.

In the early American republic, sexual excess had to be brought under control. Woman’s nature had to be redefined as devoid of lust. Marriage and family had to be made paramount — only within such confines should sex be experienced. Through this, old maids went from more tolerable to more despised. The unplucked rose violated and challenged the ideals of true womanhood that centered republican wives and mothers, but was also no longer protected by a brief disassociation between singleness and sexlessness. In looser times, the old maid may not have been a maid at all. She could be secretly desired by and involved with suitors; she could shed her virginal state at any time; marriage was no requirement for love. That was the common understanding. There was less fodder for castigation; a house of mockery would have to be built on sand. This ensured a relative tolerance, with other factors like fewer men and ideals of liberty at work as well. In the more restrictive, Victorian era, the old maid was more safely presumed to be a maid. We see this in the emphasis on definition in the historical record — possibly supported by lower rates of premarital pregnancy. Because she was unmarried, the old maid was unpleasured and unwanted, and everyone knew it — a metaphorical, strangely reversed scarlet letter. Singleness and sexlessness were sewn together, a marriage into which the judgmental could sink their teeth.

For more from the author, subscribe and follow or read his books.


[1] Jack Larkin, The Reshaping of Everyday Life in the United States: 1790-1840 (New York: Harper Perennial, 1989), 26.

[2] Amy Froide, “Spinster, Old Maid, or Self-Partnered — Why Words for Single Women Have Changed Over Time,” UMBC Magazine, December 2, 2019, https://umbc.edu/stories/spinster-old-maid-or-self-partnered-why-words-for-single-women-have-changed-through-time/.

  Joseph Pickering, Emigration, or No Emigration (London: Longman, Rees, Orme, Brown, and Green, 1830), 29. Retrieved from https://www.loc.gov/resource/lhbtn.13760/?st=pdf&pdfPage=29.

[3] Mary Beth Norton, Liberty’s Daughters: The Revolutionary Experience of American Women, 1750-1800 (Ithaca, New York: Cornell University Press, 1996), 42.

[4] Susan Matthews, “Productivity, Fertility, and the Romantic ‘Old Maid,’” Romanticism 25, no. 3 (2019): 225-236. Retrieved from https://www.researchgate.net/publication/336190039_Productivity_Fertility_and_the_Romantic_’Old_Maid’.

[5] Alison Arant, “‘That Rotten Richness’: Old Maids and Reproductive Anxiety in U.S. Southern Fiction, 1923-1946,” doctoral dissertation, University of South Carolina, 2012. Retrieved from https://scholarcommons.sc.edu/etd/1044/.

[6] Rita Kranidis, The Victorian Spinster and Colonial Emigration: Contested Subjects (New York: St. Martin’s Press, 1999).

[7] Ibid.

[8] Richard Godbeer, Sexual Revolution in Early America (Baltimore: Johns Hopkins University Press, 2002), 228-229.

[9] Godbeer, Revolution, 228.

[10] Ibid. See also Larkin, Reshaping, and “Historian: Early Americans Led Lusty Sex Lives,” UPI, August 29, 1988, https://www.upi.com/Archives/1988/08/29/Historian-Early-Americans-led-lusty-sex-lives/7614588830400/.

[11] Laura Carpenter, Virginity Lost: An Intimate Portrait of First Sexual Experiences (New York: NYU Press, 2005), 22.

[12] Godbeer, Revolution, 316.

[13] Ibid.

[14] “Early Americans,” UPI.

[15] Godbeer, Revolution, 229.

[16] Lisa Lauria, “Sexual Misconduct in Plymouth Colony,” The Plymouth Colony Archive Project, 1998, http://www.histarch.illinois.edu/plymouth/Lauria1.html#VII.

[17] Madeline Bilis, “Debunking the Myth Surrounding Puritans and Sex,” Boston Magazine, October 18, 2016, https://www.bostonmagazine.com/arts-entertainment/2016/10/18/puritans-and-sex-myth/.

[18] Godbeer, Revolution, 300, 334.

[19] Ibid., 300.

[20] Ibid., 271, and Rachel Hope Cleves, “Same-Sex Love among Early American Women,” Oxford Research Encyclopedia of American History, July 2018. Accessed March 8, 2023 from https://oxfordre.com/americanhistory/view/10.1093/acrefore/9780199329175.001.0001/acrefore-9780199329175-e-498.

[21] Godbeer, Revolution, 300.

[22] Clare A. Lyons, Sex Among the Rabble: An Intimate History of Gender and Power in the Age of Revolution, Philadelphia, 1730-1830 (Chapel Hill: Omohundro Institute and University of North Carolina Press, 2006), 393.

[23] Lyons, Sex, 393.

[24] Ibid., 309.

[25] Ibid., 309-310, 394. See also Kelly A. Ryan, “Making Chaste Citizens: Sexual Regulation and Reputation in the Early Republic,” Regulating Passion: Sexuality and Patriarchal Rule in Massachusetts, 1700–1830 (Oxford: Oxford University Press, 2014).

[26] Carol F. Karlsen, The Devil in the Shape of a Woman: Witchcraft in Colonial New England (New York: W.W. Norton & Company, 1998). See chapter 5, especially pages 153-162.

[27] Godbeer, Revolution, 266. See also Lyons, Sex, 393-394.

[28] Nancy F. Cott, “Passionlessness: An Interpretation of Victorian Sexual Ideology, 1790-1850,” Signs 4, no. 2 (1978): 220. http://www.jstor.org/stable/3173022.

[29] Ibid., 228.

[30] Lyons, Sex, 310.

[31] Ibid., 336-341, 352, 369, 385-388.

[32] Ryan, Regulating, chapter 6.

[33] “Early Americans,” UPI. See also Larkin, Reshaping.

[34] Carroll Smith-Rosenberg, “Sex as Symbol in Victorian America,” Prospects 5 (October 1980): 51-70. Retrieved from https://www.cambridge.org/core/journals/prospects/article/abs/sex-as-symbol-in-victorian-america/A2E807BC9DFEFC09CAD2B938EFE2337F.

[35] Carl N. Degler, “What Ought To Be and What Was: Women’s Sexuality in the Nineteenth Century,” The American Historical Review 79, no. 5 (1974): 1467–90. https://doi.org/10.2307/1851777.

[36] Lyons, Sex, 312 and chapter 6.

[37] Daniel Scott Smith and Michael S. Hindus, “Premarital Pregnancy in America 1640-1971: An Overview and Interpretation,” The Journal of Interdisciplinary History 5, no. 4 (1975): 538. https://doi.org/10.2307/202859.

[38] Norton, Daughters, and Rosemarie Zagarri, Revolutionary Backlash: Women and Politics in the Early American Republic (Philadelphia: University of Pennsylvania Press, 2007).

[39] Zagarri, Backlash, 2-9.

[40] Ibid.

[41] Norton, Daughters, 297.

[42] Kelly A. Ryan, “Regulating Passion: Sexual Behavior and Citizenship in Massachusetts, 1740-1820,” doctoral dissertation, University of Maryland, 2006. Retrieved from https://drum.lib.umd.edu/bitstream/handle/1903/4122/umi-umd-3913.pdf?sequence=1&isAllowed=y. See page 275.

[43] Lyons, Sex, 158.

[44] See also “From the Columbian Centinel,” “THE EXTRACT,” Gazette of the United States (Philadelphia, PA), April 28, 1796. Retrieved from https://www.loc.gov/resource/sn84026273/1796-04-28/ed-1/?sp=2&st=pdf. Notice the reference to a fortune teller using dark terms with an old maid — the future is not bright.

[45] Ryan, dissertation, 231.

[46] Norton, Daughters, 41.

[47] Ibid., 42.

[48] “FROM THE GENERAL ADVERTISER,” Gazette of the United States (New York, NY), January 22, 1791. Retrieved from https://www.loc.gov/resource/sn83030483/1790-01-22/ed-1/?st=pdf. Observe the language: “women must die old maids.”

[49] A. O. A. M., “For the NATIONAL GAZETTE,” National Gazette (Philadelphia, PA), August 21, 1793. Retrieved from https://www.loc.gov/resource/sn83025887/1793-08-21/ed-1/?st=pdf.

[50] Ryan, dissertation, 274.

[51] “[THE following plan for establishing a college for old Maids…],” National Gazette (Philadelphia, PA), October 3, 1792. Retrieved from https://www.loc.gov/resource/sn83025887/1792-10-03/ed-1/?sp=4&st=pdf.

[52] “The Corporal, No. V,” Gazette of the United States and Philadelphia Daily Advertiser (Philadelphia, PA), December 5, 1798. Retrieved from https://www.loc.gov/resource/sn83025881/1798-12-05/ed-1/?sp=2&st=pdf.

[53] Karin A. Wulf and Catherine La Courreye Blecki, eds., Milcah Martha Moore’s Book: A Commonplace Book from Revolutionary America (University Park: Penn State University Press, 1997), 95-96.

[54] “OLD MAIDS OF WINTER,” Gazette of the United States and Philadelphia Daily Advertiser (Philadelphia, PA), February 13, 1799. Retrieved from https://www.loc.gov/resource/sn83025881/1799-02-13/ed-1/?sp=2&st=pdf.

[55] Norton, Daughters, 240.

[56] Ibid.

[57] Ibid., 241.

[58] Ibid.

[59] Ibid., 241-242.

[60] Cleves, “Same-Sex Love.”

[61] Paul Lewis, “‘Lines Written by a Lady’: Judith Sargent Murray and a Mystery of Feminist Authorship,” The New England Quarterly 92, no. 4 (2019): 615–632. Retrieved from https://www.jstor.org/stable/26858283.

[62] Ibid., 617-618.

[63] Paul Lewis, “The Brief Career of ‘Betty Broadface’ Defender of ‘Old Maids,’” Early American Literature 57, no. 1 (2022): 221-235. Retrieved from https://muse.jhu.edu/article/846527/pdf.

[64] Ibid., 224.

[65] “To All Christian People,” Gazette of the United States and Daily Evening Advertiser (Philadelphia, PA), January 13, 1795. Retrieved from https://www.loc.gov/resource/sn84026271/1795-01-13/ed-1/?sp=2&st=pdf.

[66] Simon Searcher, “THE STUDENT — NO. I,” Gazette of the United States (New York, NY), December 9, 1790. Retrieved from https://www.loc.gov/resource/sn83030483/1790-12-09/ed-1/?sp=4&st=pdf.

[67] See for instance “THE DISH OF TEA,” National Gazette (Philadelphia, PA), July 7, 1792. Retrieved from https://www.loc.gov/resource/sn83025887/1792-07-07/ed-1/?sp=4&st=pdf.

[68] For instance, to Anne Royall in 1826, old maids were “odd” but also “very coy and very sensible.” See Anne Royall, Sketches of History, Life, and Manners in the United States (New Haven: Young Ladies Academy at the Convent of the Visitation in Georgetown, 1826). Retrieved from https://www.loc.gov/resource/lhbtn.18960/?st=pdf&pdfPage=157.

[69] Solomon Simple, “The Moral Dispensary,” Gazette of the United States and Daily Advertiser (Philadelphia, PA), July 1, 1800. Retrieved from https://www.loc.gov/resource/sn84026272/1800-07-01/ed-1/?sp=2&st=pdf.

[70] “Wooden Breast Bone, and Jackson’s Victory,” 1815 leaflet. Retrieved from https://www.loc.gov/item/rbpe.22803200/.

[71] George Fowler, ed., The Wandering Philanthropist (Philadelphia: Bartholomew Graves, 1810), 180. Retrieved from https://tile.loc.gov/storage-services/public/gdcmassbookdig/wanderingphilant00fowl/wanderingphilant00fowl.pdf.

[72] “The Old Maid: When I Was a Girl of Eighteen,” 1837, C. Bradlee (Boston). Retrieved from https://archive.org/details/sm_oldmaid/page/n3/mode/2up.

[73] Caleb Atwater, Writings of Caleb Atwater (Columbus: Scott and Wright, 1833), 333. Retrieved from https://www.loc.gov/resource/lhbtn.12883/?st=pdf&pdfPage=282.

[74] Hon. Mrs. Norton, “LAWRENCE BAYLEY’S TEMPTATION,” The Herald (New York, NY), February 18, 1836. Retrieved from https://www.loc.gov/resource/sn83030311/1836-02-18/ed-1/?sp=4&st=pdf.

[75] “MANAGER’S LAST KICK — ABOLITION,” Morning Herald (New York, NY), June 26, 1837. Retrieved from https://www.loc.gov/resource/sn83030312/1837-06-26/ed-1/?sp=2&st=pdf.

[76] “Leaf from a Loafer’s Log,” Morning Herald (New York, NY), May 29, 1838. Retrieved from https://www.loc.gov/resource/sn83030312/1838-05-29/ed-1/?sp=2&st=pdf.

[77] “The Follies of the Fashionable System of Female Education,” Morning Herald (New York, NY), September 3, 1839. Retrieved from https://www.loc.gov/resource/sn83030312/1839-09-03/ed-1/?sp=2&st=pdf.

[78] “From the Edingburgh Literary Journal: Monsters Not Mentioned in Linnaeus,” Cherokee Phoenix and Indians’ Advocate (New Echota, GA), September 9, 1829. Retrieved from https://www.loc.gov/resource/sn83020874/1829-09-09/ed-1/?sp=4&st=pdf.

[79] Norton, Daughters, 240-242, chapters six through nine.

[80] Ibid., 242.

[81] Carpenter, Virginity, 22.

[82] Norton, Daughters, 241.

[83] Naomi Braun Rosenthal, Spinster Tales and Womanly Possibilities (New York: SUNY Press, 2001).

[84] Solomon Simple, “The Moral Dispensary,” Gazette of the United States and Philadelphia Daily Advertiser (Philadelphia, PA), April 9, 1800. Retrieved from https://www.loc.gov/resource/sn83025881/1800-04-09/ed-1/?sp=3&st=pdf.

[85] “From the Wilmington Monitor,” Gazette of the United States and Daily Advertiser (Philadelphia, PA), August 4, 1800. Retrieved from https://www.loc.gov/resource/sn84026272/1800-08-04/ed-1/?sp=2&st=pdf.

[86] George Fibbleton [Asa Greene], Travels in America (New York: W. Pearson, P. Hill, and others, 1833), 80. Retrieved from https://www.loc.gov/resource/gdcmassbookdig.travelsinamerica00gree/?st=pdf&pdfPage=87.

[87] “NOT PARTICULAR,” The Madisonian (Washington, D.C.), December 5, 1837. Retrieved from https://www.loc.gov/resource/sn82015015/1837-12-05/ed-1/?sp=4&st=pdf.

[88] “Fashionables at Saratoga, 1837,” Morning Herald (New York, NY), July 22, 1837. Retrieved from https://www.loc.gov/resource/sn83030312/1837-07-22/ed-1/?sp=2&st=pdf.

[89] “AMERICAN INSTITUTE,” Morning Herald (New York, NY), November 1, 1837. Retrieved from https://www.loc.gov/resource/sn83030312/1837-11-01/ed-1/?sp=4&st=pdf.

[90] “EPHEMERA; OR ETCHINGS FROM LIFE,” The Native American (Washington, D.C.), March 3, 1838. Retrieved from https://www.loc.gov/resource/sn86053569/1838-03-03/ed-1/?sp=4&st=pdf.

[91] “Nuptial Soiree and Supper on Wednesday Night,” Morning Herald (New York, NY), February 14, 1838. Retrieved from https://www.loc.gov/resource/sn83030312/1838-02-14/ed-1/?sp=4&st=pdf.

[92] Lincoln Mullen, “Divergence in U.S. Sex Ratios by County, 1820–2010,” interactive map, http://lincolnmullen.com/projects/sex-ratios/. Derived from data via Minnesota Population Center, National Historical Geographic Information System: Version 2.0 (Minneapolis, MN: University of Minnesota, 2011), http://www.nhgis.org.

[93] Michael R. Haines, “Long-term Marriage Patterns in the United States from Colonial Times to the Present,” The History of the Family 1, no. 1 (1996): 15-39. Retrieved from https://www.tandfonline.com/doi/abs/10.1016/S1081-602X%2896%2990018-4.

[94] Elisa Janine Sobo and Sandra Bell, eds., Celibacy, Culture, and Society: The Anthropology of Sexual Abstinence (Madison: University of Wisconsin Press, 2001).

[95] Carpenter, Virginity, 19, and Karen Cheatham, “‘Let Anyone Accept This Who Can’: Medieval Christian Virginity, Chastity, and Celibacy in the Latin West,” in Carl Olson, ed., Celibacy and Religious Traditions (Oxford: Oxford University Press, 2007).

[96] Benjamin Kahan, Celibacies: American Modernism and Sexual Life (Durham: Duke University Press, 2013).

    “Premarital Sex is Nearly Universal Among Americans, and Has Been for Decades,” Guttmacher Institute, December 19, 2006, https://www.guttmacher.org/news-release/2006/premarital-sex-nearly-universal-among-americans-and-has-been-decades.

[97] Katherine Barak, “Spinsters, Old Maids, and Cat Ladies: A Case Study in Containment Strategies,” doctoral dissertation, Bowling Green State University, 2014. Retrieved from https://etd.ohiolink.edu/apexprod/rws_etd/send_file/send?accession=bgsu1393246792&disposition=inline

   Jamie Wareham, “How to Be an Asexual Ally,” Forbes, October 25, 2020, https://www.forbes.com/sites/jamiewareham/2020/10/25/how-to-be-an-asexual-ally-learn-why-some-asexual-people-have-sex-and-accept-that-most-dont/?sh=56bc9e1148d8.

When to Stop Watching ‘It’s Always Sunny in Philadelphia’

The wacky, awful characters of It’s Always Sunny in Philadelphia will never be forgotten — Dennis the absolute psychopath, Charlie the stalker, Mac the Catholic determined not to be gay, Dee the bird who thinks she is funny, and Frank the, well, very short. The show was hilarious and bitingly clever for many years; even the astonishing sound of the gang screaming in argument was endearing, always delightfully punctuated and contrasted with that cheerful, chiming music. Unfortunately, the series’ later seasons grew a bit forgettable. When is the right time to jump ship before Always Sunny overstays its welcome?

I would suggest watching through season 10 and then stopping. (Although the second-to-last episode of the season sees Frank planning to retire and the others fighting for control of the bar, which could make for a nice series finale.) The group dating, Family Feud, and “Mac and Charlie Join a Cult” shenanigans of season 10 are all good fun, but there’s a scene in episode three that is unmissable. Stopping before this moment would be a crime.

Dennis: Dee? I swear you would be of more use to me if I skinned you and turned your skin into a lampshade. Or fashioned you into a piece of high-end luggage. I can even add you to my collection.

Dee: Are you saying that you have a collection of skin luggage?

Dennis: Of course I’m not, Dee. Don’t be ridiculous. Think of the smell. You haven’t thought of the smell, you bitch! Now you say another word and I swear to God I will dice you into a million little pieces. And put those pieces in a box, a glass box, that I will display on my mantel.

On the other side of the desk, a psychiatrist slowly reaches for his pen and notebook.

Seasons 11 and 12 are not bad by any means, but some of the issues that had been only stirring earlier on come to a head. Things begin to feel, here and there, repetitive. Season 11’s first two episodes hit hard in this regard, with another episode of the gang playing their “Chardee MacDennis” game followed by a sort-of time travel episode back to season 1. A later episode tackles a trial over events that happened in an earlier season — and this is not the first courtroom appearance for the gang, either. The gimmicks ramp up, too — attempts to keep things fresh that often characterize a show running out of steam. “Being Frank” is a whole episode from Frank’s point-of-view. The gang magically turns black in season 12 (it’s also a musical). Then there’s the classic sitcom-esque episode, the documentary-like episode, the one where Frank and Mac get to be soldiers in (virtual reality) Iraq (Always Sunny essentially begins to morph into Community), and the outing devoted entirely to the side character of Cricket, the former priest who has been ruined and mutilated by the gang’s antics. Cricket is somewhat emblematic here, beyond his looking worse and worse in a show that may be getting worse over time: he seems to show up more, as if the writers have less to say about and through the main characters, and each time you see him he’s less interesting; he’s gotten old, like the project as a whole.

And, in the literal sense, so has the cast. Danny DeVito (Frank) was always older, of course, but suddenly, after twelve years, the other stars hit their forties, and perhaps the gang’s insanity and hijinks began to feel slightly less believable as their appearances matured. Further, old age can make you look tired, making a series feel the same way.

In any case, at this point even Glenn Howerton (Dennis) was burned out. The finale of season 12 set him up to leave the show to do new things, though he was, reportedly, in most of the episodes of season 13 and stayed on after that. I stopped watching after his pseudo-goodbye. If a star, writer, and producer of a show is checking out, it’s often best to do the same. Even wiser to do so earlier, in this instance. Again, this is not to say that anything after the tenth season isn’t entertaining. I might pop back into Always Sunny every once in a while and watch a later episode for a laugh. But if you’re looking to bail before the inevitable downhill slide of a long-running series, you now know when to do so.

Season 16 of Always Sunny has just premiered on FX and Hulu.

For more from the author, subscribe and follow or read his books.

Nonverbal People (And Mermaids) Can Consent

When it was announced that The Little Mermaid of 2023 would alter the lyrics of the 1989 original’s “Kiss the Girl,” two questions on consent arose — though their implications often went unexplored.

The first question related directly to the old song. “Yes, you want her,” the crab whispers to Prince Eric, who is on a romantic boat ride with the former mermaid Ariel. “Look at her, you know you do / Possible she wants you too / There is one way to ask her / It don’t take a word / Not a single word / Go on and kiss the girl.” This was changed to “Possible she wants you too / Use your words, boy, and ask her / If the time is right and the time is tonight / Go on and kiss the girl.” Boys can benefit from this (as can others), because framing a kiss as the “one way to ask” a girl if she “wants” you is backward. The kiss should come after there’s an understanding that you’re wanted. The change has some value and is, one must say after watching it, rather charming and humorous (“Use your words, boy” is incredible phrasing).

The second question is more muddled and interesting. Articles covering the lyrical change often drew attention to something else: in this scene, Ariel has already bargained away her voice. A writer for Glamour noted, without elaboration: “These lyrics suggest that Prince Eric doesn’t need Ariel’s verbal consent to kiss her, which of course he does, but there’s the slight issue of the fact that she cannot speak.” Insider wrote: “The song occurs during a point in the plot where Ariel has given up her speaking (and singing) voice for a pair of human legs, but the overall implication that Prince Eric should make a move on Ariel first and ask for consent later is likely troubling for some modern viewers.” A host of The View said, “With ‘Kiss the Girl,’ she gave her voice away so she could have legs, so I don’t know how she could talk… How do you consent if you can’t talk?,” to which a writer for CinemaBlend responded, “That’s very true… That would make it even worse for Prince Eric to kiss Ariel if she was literally in a position where she couldn’t speak up if she didn’t want to be kissed.” And so on (“Ariel’s voice is gone and she literally can’t offer verbal consent,” The Mary Sue).

This criticism may come from a noble place — affirmative statements are indeed valuable — but it has an odd implication. If verbal consent is always necessary, that precludes romance and sex for human beings who cannot speak. Selective mutism aside, there are various biological and neurological problems that can render someone voiceless. On the Left, we will race to be the most virtuous and woke, but this can sometimes erase or crush (other) marginalized people. These writers rush to say that “of course” Eric needs “Ariel’s verbal consent to kiss her,” and because she can’t speak it would be wrong for him to make an attempt. But unless one wants nonverbal people to never experience a kiss, unless we pretend such individuals have no agency, there needs to be room to demonstrate consent without verbal affirmation. There are other linguistic forms, like sign language and agreement in writing (which is often just a lame, whiny joke from the Right, but sometimes an actual thing), but also the nonverbal signals that sensible leftwing or liberal organizations and universities still point to when discussing safe sex. Moving closer, leaning in for the kiss, closing one’s eyes in anticipation, and so on. This is what Ariel does in the original film. She cannot speak, or use sign language, or in the moment write, but she is alive and has agency. As a writer for Jezebel put it: “Keep in mind that the plot leaves no question of Ariel’s consent. She huffs and puffs through the scene as Eric swerves her. It is her entire mission, in fact, to be kissed, as it will defeat Ursula’s curse and allow her to remain permanently human.” Actions can give consent.

Conversely, actions can revoke it, as when someone pulls away, lies inert, avoids eye contact, etc. This fact also points to the importance of not positioning affirmative, explicit statements (spoken, signed, or written) as the only way to consent. “Listening only for verbal signs of possible consent without paying attention to a person’s non-verbal cues is not a good way to determine consent either,” a sex ed organization once wrote. “For example, a person could say yes due to feeling pressured, and in a situation like that the verbal cue could be present alongside non-verbal signs of no consent.” Actions are just as important as words — they give consent, take it away, and even override affirmative statements. As Ursula once howled, “Don’t underestimate the importance of body language!” Actions can be misinterpreted, of course, in the same way an explicit Yes can be hollow. Romance and sex have to be navigated with care. (It goes almost without saying that intentional violations of nonverbal or verbal objections must be shown no mercy.)

Two ideas prompted this writing. First, the equating of an inability to speak with an inability to consent. It completely and obviously forgets a group of human beings. As if nonverbal people do not exist, have no agency, and can never enjoy love safely because they cannot literally say Yes. Second, there’s the drift away from what could be called sex realism. Framing the spoken, signed, or written word as the only way to actually consent marks anything else as nonconsensual. Is that realistic? Most human beings who have enjoyed a kiss or sex or anything in between would probably say No. They know pleasure and connection can be consensual without words. Even the most fervent Leftist is probably not, consistently, during every romantic encounter, saying “May I kiss you?” / “Kiss me”; “Can I touch you there?” / “Touch me here”; or “May I take this off?” / “Take this off” before the action occurs. I can offer no proof of this, of course, only the anecdotal — I date liberal and leftwing people, and nonverbal consent still seems to be standard practice. At times there is open communication about the big ones (“Are you ready for that?” / “Fuck me”), which is wonderful, but oftentimes you fall passionately into each other’s arms without any explicit statements, which is wonderful as well. Even those who have adopted a step-by-step, regular check-in approach to love probably take it seriously when with someone new, but let it fade when things advance into a relationship or marriage. On the one hand, this makes sense — you now know your person, what she likes, there’s trust and comfort, and so on. But on the other hand, it’s not fully clear why you shouldn’t continue to seek affirmative, explicit, linguistic agreement before taking any sort of action — if words are the only way to actually consent, what difference would it make if this is someone you met an hour ago or a husband of 30 years? Marital rape exists, partners can commit nonconsensual acts, consent can be violated. Perhaps some people actually do practice what they preach, not proceeding without a linguistic instruction or a positive response to an inquiry, regardless of whether they are with someone new or a longterm lover. Only they can condemn, without hypocrisy, other people for relying on nonverbal agreement. But all this is doubtful. More likely, people convey consent with their actions all the time. There are performative demands on the internet, and then there’s how people actually behave when with someone they like.

Overall, it is a fine idea to modify lyrics to position a proper kiss as only coming after an understanding that such an act is desired. This understanding can be gained by simply asking, as the song urges; it is typically the clearest form of consent. But nonverbal communication also conveys this understanding. And acting on it is moral. To push nonverbal-spurred romance into the realm of the objectionable is to say nearly all human beings — mute or verbal, hookup or lifelong companion, male or female or nonbinary — are guilty of sexual violence. The spoken, signed, or written word cannot be the only way to agree to a kiss or sex. It may be valuable to encourage people to ask, especially kids and teens — the ones watching The Little Mermaid, after all — as they may be worse at perceiving or conveying nonverbal consent due to underdeveloped brains, worse impulse control, lack of experience and knowledge, etc. But romance without explicit statements can be consensual. Failure to procure them therefore can’t be castigated with any seriousness. The Little Mermaid of 2023 perhaps understands this — despite the new lyrics, Eric never actually asks Ariel if he can kiss her (she could have nodded). Like standard human beings, they lean in toward each other, their actions acknowledging their consent. The way most of us behave, after posting on the internet.

For more from the author, subscribe and follow or read his books.

A New Paradigm for Black History?

In May 2016, historians gathered in Washington, D.C., for “The Future of the African American Past” conference to share research and discuss new directions in black history. The second session, chaired by Eric Foner, was entitled “Slavery and Freedom” and summarized by Gregory P. Downs of the University of California, Davis, in the following fashion for the conference blog: “Historians Debate Continued Relevance Of An Old Paradigm.”[1] The freedom paradigm used by historians places heavy emphasis on legal emancipation as a great turning point, a “historic rupture,” for African Americans.[2] The lay reader may wonder how this could be controversial — was not the end of slavery both a massive event and a new beginning? — but may then think the same of the counterargument. Other scholars point out that such an emphasis on progress threatens to “underplay continuities between slavery and emancipation,” to quote Downs.[3] In many ways, it is argued, after their bondage blacks were not much better off. This is not to say that the freedom paradigm ignored the injustices that continued after slavery.[4] It did not. But it is to say that reframing history, black or otherwise, can open the door to important new discoveries. It concerns how to look at the past. A perspective that takes for granted a positive turning point may indeed have blinders to negative consequences and continuations; conversely, a perspective that focuses on darkness and limits may downplay progress and its significance. Neither paradigm is the one true lens; both are valuable, but one may be more useful now, given all the work that has come before. Has the freedom angle reached the end of its utility, as Foner asked his panelists?[5] If the old paradigm has been mined for many riches over many decades, is it time to see what knowledge a new perspective can uncover? To temper the celebrations of emancipation?

This paper critically examines the works of three historians, one who defends the continued usefulness of the freedom paradigm and two who suggest the field must move on to a fresh approach. Two of these scholars were on Foner’s panel at the conference, bringing papers to support their theses, while one published an influential article earlier on, in fact referenced by Foner in his opening remarks.[6] The work currently in your hands or on your screen weighs in on the historiographical debate represented by these papers, arguing that the freedom narrative remains relevant and satisfactory, due to its preexisting nuance and its closer adherence to reason.

Let us begin with the two reformers. In “Unwriting the Freedom Narrative: A Review Essay,” Carole Emberton of the University at Buffalo charts recent scholarship on the negative side effects and failures of official freedom, using it to argue that “our attention should turn” to the “tyrannies” that “long outlived slavery,” for emancipation was not a “wholly redemptive experience” for America.[7] For example, disease, displacement, and family separation were ruinous for large numbers of liberated blacks during the Civil War.[8] Some slaves were taken to Cuba and remained in bondage for years afterward.[9] Ideologies, such as the right to sell one’s labor, aided the cause of abolition before the war and worked against black rights afterward, one of many ways in which freedom was betrayed beyond the obvious backlash of Jim Crow segregation and the rise of the Ku Klux Klan.[10] For some, emancipation was not so revolutionary or celebratory. The rosy “old freedom narrative [is] outdated and oversimplified,” Emberton concludes.[11]

Walter Johnson of Harvard engages in this debate in a more philosophical way. His conference paper, “Slavery, Racial Capitalism, and Human Rights,” was a bit inaccessible and at times unsatisfyingly suggestive, but offered much to ponder. Johnson questioned the “rights-based version of human emancipation,” following Marx, who regarded “political emancipation” as having, in Johnson’s words, “terrific promises and bounded limits.”[12] Human rights are “not…nor in my view should [they] be…‘the final form of human emancipation.’”[13] Being universal, they are insufficient to address the specific wrongs, against a specific target, of slavery.[14] Johnson raises reparations as one way to approach real emancipation.[15] He also spends some time arguing against the use of terms such as “inhumane.” To say the actions of enslavers were inhumane separates them from normality, from known human capacity.[16] It creates a divide between them (inhuman) and us (human). As a whole, the work sides with Emberton in stressing the limits of official freedom and in erasing troubling barriers between timeframes (implicitly, a barrier that pretends a slave society was inhumane but that after the war the humane was reached at last).

There is no denying that the Civil War and legal freedom deserve, as Emberton wrote, “critique…as a vehicle of liberation.”[17] New knowledge is being generated, on terrible side effects of emancipation and continued white oppression in new and familiar forms — African Americans became Sick from Freedom (Jim Downs), experienced Terror in the Heart of Freedom (Hannah Rosen), and needed More Than Freedom (Stephen Kantrowitz). Yet it is not clear that such important scholarship can or should displace the freedom narrative, for several reasons.

First, the old paradigm, while stressing the revolutionary nature of emancipation, has long allowed for critique of its limits. The current trend is more an expansion of that preexisting examination than a shift to a new paradigm. Consider the texts Emberton cites as evidence that scholars have moved beyond the freedom narrative. Most are works published from 2012 to 2016, the year of her review, with some from the early 2000s. “For nearly two decades,” Emberton writes, “historians have been grappling with the inadequacies of the freedom narrative for analyzing American history…”[18] In other words, this is a twenty-first-century trend, accelerated by or intimately connected with the new public conversation on race of the Black Lives Matter era, upon which Emberton briefly comments.[19] Surveying how slavery shaped modern society and continues to do so, more Americans and historians are questioning whether emancipation was “clear or complete.”[20] But the field was doing the same in the 1970s, ’80s, and ’90s when it studied, for instance, the miseries of sharecropping, what Harold D. Woodman in 1977 called the “Sequel to Slavery” for black Southerners.[21] How are these studies different from more recent ones that consider other disasters for African Americans, such as disease? Emberton even builds her argument with one work that showed how blacks in Boston had to fight for rights beyond the Thirteenth, Fourteenth, and Fifteenth Amendments — how is that new?[22] Virtually any work, including rather old ones, that addresses the segregation era and the civil rights movement acknowledges and highlights, explicitly or implicitly, the failures of emancipation as a liberatory event. A brutal postbellum history practically required any lens used to examine the event to leave much space for critique. This lens could not glorify emancipation in the same way a paradigm glorified, for instance, “Great Men” until social history challenged it. Gregory P. Downs suggests that the limits of freedom have been an essential part of the freedom narrative — the revolutionary turning point was never thought to have had no “tragedies” or “unfinished work” — which is what makes the narrative powerful and enduring.[23] The push for a new paradigm seems to flirt with false dichotomy: you cannot call an event a historic rupture if it has setbacks, even major ones. Semantics aside (for the moment), if the twenty-first-century trend is simply an expansion of that of the twentieth, it can, logically, still exist under the traditional narrative — for as long as emancipation is judged a net positive for African Americans and American society, a likely and rightly unalterable thesis.

Turning to Johnson, the approach is similar. The historian highlights the difference between “material inequalities” and the “abstract equality” of freedom.[24] He agrees with Marx that the latter was a “big step forward,” but the former problem is yet to be addressed.[25] Why should freedom be limited to the ability to exercise one’s, to quote Johnson, “independent will”?[26] The “wrongs [of slavery] might not be mended by universal rights,” but by something far less abstract.[27] (Though human rights, one could argue, may be a precondition of or helpful forerunner to material equality.) This inadequacy is real, but in the context of a debate over whether a new paradigm for black history is needed it begins to feel like the same false choice. An event cannot be a historic turning point for blacks if it does not go far enough. It is not revolutionary if it is not revolutionary enough. Granted, this may appear as much a truism as a false dichotomy — we may have stumbled upon something that is both, plus a contradiction, shattering logic forever — but that is the nature of the debate. Some scholars posit the field must step away from emancipation as a revolutionary happening due to its limits; others perceive it as limited but revolutionary enough for the label. We are where historians love to be: in the weeds splitting hairs. But again, it is difficult to see why four million people no longer being property should not confidently be called a seismic break with the past, despite any disastrous effects or material continuations, unless this was somehow a net negative for those millions — and tens of millions of descendants. Note that the only comprehensible rationalization forces one to lock oneself in a specific era — even if one tried to argue that the rest of the nineteenth century, for example, was not much better than slavery, with sharecropping, poverty, the Klan, segregation, sickness, industrial capitalism, and so on, all we need to do is instead consider slavery through the lens of the 2020s — a dangerous and unequal, but much improved, society. The twenty-second century may be better still, and so on. Emancipation, then, made a major difference. It may be so that calling the state of affairs before the break “inhumane” obscures the “fact that these are the things that human beings do to one another,” but that does not mean human behavior and societies have not grown more decent over time.[28]

Thavolia Glymph of Duke University defended the transformational and positive nature of emancipation at the conference with her “‘Between Slavery and Freedom’: Rethinking the Slaves’ War.” Citing “Unwriting the Freedom Narrative,” Sick from Freedom, and more, Glymph writes that historians challenging the old paradigm believe the historiography “that emphasized black agency and cultural resistance went too far.”[29] “I think we need to take a step back,” she continues, from the thesis that “black people emerged from the Civil War so damaged that they could hardly stand on the ground of freedom (if they lived to see it).”[30] For most slaves survived the war to celebrate liberty, and expected liberty to come with a high cost.[31] “They knew many of them would suffer and die before any of them experienced freedom…”[32] The miseries were inseparable from progress, Glymph seems to suggest. The good and bad went hand-in-hand. For example, for black women the refugee camps behind Northern lines were both places of horror and real stepping stones from enslavement to free lives.[33] “Some historians ask us to see [Margaret] Ferguson’s lost leg [and subsequent death in a camp] as symbolic of a damaged and lost people, as proof of the need to temper our judgement that freedom was liberating. But, I think, we ought to proceed with great caution,” for scholars must “weigh those losses against the success of black women” like Anna Ashby, who survived the war, the camps, and enjoyed freedom with her husband and children.[34] There was no liberty without sacrifice.

This is a compelling point. If the suffering was inseparable from positive change, this implies the former cannot undermine the significance of the latter. “The losses and violence black people suffered during the war mattered,” Glymph writes. Mattered. Indeed, the horrors meant something: the price of freedom, not its diminution. Notice this brings black Americans even deeper into “the making of freedom.”[35] Beyond the black troops that reinforced and saved the Union army, beyond the slaves who rebelled and escaped the South to at once find freedom and hurt the Confederate war effort, any form of suffering, from illness to amputation to brutal Jim Crow laws, was a price paid for emancipation. As long as it was worth it (net positive), the cost does not lessen the revolutionary nature of freedom. If anything, it enhances it. If one can ignore the obvious discomfort of speaking of figurative price and purchase in a discussion of human slavery, the point can be made. What comes with a high cost tends to be more valuable. Under the framework that suffering was a cost, to say the great tribulations of the black population took away from the meaning or value or significance of emancipation does not make sense.

Of course, even if miseries were expected (by all slaves) and were integral to the “making of freedom,” there is still some room to mull over associated agency. It was not exactly a mother’s will for her children to perish to disease. Nor the black will to be subjugated and terrorized after the war. It was, Glymph would seemingly posit, the will to accept potential and unknown consequences that mattered most. What occurred later in violation of one’s agency was in some fashion overridden by the initial attitude, the precondition. Initial agency gambled with later agency, and if it lost, who could gripe? That was the risk. This is sensible. But other thinkers may disagree. After all, human beings may change their minds. Even if all African Americans later judged the passing of their children, the rise of Jim Crow, or their own imminent deaths after amputation as worth it to abolish slavery, the fact remains that minds can change. If people are capable of changing their minds, why should later agency be in any way held hostage by the initial agency? Emberton and Johnson’s position then looks a bit more sensible. Dying of disease caused by the Civil War is just as tragic a decimation of the black will as slavery itself. It is not a price paid, just an additional way one’s future can be cut short in American society. The side effects of emancipation were clearly ruinous; how can we lift it up so readily? All this is to say that Glymph’s efforts to emphasize black agency in this debate may face challenges. She wants to push against those who claim agency and resistance in the historiography have gone “too far,” and even compares this trend to when whites downplayed freedpeople’s involvement in the war and framed slaves as happy and benefiting from bondage.[36] But advocates of a new paradigm have an at least thought-provoking response to the position that the “come what may” attitude lifts up black agency as high as Glymph believes. Fortunately, her argument seems to function whether or not agency is taken into account. If suffering is irrevocably tied to progress, if that is the cost, it does not matter whether such suffering was a result of a victim’s agency. African Americans paid a price in the “making of freedom,” consciously or not.

In sum, we have seen that scholars’ questioning of the freedom narrative is not so novel; it represents a real expansion of old ways of looking at history, but is not a new direction. It pushes the field toward false dichotomy — no major turning point can include disasters and continuations. And it overlooks the persuasive idea that hardships cannot take away from the significance of emancipation if they were inevitable, inherent products of that project. Of course, historians poking holes in emancipation as a triumphant event are offering important new knowledge and further nuance, which is always praiseworthy. Historians of the old school are likewise making progress: to answer Foner’s question on whether the freedom narrative is still useful, Brenda Stevenson of UCLA brought findings on how freedpeople at last formalized their marriages and what that meant to them.[37] We continue to see that in myriad ways, large and small, freedom was transformative. Millions of souls were no longer owned by others. The negative consequences and failures of the war and emancipation must be understood but cannot discount this. The idiom about the forest and the trees comes inevitably to mind.

For more from the author, subscribe and follow or read his books.


[1] Gregory P. Downs, “‘Slavery and Freedom’: Historians Debate Continued Relevance Of An Old Paradigm,” The Future of the African American Past, National Museum of African American History and Culture, accessed May 13, 2023, https://futureafampast.si.edu/blog/%E2%80%9Cslavery-and-freedom%E2%80%9D-historians-debate-continued-relevance-old-paradigm.

[2] Ibid.

[3] Ibid.

[4] Ibid.

[5] Ibid.

[6] Stephanie Smallwood, “Slavery And The Framing Of The African American Past: Reflections From A Historian Of The Transatlantic Slave Trade,” The Future of the African American Past, National Museum of African American History and Culture, accessed May 13, 2023, https://futureafampast.si.edu/blog/slavery-and-framing-african-american-past-reflections-historian-transatlantic-slave-trade.

[7] Carole Emberton, “Unwriting the Freedom Narrative: A Review Essay,” The Journal of Southern History 82, no. 2 (2016): 394. http://www.jstor.org/stable/43918587.

[8] Ibid., 379-382.

[9] Ibid., 394.

[10] Ibid., 384.

[11] Ibid., 394.

[12] Walter Johnson, “Slavery, Racial Capitalism, and Human Rights” (paper presented at The Future of the African American Past, Washington, D.C., May 20, 2016), 5, retrieved from https://futureafampast.si.edu/sites/default/files/02_Johnson%20Walter.pdf.

[13] Ibid., 7.

[14] Ibid., 8.

[15] Ibid., 16.

[16] Ibid., 2-3.

[17] Emberton, “Unwriting,” 394.

[18] Ibid., 378.

[19] Ibid.

[20] Ibid.

[21] Google Scholar, accessed May 13, 2023, https://scholar.google.com/scholar?q=%22slavery%22+%22sharecropping%22&hl=en&as_sdt=0%2C26&as_ylo=&as_yhi=1999. This URL displays search results for “slavery” + “sharecropping” before 1999.

   Harold D. Woodman, “Sequel to Slavery: The New History Views the Postbellum South,” The Journal of Southern History 43, no. 4 (1977): 523–54, https://doi.org/10.2307/2207004.

[22] Emberton, “Unwriting,” 383-384.

[23] Downs, “Debate.”

[24] Johnson, “Slavery,” 5.

[25] Ibid., 5, 7.

[26] Ibid., 4.

[27] Ibid., 8.

[28] Ibid., 4.

[29] Thavolia Glymph, “‘Between Slavery and Freedom’: Rethinking the Slaves’ War” (paper presented at The Future of the African American Past, Washington, D.C., May 20, 2016), 3, retrieved from https://futureafampast.si.edu/sites/default/files/002_Glymph%20Thavolia.pdf. See also footnote 6.

[30] Ibid., 3-4.

[31] Ibid., 4.

[32] Ibid.

[33] Ibid., 5-6.

[34] Ibid., 6.

[35] Ibid., 7.

[36] Ibid., 3.

[37] Brenda E. Stevenson, “‘Us never had no big funerals or weddin’s on de place’: Ritualizing Black Family in the Wake of Freedom” (paper presented at The Future of the African American Past, Washington, D.C., May 20, 2016), retrieved from https://futureafampast.si.edu/sites/default/files/02_Stevenson%20Brenda.pdf.

Socialism Is About Getting Filthy Rich

It’s important to distinguish between what we might call “cartoon socialism” — the imaginings of reactionaries and the uninformed — and the earnest twenty-first-century socialist vision of how things would actually work. For example, cartoon socialism sounds like this: “They want total equality! To make everyone have the same wealth!”

Well, my philosophy of socialism — and modern democratic socialism in general — does not call for a perfect distribution of wealth. Neither a one-time nor a regular redistribution to ensure everyone is financially equal. But it does call for a society that establishes prosperity for all, resulting in a great reduction of inequality through tax-based redistribution and doing away with capitalist owners. While some will earn and own more wealth than others, all will have a comfortable life through guaranteed jobs or income, the co-ownership of one’s place of work, universal healthcare and education, and so on. Similarly, to home in on another myth, ownership of the workplace isn’t simply about dividing up every cent of revenue among the workers.

What’s useful about stopping to play in the sandbox of cartoon socialism is that it drives certain truths home in a powerful way. Say you took the net wealth of all U.S. households — $147 trillion at the end of 2022 — and divided it up among all 131 million households. Each household would have $1.1 million in assets. Not bad, considering “the bottom 50% [of households] own just 1% of the wealth in the U.S. and have a median net worth less than $122,000.” Nearly half the nation is poor or close to poor, with incomes in the $30,000s or lower. The “bottom” 80% of Americans have about 16% of the total wealth (all possessing less than $500,000). We would go from 12% of Americans being millionaires to essentially 100% overnight. Yet such a dramatic redistribution is not the strategy to abolish poverty that most democratic socialists advocate (the pursuit of greater personal wealth offers some benefits in any economic system that entails currency and consumption, e.g. the individual who leaves her current worker cooperative [see below] to launch a new enterprise, hoping she can earn more; this new business may be quite valuable to society, and, given the diversity of human motivations, may not have existed without the possibility of personal enrichment). Many, myself included, don’t even call for a maximum income. But the hypothetical makes the point: we have the means to create a much better civilization, one where all are prosperous. (With such means, is it moral to allow the material miseries of millions to persist?) Heavier taxation on the top 10-20% of Americans, where nearly all the wealth is currently pocketed, as well as on the largest corporations (more on worker cooperatives later), will be the actual redistributive program, funding income, jobs, healthcare, education, and more for the lower class and everyone else (see What is Socialism? and Guaranteed Income vs. Guaranteed Work). Reactionaries can thank their lucky stars the “All Millionaires, Total Equality” plan isn’t presently on the agenda.
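For readers who want to check the arithmetic themselves, here is a minimal back-of-the-envelope sketch in Python (purely illustrative, using only the rounded wealth and household figures cited above):

    # Hypothetical "everyone a millionaire" split, using the rounded figures cited above.
    total_household_wealth = 147e12  # ~$147 trillion in U.S. household net worth, end of 2022
    num_households = 131e6           # ~131 million U.S. households

    per_household = total_household_wealth / num_households
    print(f"Equal share per household: ${per_household:,.0f}")  # roughly $1.1 million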

Likewise, consider worker ownership of businesses. In 2022, Amazon made $225 billion in gross profit (revenue left after the cost of goods sold). Walmart became $144 billion richer by the same measure. Apple made $171 billion. The lowest-paid employees at the first two firms made a dismal $30,000 a year full time. Amazon had 1.5 million employees, Walmart 2.3 million, Apple 164,000. Outsourced labor working in miserable conditions overseas of course helps fuel these companies and should also be made wealthy, but for this illustration official employees will demonstrate the point. If these corporations were socialized, workers could use such profits to award themselves a bonus of $150,000 (Amazon workers), $63,000 (Walmart workers), or over $1 million (Apple workers). That’s on top of an annual salary, and could be repeated every year, sometimes less and sometimes more depending on profits. But that’s not exactly how modern worker cooperatives function. Like everything else, what to do with profits is determined by all workers democratically or by elected managers. Like capitalist owners, worker-owners have to balance what is best for their compensation with what is best for the enterprise as a whole. In cooperatives, as I wrote in For the Many, Not the Few: A Closer Look at Worker Cooperatives, worker-owners decide “together how they should use the profits created by their collective labor, be it improving production through technology, taking home bigger incomes, opening a new facility, hiring a new worker, lowering the price of a service, producing something new, and all other conceivable matters of business.” Predictably and properly, worker-owners do take home larger incomes and bonuses. But the idea that businesses will never grow, or will collapse into ruin, because the greedy workers will divide every penny of revenue amongst themselves is cartoon socialism, belied by the thriving cooperatives operating all around the globe today. The point is that ordinary people have greater power to build their wealth. Why tolerate scraps from a capitalist boss when you can rake in cash as a co-owner in a socialist society?
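The per-worker bonus figures above are simply profit divided by headcount. A quick illustrative check, again using only the rounded numbers quoted in this essay (a real cooperative would first budget for operating costs, reserves, and reinvestment):

    # Hypothetical equal division of each firm's quoted 2022 profit among its listed employees.
    firms = {
        "Amazon":  {"profit": 225e9, "employees": 1_500_000},
        "Walmart": {"profit": 144e9, "employees": 2_300_000},
        "Apple":   {"profit": 171e9, "employees": 164_000},
    }

    for name, data in firms.items():
        bonus = data["profit"] / data["employees"]
        print(f"{name}: ~${bonus:,.0f} per worker")
    # Amazon ~$150,000; Walmart ~$62,609; Apple ~$1,042,683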

“Yeah, socialism is about getting rich — by stealing,” the reactionary says. A common perspective, but consider two points. First, the transformation of the American workplace could indeed be said to involve theft: individuals and small groups of people will lose ownership of their businesses (a slightly less painful transition might center on inheritance laws, with firms passing to all workers instead of a capitalist’s offspring; no one who created a business would have it wrested away from her until death). But the obvious riposte is that capitalist ownership is theft. As I put it in How Capitalism Exploits Workers:

In the beginning the founder creates the good or provides the service (creating the wealth), but without workers he or she cannot produce on a scale larger than him- or herself. Would Bill Gates be where he is today without employees? The founder must hire workers and become a manager, leaving the workers to take his place as producer. The capitalist exploits workers because it is they who create the wealth by producing the good or providing the service. For the capitalist, the sale of each good or service must cover the cost of production, the cost of labor (worker compensation), and a little extra: profit the owner uses as he or she chooses. Therefore workers are not paid the full value of what they produce. This is exploitation. The wealth the workers produce is controlled and pocketed by the capitalist. The capitalist awards herself much while keeping worker wages as low as possible — to increase profits. The capitalist holds all decision-making power, making capitalism authoritarian as well as a grand theft from the people who generate wealth. Capitalism is the few growing rich off the labor of the many.

The only way to end this is to refashion capitalist businesses into cooperatives. To rob the thief. “Taking back what was taken from you” is a bit simplistic, given that the workers did not start the business and put in the blood, sweat, and tears to do so, but to a large degree this framing is true. Exploitation begins the moment the founder hires a non-owner, and it continues every day thereafter, growing larger and larger with more people hired to produce goods and provide services, until companies are making hundreds of billions in new money a year, with owners awarding themselves hundreds of millions per year, while the workers who make it all possible, who make the engine go by producing something sellable, get next to nothing. They do not control or enjoy the profits they create. So one is forced to make a moral choice: permit the few to rob the many every single day and make themselves extremely wealthy, leaving the many with crumbs…or permit the many to rob the few (who previously robbed them) just once, helping all people to be prosperous forever. Not a difficult decision.

Second, there’s the other sense of theft under socialism, the taxing of the rich to redistribute money to the many in the form of free income, medical treatment, college, and so forth. “You want to steal from the rich to benefit yourself!” This is closely tied to the “Taxation Is Theft” mantra of the libertarians. On the one hand, this has some truth to it — money is taken from you without your direct consent. On the other hand, we live in a democracy, and there was no tax that emerged from nothingness, none divorced from the decisions of representatives. “Taxation Is the Product of Democracy” would be more accurate. (Socialism will also be a product of democracy, or it will not exist. And it will let you vote on tax policy!) Jury duty may be a theft of your time, but it was created through representative democracy and could be undone by the same — but isn’t because it is deemed important to a decent, functioning society. Now, once again, it could be noted that much of the wealth owned by the rich was stolen from the workers who made it possible. So redistribution makes some sense in that regard. But those against taxing the rich to fund universal services typically do not have much of a leg upon which to stand anyway. Sure, if you do not believe in any form of taxation whatsoever — no local, state, or federal taxes, meaning no U.S. military, no functioning governments, no free roads or highways, nor a million other things of value — then you can honestly crow that taxation is theft. At least you’re being a person of principle. But as soon as you allow for some kind of taxation as necessary to a modern society, you’ve essentially lost the argument. Then it simply becomes a disagreement over what taxes should be used for (bombs or healthcare) and how rates should be enacted (extremely progressive, progressive, regressive [includes flat taxes], extremely regressive). Theft is a nonissue.

“Heavier taxes on the rich is theft” is an entirely empty statement unless you believe all taxation is theft and must be abolished. If you don’t believe this, then you won’t make much sense: why would taking more be theft but taking some not? If taking some isn’t stealing, it is difficult to see any justification for why taking more would be. As if swiping one item from the store is fine, but three wrong! As if a certain dollar amount or percentage tax rate magically reaches the level of theft. And why exactly is seizing a limited percentage from a middle-income family not theft while taking a larger one from a rich family is? Isn’t it involuntary either way? The claim that “some” taxes are “necessary” but “more” are “unnecessary” doesn’t work either, as how necessary something is deemed doesn’t impact whether it was stolen (see next paragraph). People can disagree on how progressive or regressive taxes should be. But the “theft” rhetoric, for all but the most crazed libertarian anarchists, is illogical.

Further, “Using taxes on the wealthy for Universal Basic Income is theft” makes as much sense, whether much or little, as “Using taxes on the wealthy for the highways or military is theft.” If all taxation is theft, fine. But for other conservatives, is it only theft depending on what the money is used for? If it’s a road, that’s not stealing…if it’s a direct deposit in the account of a poor family, it is? Both a highway system and a UBI would be beneficial to Americans. Isn’t this just a disagreement on what a government “for the people” should offer? Over what is necessary for a good society, a simple opinion? A difference may be that roads can be used by all, and a military protects all, but a direct deposit belongs to one person. Public v. private use. The socialist may counter that true UBI and other services like healthcare and education would be distributed and available to everyone — but would have to admit that the personal rewards for a wealthy person will be small compared to her personal (tax) cost. Is this an impasse? The conservative considers taxes for private use to be theft, for public use not theft; the Leftist considers neither theft. It all still feels a bit silly. Taking for purpose A is robbery, but taking for purpose B is not? In either case, money is seized from the rich against their will. It should by now be clear that any conservative who acknowledges some taxes are necessary has little rational basis for accusing the socialist program of tax-related theft. Such thinking is incoherent. Such conservatives simply disagree with socialists on what tax rates and purposes should be, no theft in sight.

The title of this article is obviously a bit tongue-in-cheek. Socialism is about broadening democracy, ending exploitation, preventing economic crises, saving the environment, wiping out poverty, meeting medical needs, and many other things. But why should capitalism be the ideology to center a “get rich” framing? Sure, it allows the few to grow insanely wealthy off the labor of the many. But socialism allows the many to keep more of the profits created by their labor, and enjoy the financial and other benefits offered by a State that exists to meet human needs. It spreads the wealth and makes far more people well-off than capitalism. When you’re giving yourself a $50,000 or $500,000 bonus in December and your children resume university courses in January for free, you’ll wonder why you ever defended the old ways. Socialism is the way to get rich, and it’s time to advertise that unashamedly.

For more from the author, subscribe and follow or read his books.

When to Stop Watching ‘Law & Order: SVU’

This article must address two aspects of Law & Order: Special Victims Unit, its ideology and its quality. Each will produce a different answer to the titular question, and we will begin with the first, as it is the most important.

SVU can evoke mixed emotions these days. On the one hand, it is addictively cathartic to see rapists and domestic abusers experience the harsh hand of justice (or Elliot Stabler) over and over again. On the other, the show glorifies the police and offers a distorted view of the criminal justice system. John Oliver had a good exposé on this recently, highlighting studies that show consumers of crime dramas have rosier views of the police. Others have drawn attention to the literature as well, criticizing the erasure of racism, miserable clearance rates, apathy or neglect, and other real-world problems. The research on SVU alone is growing quite sizable. At the same time, it has been found that viewers of SVU better understand, on average, the meaning of consent, sexual assault, and more. In Oliver’s piece, actress Mariska Hargitay speaks of fans being inspired by the show to report, to take rape kits, and so on. She writes elsewhere: “Normally, I’d get letters saying ‘Hi, can I please have an autographed picture,’ but now it was different: ‘I’m fifteen and my dad has been raping me since I was eleven and I’ve never told anyone.’ I remember my breath going out of me when the first letter came, and I’ve gotten thousands like it since then. That these individuals would reveal something so intensely personal—often for the very first time—to someone they knew only as a character on television demonstrated to me how desperate they were to be heard, believed, supported, and healed.” Hargitay started a foundation to educate the public on sex crimes and push police departments to actually test their rape kits (yes, which at times means advocating for more funding). All this is to say that the impact of SVU is complex; the discussion must therefore be nuanced.

Of course, Oliver’s conclusion was a bit confused: “Honestly, I am not even telling you not to watch it. It’s completely fine to enjoy it.” This holds only if one determines the show’s negative real-world effects are rather unserious. Obviously, it is views that keep a series going. Millions of regular viewers are why Olivia Benson remains the longest-running live-action character in primetime television history, why SVU is the lengthiest live-action show in primetime history. Others have called for Hargitay to blow up the show by quitting or for all police shows to be cancelled. This is the moral question for us Leftists and our favorite copaganda. Is the series doing enough damage to public perception for me to stop watching? Enough to warrant cancellation? This is not so easy to answer, the extent of the harm. Cop shows may attract people who already have a rosier view of the police, impacting various studies, in addition to creating such views (in the same way, SVU may attract those with pre-existing higher understandings of sexual assault, alongside having an effect on others). These differences can be difficult to parse out. Yet even if correlational direction were clearly established, with the show found guilty of perspective creation far more than facilitation or reinforcement (admittedly a problem in itself, to a lesser degree), the real challenge would remain. Answering the question that matters most. Is the series doing enough damage to actually delay or prevent crucial police reforms? Or abolition, if that is your philosophy. The instinctive answer is yes. How could more favorable views of law enforcement not hinder reform efforts? But, like demonstrating the extent to which copaganda is increasing popular devotion to the police, the extent to which this devotion would actually prevent the sweeping changes to policing necessary for a more decent society remains unclear. One needs sufficient evidence, serious research that this writer is unsure exists. At this stage, we have a vague understanding that these shows spread positive, unrealistic views of the criminal justice system, which theoretically could make public policy changes harder to pass — but it could turn out that the effects, regarding both points, are too minimal to warrant much concern. We might cancel cop shows and have virtually no impact, longterm or otherwise, on conservative ideology, which emerges from and is maintained by many sources. We do not know.

This means that each person must choose for herself. We need more nuance than Oliver provided, though the solution is about as ambiguous. If you imagine the show is meaningfully stalling social change, the answer to the headline is obvious: stop watching immediately. It is not “completely fine” to continue. But if you suspect that reforms (or abolition) will be about as difficult to win with or without the existence of SVU, or if you only change behavior when sufficient evidence demands it, keeping Detective Benson as a guilty pleasure is not such a big deal. Either path could be correct, given our limited knowledge at this time. Personally, as may be obvious, I somewhat question the efficacy of cop shows at delaying social change, but acknowledge this serves nostalgia and bias (freeing me to continue watching without guilt) and may not be the most moral position (why risk a delay of any kind, with black folk being murdered in the streets for no reason?), which pushes me in the other direction. I wrestle with this, but my doubts have not yet allowed for a goodbye. No serious advice can be offered here — no “stop watching” or “enjoy.” Whether you earnestly think all this is doing serious societal harm will determine your answer.

This will help answer other questions, too, such as Is it hypocritical to be a leftwing critic of the police while enjoying copaganda? Or Does a negative impact on viewers affect whether Mariska Hargitay can be called one of the greatest, if not the greatest, female leads in television history? And, perhaps naively, Could these fictions be reframed in the public mind as aspirational? In other words, real-world policing is dreadful, what reforms can we pass to make it more like a televised ideal? (No, SVU is not actually ideal or the best model in any fashion, simply a tiny step up in a few ways, with officers who care, justice that’s done, racism under control, bad cops intolerable and locked away, etc.) One’s answers to these things depend on how powerful the medium is judged to be.

For those who are still watching, in more than one sense, we can turn to quality (more in the vein of my piece When to Stop Watching ‘The Walking Dead’), a much shorter discussion that includes a couple of spoilers.

In my view, SVU was a well-made show for an exceptionally long time. Even after Stabler vanished following season 12, the Amaro, Barba, and Carisi era was not to be missed. The writing, of both story and dialogue, remained compelling, as did the acting. The viewer’s cycle of tears, rage, and satisfaction was as powerful as ever. Of course, the show’s attempts to tackle race in 2013, around the beginning of Black Lives Matter, were predictably disastrous (Reverend Curtis Scott is the new black pastor character who represents both fictional and real-world hyperbolic protesters foolishly questioning police decisions), about as painfully cringe-inducing as the Brooklyn Nine-Nine attempt at blending comedy, lovable goofball cops, and serious criticism of racial injustice in its final season. Beyond this, and the fact that practically everyone Olivia Benson knows is revealed to be a rapist, seasons 14 through 17 remain highly watchable. Season 18 offers new opportunities to relive trauma, running in 2016-2017 and copy-pasting horrific events from the Trump era, of course without saying his name, such as the wave of hate crimes that occurred after his election. I wish I had stopped watching after season 17. In the finale a character dies and there is a gut-wrenching funeral scene; let the show be buried there. I did not care to experience various Trump headlines again. Season 18 does maintain its quality, however, so it might be worth it to some. But go no further!

In season 19, everything begins to go wrong. The dialogue and acting feel slightly off, as if they took a 5% hit in quality. It’s not huge, but it gnaws at you. The story writing really starts to slide. Benson’s son is nearly taken away from her due to a custody fight, nearly taken away from her due to a bruise on his arm, and then finally is taken away from her in a kidnapping, all in the space of some 10 episodes! It all has a more melodramatic, soap opera vibe. The rest of the outings begin to feel a bit repetitive, too, despite a slight shift to the personal lives of the main characters — after nearly two decades of episodes, that happens. Barba leaves, which is almost as sad as his entirely unconvincing, lifeless near-relationship with Benson (the idea that Barba isn’t gay is absurd). I quit before the season was over, wishing I had done so earlier. From what I hear, the situation has only gotten worse.

SVU has just been renewed for a 25th season.

For more from the author, subscribe and follow or read his books.

The Ku Klux Klan as Extension

In 1871, a congressional committee investigated Ku Klux Klan terror in the Reconstruction South. The testimony offered to (and the findings of) the “Joint Select Committee to Inquire into the Condition of Affairs in the Late Insurrectionary States” aid scholars in answering an important historical question: How did Americans — Northerners, Southerners, black, white, white-hooded, and more — view Klan activities and violence as they related to Southern history, whether recent or deep? Based on the evidence, it is safe to posit that Northern sympathizers viewed the Klan as an extension of historical Southern disorder, while Southern apologists saw it as rooted in traditions of Southern order. Both historical contexts were of course defined by the need to preserve white supremacy. Interestingly, this thesis also prompts us to consider where Americans placed the Confederate army on a spectrum of blame for the war.

The majority report, issued by the Republicans on the committee, drew a connection between the South’s insurrectionary strain in the early 1860s and the “cowardly midnight prowlers and assassins who scourge and kill the poor and defenseless” that followed.[1] Although “less than obedience” from Southerners “the Government cannot accept,” Klan sentiment was comprehensible, even expected. “The strong feeling which led to rebellion and sustained brave men, however mistaken, in resisting the Government…cannot be expected to subside at once, nor in years,” the majority wrote.[2] The South’s rebellious streak was not yet wholly tamed. “It required full forty years to develop disaffection into sedition, and sedition into treason. Should we not be patient if in less than ten we have a fair prospect of seeing so many who were armed enemies becoming obedient citizens?”[3] In other words, while the Klan tortured, raped, and murdered blacks for exercising their new rights as citizens and achieving economic success and community development, many white Southerners had fallen back in line — the mindset of disorder and insurrection was being purged, but more time was needed.

Interestingly, while centering the Klan in “remnants of rebellious feeling, the antagonisms of race, [and] the bitterness of political partisanships,” the Republicans also sought to frame the organization as a disgrace to the Confederate army, as if the military had been divorced from such elements.[4] Confederate soldiers were “brave men,” as noted, who made an “enormous sacrifice of life and treasure,” truly “magnanimous enemies,” but the Klan “degrade[d] the soldiers of Lee and Johnston into” nothing but cutthroat bandits.[5] The committee majority understood that former Confederate soldiers and Klansmen were often one and the same.[6] Here the Republicans issued an appeal to soldierly pride and military order or decorum — the Confederate army was an honorable force, operating under the rules of war, it and each combatant simply following orders; the Klan was lawless, its vigilante violence in homes and churches a far cry from proper clashes on the battlefield. It was no place for a good soldier. The KKK, then, was an extension of Southern rebelliousness, but not an extension (rather, a devolution) of the mechanism of that rebellion, the Confederate military. These ideas were expressed in the same paragraph of the report, and it appears no contradiction was found, which may suggest that Republican officials of the era indeed saw the rebel army as in some fashion outside insurrectionary elements of the South, or secondary to them, i.e. a mere tool of secessionist public officials. If this public presentation represented sincere belief, no inconsistency exists. Yet it could be, if Republicans privately thought differently, that this was a valid contradiction far too useful to be noticed or corrected: it was too important to both find the roots of the Klan in Southern disobedience to government and to urge true soldiers not to partake in disorder (the press covered the hearings closely, so the appeal would find readers).[7]

White Southerners and Klansmen, of course, saw the KKK as evolving from rather different historical trends. How explicit was former Confederate soldier William M. Lowe of Alabama when he testified before the committee that “The justification or excuse which was given for the organization of the Ku-Klux Klan was, that it was essential to preserve society,” for given “the feebleness with which the laws were executed, the disturbed state of society, it was necessary that there should be some patrol… [This] had been a legal and recognized mode of preserving the peace and keeping order in the former condition of these States.”[8] “And it was, therefore,” a committee member asked, “natural that it should be resumed?” Lowe confirmed. The Klan, then, was an extension of the slave patrols of the antebellum South. Interest in maintaining law and order was again rooted in the control and subjugation of blacks, evidenced not only by Klansmen’s documented terror but by how they described perceived threats to white society during the hearings.[9] For example, General Nathan Bedford Forrest, reputedly the Klan’s first Grand Wizard, testified that blacks were “becoming very insolent,” and Southern whites were “alarmed,” afraid they would be “attacked.”[10] White “ladies were ravished by some of these negroes,” who would also “kill stock” and “carry arms.”[11] The Klan formed to “protect the weak; to protect the women and children,” and to prevent “insurrection” and black vengeance.[12] (Identical concerns motivated whites to fight for the Confederacy, according to historian Chandra Manning.[13]) Haiti had fallen to black revolutionaries, Forrest said, and it was critical the same did not occur in the South.[14] In sum, the Klan was not the real lawless force — it existed to “enforce the laws” in a dangerous time.[15] This indeed mirrored the function of slave patrols, which sought to maintain white dominance.[16] The Klan was seen as the natural successor to or resumption of former systems of order and oppression.

Of course, the irony of insurrectionist soldiers framing their violence against black voters, politicians, landowners, businesses, churches, schools, etc. as preventing insurrection was either lost or ignored — or contemporarily nonexistent.[17] It is difficult to know which from these texts. Again, there is room for questions concerning how 1870s Americans, this time including Southerners, saw the Confederate army. If it was judged far less culpable in the rebellion than Confederate legislators, a simple tool, then irony would be more a modern construction, imagined by a resident of the twenty-first century with rather different views. But if the army was thought to be less removed from the insurrection, as central to it as the politicians, then Forrest’s framing was cynical, hypocritical. Given Manning’s research on soldiers’ motivations, cited above, there may be a case for this. Still, popular assessment of institutional responsibility could remain distinct from common individual motivations.

To conclude, the idea that Northerners and Southerners viewed the Ku Klux Klan differently, as an extension of rebellious tendencies or proper white law enforcement, is as well-supported in the 1871 hearing documents as it is expected. Yet its full exploration not only replaces mere assumption with historical evidence, it reveals unexpected nuances and generates new historical questions. Future studies should examine Americans’ private thoughts on “the Klan in historical context,” the Klan as successor, utilizing letters, journals, and so on — the hearings only offer public sentiments. Historians should also explore the new, associated problems, gathering public and private texts. Outlining to what extent the Confederate army was considered insurrectionary, compared to state leaders, will advance our understanding of the mentalities of hearing participants, and be a worthwhile contribution to the field in its own right.

For more from the author, subscribe and follow or read his books.


[1] Shawn Leigh Alexander, Reconstruction Violence and the Ku Klux Klan Hearings (New York: Bedford/St. Martin’s, 2015), 127.

[2] Ibid.

[3] Ibid.

[4] Ibid.

[5] Ibid.

[6] Ibid., 127, 102, 113.

[7] Ibid., 10.

[8] Ibid., 118.

[9] Ibid., 35-102 for testimony on KKK violence and intimidation.

[10] Ibid., 108.

[11] Ibid., 108, 113.

[12] Ibid., 109.

[13] Chandra Manning, What This Cruel War Was Over: Soldiers, Slavery, and the Civil War (New York: Vintage, 2008), 12, 36-39, 217-218.

[14] Alexander, Hearings, 112.

[15] Ibid., 110. See Lowe’s remarks on page 118.

[16] Vanessa Holden, Surviving Southampton: African American Women and Resistance in Nat Turner’s Community (Urbana: University of Illinois Press, 2021), especially chapter one.

[17] Alexander, Hearings, 7, 35-102.

The Lincoln-Douglas Debates: Questioning Supreme Court Power

A study of events relating to the Supreme Court’s Dred Scott decision of 1857, such as the Lincoln-Douglas debates that occurred the following year, can help answer an interesting historical question: How did politicians of 1850s America understand the concept of checks and balances? Or, more specifically, how did they want judicial checks to be publicly understood? This is not so straightforward. True, the concept of “checks and balances” and its role in maintaining the “separation of powers” — ideas articulated by Montesquieu in his 1748 The Spirit of the Laws — were foundational to the design of the U.S. Constitution of 1787 (justified in the Federalist Papers, such as 47, 48, and 51). But the Lincoln-Douglas debates suggest (public-facing) perceptions of mutual regulation could differ dramatically among antebellum politicians, and were a bit dissimilar to modern understandings, in that the efficacy of judicial checks was more readily doubted.

The Dred Scott decision declared that black Americans, even free persons, were not citizens of the United States and were not entitled to associated rights. Further, it was decreed unconstitutional to prohibit slavery in U.S. territories. The Missouri Compromise of 1820, which had created such a prohibition, and the Kansas-Nebraska Act of 1854, which had allowed territorial residents to decide the issue for themselves, were nullified. This was only the second time the Supreme Court had overturned federal law, and the first time it had rejected a major one.[1] Marbury v. Madison in 1803 explicitly established the Supreme Court’s power to wield such a check when it overturned a minor provision of federal legislation.[2] Article III of the Constitution, rather short indeed, does not specifically grant this power; it had to be established through interpretation. The novelty of what occurred with Dred Scott thus left room for many questions.

Interestingly, the pro-slavery politician Stephen A. Douglas, Abraham Lincoln’s rival candidate for a U.S. Senate seat representing Illinois, publicly questioned the effectiveness of the Dred Scott ruling. He had no qualms about the decision — he would “always bow in deference” to the Court, and thought Lincoln’s objections were misguided for a nation “made by the white man, for the benefit of the white man.”[3] But he wondered in the famous debates whether total openness to slavery in the territories could be enforced, saying that “if the people of a territory want slavery they will have it, and if they do not want it they will drive it out… Slavery cannot exist a day in the midst of an unfriendly people with unfriendly laws.”[4] Americans out west would need the proper local legislation and police enforcement to either ban slavery or protect it.[5] The Court’s ruling did not matter. Here Douglas’ view was perhaps slanted by an earnest devotion to popular sovereignty. He wrote the Kansas-Nebraska Act, and in 1857 opposed the admission of Kansas as a slave state because its residents did not desire it.[6] He defended the popular will more fiercely than slavery. But Douglas was perhaps also aiming not to lose moderate Illinois voters — too much allegiance to Dred Scott in a free state could be a political mistake, so it was better to stress respect for the decision while doubting its effectiveness. In any event, we observe an interesting view on a seismic judicial check in the 1850s: it is meaningless in practice. In modern times, with many Supreme Court declarations of unconstitutionality under our belt, such a perspective is rarer. Lincoln derided Douglas’ theory of slavery’s survival, saying it was historically untrue and that territorial legislatures would have no choice but to tolerate slavery, just as they had earlier been forced to tolerate freedom.[7]

Lincoln also questioned the Court’s efficacy, from a different angle. After criticizing perceptions of the “sacredness” of the Dred Scott ruling, pointing out that courts change their minds and overrule their prior decisions all the time, Lincoln essentially wondered whether Congress could overturn or ignore a decision of the Supreme Court. Again, the relative novelty of what the Court had done, Lincoln’s sincere perspectives on checks and balances, and his desire to gain anti-slavery voters must all be considered as factors in such an astounding proposition. “Douglas will have it,” Lincoln said, “that all hands must take this extraordinary decision, made under these extraordinary circumstances, and give their vote in Congress in accordance with it, yield to it and obey it in every possible sense.”[8] He then pointed out that decades prior, the Court had ruled that a national bank was constitutional (note this did not throw out established federal legislation, but upheld it). But later President Andrew Jackson “said that the Supreme Court had no right to lay down a rule to govern a co-ordinate branch of the government…”[9] He vetoed Congress’ recharter of the bank, declaring it unconstitutional. In the 1830s, the belief that the judicial branch could regulate and guide the legislative branch was not so universal, despite Marbury. One may be tempted to wave this off as part of Jackson’s personal penchant for ignoring restrictions on his authority, but here Lincoln is asking similar questions about the Court’s power in the 1850s, and pointing out Douglas once asked them as well: “I will venture here to say, that I have heard Judge Douglas say that he approved of General Jackson for that act.”[10] Lincoln insisted that “each member [of Congress] had sworn to support [the] Constitution as he understood it.”[11] Should Supreme Court understandings supersede congressional or presidential understandings? What Lincoln heavily implies here — he never goes so far as confident assertion — is that if another branch of government could reject a judicial finding of constitutionality, it could also reject a finding of unconstitutionality.[12] Douglas, despite any earlier ideas on a similar but not identical case, marveled that anyone would insinuate Congress could reverse a Court decision.[13]

Research into other documents will provide a wider view of what 1850s public officials made of the Court’s first overthrow of major federal legislation. Based on the debates, Douglas and Lincoln agreed with Chief Justice Roger B. Taney, who pushed the Dred Scott decision through with questionable constitutional interpretations, that the Court had such a right.[14] Did others disagree? More likely, were more politicians speculating on what came after judicial rulings — such as whether federal, state, or territorial legislatures could wave them aside, as both Douglas and Lincoln suggested? What other concerns existed? And just as the debates do not show how widespread such questions were, they cannot be used to parse earnest belief from political theatre. How much did Lincoln and Douglas actually believe what they were saying, and how much was for power or ideological interests? We would need to turn to their private letters or journal entries and hope for comparative material, doing the same with other public officials offering subversive questions and bold interpretations in front of voters.



[1] Paul Finkelman, Dred Scott v. Sandford: A Brief History with Documents (New York: Bedford/St. Martin’s, 2017), 7.

[2] Ibid.

[3] Ibid., 195.

[4] Ibid., 204.

[5] Ibid.

[6] Ibid., 179-180.

[7] Ibid., 212-213.

[8] Ibid., 199.

[9] Ibid.

[10] Ibid.

[11] Ibid.

[12] Lincoln only flirts with contradiction concerning a legislature’s response to a Supreme Court ruling. When Douglas insisted that enslavement would not survive in some territories due to local lawmakers refusing to pass affirmative legislation or enforce the Court’s decision, Lincoln argued that they would nevertheless be forced to permit slavery (p. 213). To refuse was to “violate and disregard your oath” to the Constitution, and besides, “how long would it take the courts to hold your votes unconstitutional and void? Not a moment” (ibid). But here (two months earlier), Lincoln wonders whether Congress could flout the Court’s ruling, adding that “If I were in Congress, and a vote should come up on a question whether slavery should be prohibited in a new territory, in spite of that Dred Scott decision, I would vote that it should” (198). Members of Congress should vote how they understand the Constitution. However, this is not a true contradiction, as Lincoln appears to see Congress, not territorial legislatures, as possibly having the power to override the Court. One legislature is a federal branch, the other not. Douglas also comes close to contradiction with his insistence that the Dred Scott ruling be respected — it is “a rule of law binding on this country” made by “the ultimate tribunal on earth” (201) — while also insisting popular sovereignty would reign in the territories despite the decision. But he has wiggle room as well: what ought to be respected not always is.

[13] Finkelman, Dred Scott, 201.

[14] Ibid., 30-38, 194-195, 198-199.

Manliness in Grey and Blue

What role did manhood play in the Civil War? Beyond soldiering and fighting for country being a manly activity and duty, two historical realities stand out.

In What This Cruel War Was Over, historian Chandra Manning posits that ideologies of gender, while one factor of many, motivated Confederate soldiers to fight to preserve slavery. “Slavery,” she writes, “was necessary to white Southerners’ conception of manhood…” (Manning, 12). Its abolition would undermine gender constructions of the 1860s South. To be a man was to possess “mastery” over blacks, women, and children; it was also to see to the prosperity and protection of one’s family (ibid). Emancipation would overturn the social order and unleash violent acts of black vengeance, both destroying white families (12, 217-218). At the extreme, Southern soldiers feared white enslavement. The “hellish undertaking,” an Alabama private wrote, of “Lincoln & his hirelings” would ensure whites were “doomed to slavery” (39). Abolition would mean, another Confederate opined, “fire, sword, and even poison as instruments in desolating our homes, ruining us…” (38). White male control over white women would slip away alongside control over blacks, with one soldier from Georgia writing that slaves were already discussing “whom they would make their wives among the young [white] ladies” (36). Slavery had to be protected to preserve authority over others and the security of families, which were central to white male identity.

Manning further argues that black men recognized a link between slavery and manhood. (Of course, this was likewise not the only reason they fought for abolition.) Slavery stripped a man of what he held dear: the ability to protect his family, his humanity and dignity, and so on (12, 219). Only through abolition could the black man become a full man, in the individual and collective sense. Myths of inferiority, animality, and childishness could be washed away with the courage, agency, principles, and effectiveness displayed while serving in the Union army (129). A black soldier wrote of fighting for “the foundation of our liberty” and the “liberty of the soul” to “sho forth our manhood” (130). Another, from Missouri, aimed to reestablish possession and protection of his children when he wrote to their mistress and declared he was coming for them: the mistress would “burn in hell” if she further interfered with his “God given rite” to have his own children (ibid). Another black volunteer declared the war would help the race “attain greatness as a type in the human family” (ibid). For African American troops, to be a man was to be free, to be independent, to protect one’s family; it was also to be considered as much a man as any white male. Thus, slavery had to be destroyed.


Glorifying the Bible, Constitution, or Declaration Is Always a Moral Dead End

Originalism — trying to follow the intent of the writers of the Constitution — is a risky business. So is basing one’s ethics on the bible. Why? Because you may end up looking like Mr. Taney or Mr. Dew.

The Supreme Court’s 1857 Dred Scott decision declared that black Americans, even free ones, could not be citizens of the United States and were not entitled to rights. The majority opinion, written by Chief Justice Roger B. Taney, stated that blacks

are not included, and were not intended to be included, under the word “citizens” in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States. On the contrary, they were at that time [1787] considered as a subordinate and inferior class of beings who had been subjugated by the dominant race, and, whether emancipated or not, yet remained subject to their authority, and had no rights or privileges but such as those who held the power and the government might choose to grant them. It is not the province of the court to decide upon the justice or injustice, the policy or impolicy, of these laws. The decision of that question belonged to the political or lawmaking power, to those who formed the sovereignty and framed the Constitution. The duty of the court is to interpret the instrument they have framed with the best lights we can obtain on the subject, and to administer it as we find it, according to its true intent and meaning when it was adopted…

Taney reiterated:

[Blacks] had for more than a century before been regarded as beings of an inferior order, and altogether unfit to associate with the white race either in social or political relations, and so far inferior that they had no rights which the white man was bound to respect, and that the Negro might justly and lawfully be reduced to slavery for his benefit. He was bought and sold, and treated as an ordinary article of merchandise and traffic whenever a profit could be made by it. This opinion was at that time fixed and universal in the civilized portion of the white race.

The Declaration is mentioned often as well:

“We hold these truths to be self-evident: that all men are created equal; that they are endowed by their Creator with certain unalienable rights; that among them is [sic] life, liberty, and the pursuit of happiness…”

The general words above quoted would seem to embrace the whole human family, and if they were used in a similar instrument at this day would be so understood. But it is too clear for dispute that the enslaved African race were not intended to be included…

Clearly, basing one’s beliefs and policy positions on older documents from barbaric times is a fine way to continue the barbarism. This is true whether or not you judge originalism to be the proper method of legal interpretation. In the context of American slavery, the same continuation occurred with the bible. In the antebellum era Thomas R. Dew, president of the College of William and Mary, denied

most positively that there is anything in the Old or New Testament which would go to show that slavery, when once introduced, ought at all events to be abrogated, or that the master commits any offense in holding slaves. The children of Israel themselves were slaveholders and were not condemned for it. All the patriarchs themselves were slaveholders; Abraham had more than three hundred, Isaac had a “great store” of them; and even the patient and meek Job himself had “a very great household.” When the children of Israel conquered the land of Canaan, they made one whole tribe “hewers of wood and drawers of water,” and they were at that very time under the special guidance of Jehovah; they were permitted expressly to purchase slaves of the heathen and keep them as an inheritance for their posterity; and even the children of Israel might be enslaved for six years.

When we turn to the New Testament, we find not one single passage at all calculated to disturb the conscience of an honest slaveholder. No one can read it without seeing and admiring that the meek and humble Saviour of the world in no instance meddled with the established institutions of mankind; he came to save a fallen world, and not to excite the black passions of man and array them in deadly hostility against each other. From no one did he turn away; his plan was offered alike to all—to the monarch and the subject, the rich and the poor, the master and the slave. He was born in the Roman world, a world in which the most galling slavery existed, a thousand times more cruel than the slavery in our own country; and yet he nowhere encourages insurrection, he nowhere fosters discontent; but exhorts always to implicit obedience and fidelity.

What a rebuke does the practice of the Redeemer of mankind imply upon the conduct of some of his nominal disciples of the day, who seek to destroy the contentment of the slave, to rouse their most deadly passions, to break up the deep foundations of society, and to lead on to a night of darkness and confusion! “Let every man,” (says Paul) “abide in the same calling wherein he is called. Art thou called being a servant? Care not for it; but if thou mayest be made free, use it rather” (I Corinth. vii. 20, 21)… Servants are even commanded in Scripture to be faithful and obedient to unkind masters. “Servants,” (says Peter) “be subject to your masters with all fear; not only to the good and gentle but to the froward. For what glory is it if when ye shall be buffeted for your faults ye take it patiently; but if when ye do well and suffer for it, ye take it patiently, this is acceptable with God” (I Peter ii. 18, 20). These and many other passages in the New Testament most convincingly prove that slavery in the Roman world was nowhere charged as a fault or crime upon the holder, and everywhere is the most implicit obedience enjoined.

Here Dew argues that the bible looks upon slavery approvingly, which justifies American slavery. One should avoid saying “The bible was used to justify slavery,” as is common. First, this implies the bible was twisted, distorted in some way. Not really: the text was written in a slave society — of course it isn’t going to declare slavery immoral and worthy of abolition. It was written in a society of absolute male rule and horror over homosexuality, of course it calls for a boot on the neck of women and gays. These were primitive desert tribes. Their characters, including God himself and the biblical heroes, ordered and carried out such oppression (see Absolutely Horrific Things You Didn’t Know Were in the Bible). Even those who insist that God decided to switch from barbarism to loving one’s neighbor with the arrival of Christ — who believe that Jesus marked the change for humanity, when the crushing of slaves, women, and gays suddenly became immoral and against God’s Will — will notice that the oppression of all three groups continues in the New Testament (which is still the inspired and flawless Word of God), as seen in Dew’s writing and my Horrific Things. As one might expect from the brutal Iron Age of the Middle East. (Note how an atheist in the twenty-first century and the religious, pro-slavery head of an Anglican college in the early 1800s can agree: it’s fairly obvious the bible has no moral issue with slavery.) Second, “The bible was used to justify slavery” is passive voice that erases the doer and implies that religious beliefs were solely an afterthought in propping up the “peculiar institution.” “Many Christians used the bible to justify slavery” is better — someone is involved at last — but “Many Christians believed the bible justified slavery and said so” is best. These weren’t all just enslavers searching for ways to excuse what they were doing and at some point thought the bible could help them out. Perhaps some followed that path, but most Southerners were Christians (like most Americans) who believed in the scriptures long before they began defending slavery publicly. It’s how people were raised, in the one true religion that condoned enslavement. Most slavery advocates were sincere believers, some even pastors, who did not consider slavery wrong because of what their sacred text said. “Whoever believes that the written word of God is verity itself,” a Richmond paper noted, “must consequently believe in the absolute rectitude of slave-holding.” No one can deny the economic and racial motives of pro-slavery Americans, but neither should earnest religious belief be ignored. Many factors were at work.

Taney and Dew held repugnant views, all will agree. But many today race to be just like them. The bible oppresses women and gays; therefore gays should have no right to marry (58% of weekly churchgoers still oppose same-sex marriage), adopt, or be served in places of business, and women should not be pastors (the largest Protestant denomination just expelled five churches for having female ministers, citing scripture). Many deeply conservative Christians would nod approvingly over the former, while frowning in distaste at the latter. The question for them is obvious. If Dew was wrong, why are you right? Why is it permissible for the modern believer to reject the bible’s approval of slavery or women’s subordination, but not its condemnation of gay people? Cherrypicking indeed. And if Taney’s originalist view of the Constitution led him to moral trouble, and brought calamity upon black Americans, we should probably be more skeptical of the document, more careful not to glorify it. Constitutions or declarations of independence written in 2023 wouldn’t accept slavery or racism, wouldn’t tolerate unfree persons worth three-fifths of a human being, nor edicts that slaves who make it to free states are not free, nor “merciless Indian Savages,” for one minute. (See also How the Founding Fathers Protected Their Own Wealth and Power.) Our patriotic texts were written in an indecent time as well. We live in a more civilized society now. You can still believe originalist readings are best legal practice, but you must recognize that original intentions can be wrong and must be willing to push wholeheartedly for amendments to eradicate such wrongs.

Old texts are troublesome. See, Taney and Dew were right — the bible does offer plenty of support for slavery, the Founding Fathers did not envision black political equality. The lesson here is to think more critically about documents of the past. To recognize the risk of going to morally flawed works for moral or legal guidance.

Of course, there are plenty of moral edicts and actions to be found in the bible (“Be kind and compassionate to one another, forgiving each other, just as in Christ God forgave you,” Ephesians 4:32), and other antebellum Christians believed the bible did not approve of slavery and said so — as most Americans were Christians, it is also the case that most Northerners and abolitionists were Christians (study the fiery, admirable Quakers, for instance). All sorts of beliefs and interpretations can spring from books containing much good and much bad. Obviously, there is also much that is valuable in the Declaration and Constitution. I wrote elsewhere that “the U.S. Constitution was a strong step forward for representative democracy, secular government, and personal rights, despite the obvious exclusivity, compared to Europe’s systems.” There is a lot to appreciate. And sometimes originalism produces moral outcomes, as one would expect from documents with much good in them (liberal justices use originalism as well; both sides use it when beneficial and reject it when inconvenient). We simply have to recognize the bad that comes with the good, and do something about it. The Constitution should be changed for the better, as it has been over two dozen times since the national founding, with amendments overriding the original articles. The moral flaws of the bible can simply be ignored, rejected from personal belief and public policy; most believers ignore how the New Testament lifts up slavery and male rule already — society can be far more decent than that — and should do the same with its antigay sentiment.

That’s how the moral person regards foundational texts from more backward, oppressive times. Don’t glorify. Keep what’s good. Burn the rest.


‘The Chinese Question’: How Economics Molded Racism, Which Molded Economics

With good reason, a 2022 Bancroft Prize went to The Chinese Question: The Gold Rushes, Chinese Migration, and Global Politics, by historian Mae Ngai of Columbia University. Ngai makes two major contributions. First, the work adds to the field’s understanding of how politico-economic concerns can create or influence racist beliefs. Second, it offers an important new observation on how China fell behind the West, adding late nineteenth-century factors to those of the eighteenth and nineteenth centuries highlighted by scholars like Kenneth Pomeranz. Ngai’s central contributions are in fact linked: economics and politics impacted racism, which impacted economics and politics.

In the United States, Australia, and South Africa, Chinese migrants and entrepreneurs pursuing gold mining and other enterprises met fierce resistance from whites. In California, for instance, politicians seeking votes whipped white miners, already concerned about the growing Chinese population, into a frenzy in the 1850s, resulting in discriminatory acts and violence (Ngai 85-88). Miners from China were framed as threats to white jobs, as a danger to the entire labor system. Whites falsely cast the Chinese as “coolies,” or indentured workers to bosses in China, servile by nature and paid little if anything to travel overseas and mine for gold. How could U.S. mining companies and white workers with higher wages — the free labor system — possibly compete with this? All this paralleled white worker anxiety over black slavery, concerns over displacement (ibid), only it was perhaps made more acute by the understanding that the valuable metal was a limited resource. The Chinese were invaders, robbing the U.S. of gold and with it opportunity (85, 134). This perception led to racist legislation, including exclusion laws that, beginning in the 1870s, barred the immigration of ethnic Chinese persons (149-153). Though the British colonies of Australia and South Africa had unique experiences, certain interests in these places also stoked racism, at times centered around the Chinese race as a “moral menace,” “industrial evil” (269), or heathen invasion force (111), and eventually led to exclusion as well.

These policies hurt the growth of China, Ngai argues. “Anglo-American settler racism” played a role in “the development of global capitalism” (5). “Exclusion” specifically was “integral” (2). Opponents of exclusion had warned that ending migration would be a blow to trade and commerce, and it appears they were correct (274). “Exclusion meant fewer outlets for Chinese merchants and investors abroad,” as they were denied entry and business (274-275). Chinese capitalists were cut off from the most powerful and richest Western nations. They had to focus on southeast Asia. The restriction of the Western market decimated China’s tea industry, previously 55% of all exports (280). “Between 1886 and 1905, the volume of China’s annual tea exports fell by more than half, from 246 million pounds to 112 million pounds” (ibid). The U.S. had brought in 65% of its tea from China in 1867, but by 1905 it was down to 23% (281). Animus against the Eastern nation and its people had come to “outweigh…all other considerations, including those of a commercial nature,” an Australian analyst noted at the time (ibid). “The myriad nations all trade with each other,” Huang Zunxian bemoaned in his poem “Exclusion of the Immigrants,” so “how can the Chinese be refused?” (271). As a further economic consequence of immigration bans, the Chinese in British colonies and America found it more difficult to send money (in strong pounds and U.S. dollars) back to families in China, and the number of such workers who could even attempt this was of course capped (286-288).

Ngai writes that her “intention is to clarify racism’s historical origins and reproduction as a strategy of political interests” (xviii). Placing anti-coolieism, anti-Chinese racism, under the lights is as important as past scholarship that considered how racist depictions of Africans developed to justify slavery (see Harman, A People’s History of the World) — unintelligent savages could only benefit from enslavement. Economic interests pushed forward racist narratives. Likewise, notions of servile, inferior Chinese slaves (85, 107-108) served and protected white miners. Ngai’s text further serves to “illuminate how the politics of the Chinese Question was part of the ‘great divergence’ between the West and China in the nineteenth century” (310). The exclusion acts that shut the door on Chinese immigration and trade were a contributing factor, or perhaps a solidifying one, in the West surpassing China in economic might and development, alongside the factors uncovered by prior historians, such as proximity to coal and the existence of overseas colonies. It represents another blow to the old notion that the “Great Divergence” was a story of “inherent superiority or inferiority of Western versus Asian civilizations” or their capitalisms (309). Demonstrating economic contributions to racial ideologies would make for a powerful book, as would showing how racist beliefs and their policies impacted the relative power of global economies — doing both is award-worthy.


A Scathing Review of the Last History Book I Read

Historian Vanessa M. Holden’s Surviving Southampton: African American Women and Resistance in Nat Turner’s Community argues that the August 1831 slave uprising in Virginia commonly known as Nat Turner’s Rebellion was in fact a community-wide rebellion involving black women, both free and unfree.[1] Holden writes that the event should be called the “Southampton Rebellion,” indicative of the county, for it “was far bigger than one man’s inspired bid for freedom.”[2] A community “produced [Turner]” and “the success of the Southampton Rebellion was the success of a community of African Americans.”[3] The scholar charts not only women’s everyday resistance prior to the revolt, participation in the uprising, and endurance of its aftermath, but also that of children. Sources are diverse, including early nineteenth-century books and Works Progress Administration interviews, and much material from archives at the Library of Virginia, the Virginia Historical Society, and the Virginia Museum of History and Culture.[4] Holden is an associate professor of history and African American studies at the University of Kentucky; her work has appeared in several journals, but Surviving Southampton appears to be her first book.[5] Overall, it is one of mixed success, for while community involvement in the revolt is established, some of Holden’s major points suffer from limited evidence and unrefined rhetoric.

This is a work of — not contradictions, but oddities. Not fatal flaws, but sizable question marks. For a first point of critique, we can examine Holden’s second chapter, “Enslaved Women and Strategies of Evasion and Resistance.” While it considers enslaved women’s important pre-revolt “everyday resistance,” such as “work stoppages, sabotage, feigned illness, and truancy,” plus the use of code and secret meetings, it offers limited examples of women’s direct participation in the Southampton Rebellion.[6] There are two powerful incidents. A slave named Charlotte attempted to stab a white owner to death, while Lucy held down a mistress at another farm to prevent her escape.[7] After the revolt was quelled, both were executed.[8] The chapter also details more minor happenings: Cynthia cooked for Nat Turner and the other men, Venus passed along information, and Ester, while also taking over a liberated household, stopped Charlotte from killing that owner, which one might describe as counterrevolutionary.[9] This is all the meaningful evidence that comprises a core chapter of the text. (It is telling that this chapter has the fewest citations.[10]) It is true that Holden seeks to show women’s participation in resistance before and after the Southampton Rebellion, not just during its three days. Looking at the entire book, this is accomplished. But to have so few incidents revealing women’s involvement in the central event creates the feeling that this work is a “good start,” rather than a finished product. And it stands in uncomfortable contrast to the language of the introduction.

Holden notes in the first few pages of Surviving Southampton that historians have begun adopting wider perspectives on slave revolts.[11] As with her work, there is increasing focus on slave communities, not just the men after whom the revolts are named. “However,” Holden writes, “even though new critiques have challenged the centrality of individual male enslaved leaders and argued for the inclusion of women in a broader definition of enslaved people’s resistance, violent rebellion remains the prerogative of enslaved men in the historiography.”[12] To scholars, Holden declares, “enslaved men rebel while enslaved women resist.”[13] She is of course right to challenge this gendered division. But a chapter 2 that is light on evidence does not suffice to fully address the problem. The rest of the book does not help much — chapter 3, on free blacks’ involvement in the revolt, features just one free woman of color, who testified, possibly under coercion, in defense of an accused rebel, stating that she had urged him not to join Turner.[14] Not exactly a revolutionary urging, though she was saving a man’s life in court, a resistive act. Charlotte and Lucy were certainly rebels, and one might describe those who provided nourishment, information, or legal defense to the men using the same phrasing, but more evidence is needed to strengthen the case. Holden’s women-as-rebels argument is not wrong, it just needs more support than two to five historical events.

The position would be further aided by excising or editing bizarre, undermining elements, such as a passage at the end of the second chapter. There is a mention of the “divergent actions of Ester and Charlotte,” followed by a declaration that “instead of labeling enslaved women as either for or against the rebellion, it is more useful to understand enslaved women as embedded in its path and its planning.”[15] It is fair to say that we cannot fully know Ester’s stance on the revolution — she could have been against it and saved that enslaver, or she could have been for it and taken the same action. We do not actually know if she was counterrevolutionary. But Charlotte’s violent action surely reveals an embrace of the revolt. It is at least a safe assumption. Is Holden’s statement not stripping female slaves of their agency? Not for nor against rebellion, just in its path, swept up in the events of men? How can women be rebels if they are not for the rebellion? Here we do have a contradiction, and not just with the introduction, for nine pages earlier in chapter 2 the author wrote: “Past histories of the Southampton Rebellion regard Ester and Charlotte’s story as anomalous and their actions as spontaneous. However, their motives were not different from those of male rebels.”[16] Here the women have agency, their revolutionary motives purportedly known. The attempted stabbing was “as much a part of the Southampton Rebellion” as anything else.[17] It is a strange shift from empowering Charlotte, Ester, and enslaved women as freedom fighters to downplaying Charlotte’s act and advising one not to mark women as for the rebellion.

Language is a consistent problem in the book, and this is intertwined with organization and focus issues. This is apparent from the beginning. First one reads the aforementioned pages of the introduction, where it is clear Holden wants to erase a gendered division in scholarship and lift the black woman to one who “rebels,” not simply “resists.” The reader may then sit up, turn the book over, and wonder about the subtitle: African American Women and Resistance in Nat Turner’s Community. True, as we have seen, much of the text concerns everyday resistance before and after the uprising, but the choice of “Resistance” rather than “Rebellion” in the subtitle is slightly inconsistent with what Holden is rightly trying to do.

Similarly, look to an entire chapter that stands out as odd in a book allegedly focused on African American Women. Chapter 4 concerns children’s place in the Southampton Rebellion, and focuses almost exclusively on boys. In a short text — only 125 pages — an entire chapter is a significant portion. Why has Holden shifted away from women? Recall, returning to the introduction, that the University of Kentucky scholar aims to show that the revolt was a community-wide event. It was not solely defined by the deeds of men, nor women, nor slaves, nor freepersons — it also involved children, four of whom stood trial and were expelled from Virginia.[18] Here Surviving Southampton has a bit of an identity crisis. It cannot fully decide if it wants to focus on women or on the community as a whole. The title centers black women, as does Holden’s rebuke of the historiography for never framing women “as co-conspirators in violent rebellion…[only] as perpetrators of everyday resistance.”[19] Chapter 2 covers women to correct this. But the thesis has to do with the idea that “whole neighborhoods and communities” were involved.[20] Thus, the book has a chapter on children (boys), free black men alongside women, and so on. The subtitle of this work should have centered the entire community, not just women, and the introduction should have brought children as deeply into the historiographical review as women.

Finally, we turn to the author’s use of the phrases “geographies of evasion and resistance” and “geographies of surveillance and control.”[21] What this means is the how and where of oppressive tactics and resistive action. Geographies of resistance could include a slave woman’s bed, as when Jennie Patterson let a fugitive stay the night.[22] There existed a place (bed, cabin) and method (hiding, sheltering) of disobedience — this was a “geography.” Likewise, slave patrols operated at certain locations and committed certain actions, to keep slaves under the boot — a geography.[23] At times, Holden writes, these where-hows, these sites of power, would overlap.[24] The kitchen was a place of oppression and revolt for Charlotte.[25] Just as Patterson’s cabin was a geography of resistance, it was also one of control, as slave patrols would “visit all the negro quarters, and other places of suspected unlawful assemblies of slaves…”[26] Thus, the scholar posits, blacks in Southampton County had to navigate these overlaps and use their knowledge of oppressive geographies “when deciding when and how to resist,” when creating liberatory geographies.[27]

As an initial, more minor point of critique, use of this language involves much repetition and redundancy. Repetitive phrasing spans the entire work, but can also be far more concentrated: “Enslaved women and free women of color were embedded in networks of evasion and resistance. They navigated layered geographies of surveillance and control. They built geographies of evasion and resistance. These women demonstrate how those geographies become visible in Southampton County through women’s actions.”[28] Rarely are synonyms considered. As an example of redundancy, observe: “These geographies of surveillance and control were present on individual landholdings, in the neighborhood where the rebellion took place, and throughout the country.”[29] Geographies were present? In other words, oppressive systems Holden bases on place were at places. There are many other examples of such things.[30]

The “geography of evasion and resistance” is not only raised ad nauseam but also seems to be a dalliance with false profundity.[31] It has the veneer of theory but in reality offers little explanatory value. Of course oppressive systems and acts of rebellion operated in the same spaces; of course experience with and knowledge of the former informed the latter (and vice versa). This is far too trite to deserve such attention; it can be noted where appropriate, without fanfare. “Layered geographies of surveillance and survival” sounds profound, and its heavy use implies the same (note also that theory abhors a synonym), but it is largely mere description. Does the concept really help us answer questions? Does it actually deepen our understanding of what occurred in Patterson’s cabin or Charlotte’s kitchen? Of causes and effects? Does it mean anything more than that past experience (knowledge, actions, place) influences future experience, which is important to show in a work of history but is nevertheless a mere truism?

Granted, Holden never explicitly frames her “geography” as theory. But the historian consistently stresses its importance (“mapping” a resistive geography appears in the introduction and in the last sentence of the last chapter) and ascribes power to it.[32] After charting the ways enslaved women resisted before the rebellion, Holden writes: “Understanding the layered social and physical geography of slavery in Southampton and Virginia is important for understanding Black women’s roles in the Southampton Rebellion more broadly. Most remained firmly rooted to the farms where they labored as men visited rebellion on farm after farm late in the summer of 1831.”[33] Well, of course patterns — places, actions — of everyday resistance might foreshadow and inform women’s wheres and hows once Turner began his campaign. Elsewhere Holden notes that small farms and the nature of women’s work allowed female slaves greater mobility and proximity to white owners, a boon to resistance.[34] Women were “uniquely placed to learn, move through, and act within the layered physical and social geographies of each farm.”[35] Again, this is fancy language that merely suggests certain realities had advantages and could be helpful to future events. It goes no deeper, and it is truly puzzling that it is so emphasized. Such facts could have been briefly mentioned without venturing into the realm of theme and pseudo-theory.

Overall, Surviving Southampton deserves credit for bringing the participation of women, children, and free blacks in the 1831 uprising into the conversation. Our field’s understanding of this event is indeed broadened. But this would have been a much stronger work with further evidence and editing. Quality writing and sufficient proof are subjective notions, but that in no way diminishes their importance to scholarship. As it stands, this text feels like an early draft. Both general readers and history students should understand its limitations.



[1] Vanessa Holden, Surviving Southampton: African American Women and Resistance in Nat Turner’s Community (Urbana: University of Illinois Press, 2021), 5-10. 

[2] Ibid., 7.

[3] Ibid., 2, 6.

[4] Ibid., x, 132-134 for example.

[5] “Vanessa M. Holden,” The University of Kentucky, accessed March 2, 2023, https://history.as.uky.edu/users/vnho222.

[6] Holden, Surviving Southampton, 23, 35.

[7] Ibid., 28, 36.

[8] Ibid., 37, 81.

[9] Ibid., 28, 36.

[10] Ibid., 132-134.

[11] Ibid., 5.

[12] Ibid.

[13] Ibid., 6.

[14] Ibid., 52.

[15] Ibid., 37.

[16] Ibid., 28.

[17] Ibid.

[18] Ibid., 79.

[19] Ibid., 6.

[20] Ibid.

[21] Ibid., chapter 1 for instance.

[22] Ibid., 24.

[23] Ibid., 12-22.

[24] Ibid., 12.

[25] Ibid., 28.

[26] Ibid., 20.

[27] Ibid., 12.

[28] Ibid., 37.

[29] Ibid., 8.

[30] See ibid., 9: “The generational position of Black children as the community of the future was culturally significant and a pointed concern for African American adults, whose strategies for resistance and survival necessarily accounted for these children. Free and enslaved Black children and youths were a significant part of their community’s strategies for resistance and survival.”

[31] The near-irony of this paper’s phrasing is not lost.

[32] Holden, Surviving Southampton, 8, 120.

[33] Ibid., 25.

[34] Ibid., 34-35.

[35] Ibid., 34.

What Star Trek Can Teach Marvel/DC About Hero v. Hero Fights

What misery has befallen iconic franchises these days! From Star Wars to The Walking Dead, it’s an era of mediocrity. Creative bankruptcy, bad writing, and just plain bizarre decisions are characteristic, and will persist — fanbases will apparently continue paying for content no matter how dreadful, offering little incentive for studios to alter course. Marvel, for instance, appears completely out of gas. While a Spider-Man film occasionally offers hope, I felt rather dead inside watching Thor: Love and Thunder, Doctor Strange in the Multiverse of Madness, and Wakanda Forever. Admittedly, I have not bothered with She-Hulk, Quantumania, Hawkeye, Ms. Marvel, Eternals, Black Widow, Loki, Shang-Chi, WandaVision, or Falcon and the Winter Soldier, and probably never will, but reviews from those I trust often don’t rise above “meh.” Of course, I do not glorify Marvel’s 2008-2019 (Iron Man to Endgame) period as quite the Golden Age some observers do; there were certainly better movies produced then, but also some of the OKest or most forgettable: Incredible Hulk, Iron Man 2, Thor: The Dark World, Age of Ultron, Captain Marvel, Civil War, and the first 30 minutes of Iron Man 3 (I turned it off).

DC, as is commonly noted, has been a special kind of disaster. While Joker, Wonder Woman, The Batman, and Zack Snyder’s Justice League were pretty good, Justice League, Batman v. Superman, Suicide Squad, and Wonder Woman 1984, among others I’m sure, were atrocious. Two of these were so bad they were simply remade — try to imagine Marvel doing that; it’s difficult to do. Man of Steel, kicking off the series in 2013, was rather average. I liked the choice of a darker, grittier superhero universe, to stand in contrast to Marvel. But it wasn’t well executed. Remember Nolan’s The Dark Knight from 2008? That’s darkness done right. Joker and the others did it decently, too. But most did not. The DCEU is now being rebooted entirely, under the leadership of the director of the best Marvel film, Guardians of the Galaxy.

But Star Trek, it seems, has crashed and burned unlike any other franchise. Star Trek used to be about interesting, “what if” civilizations and celestial phenomena. It placed an emphasis on philosophy and moral questions, forcing characters to navigate difficult or impossible choices. It was adventurous, visually and narratively bright, and optimistic about the future of the human race, which finally unites and celebrates its infinite diversity and tries to do the same with other species it encounters. These things defined the series I watched growing up: The Next Generation, Voyager, Deep Space Nine, and Enterprise. The 2009 reboot Star Trek was more a dumb action movie (the sequels were worse), but at least it was a pretty fun ride. By most accounts the new television series since 2017 are fairly miserable: they’re dark, violent, gritty, stupid, with about as much heart as a Transformers movie (which is what Alex Kurtzman, the helmsman of New Trek, did prior). I have only seen clips of these shows and watched many long reviews from commentators I trust, save for one or two full episodes I stumbled upon which confirmed the nightmare. Those who have actually seen the shows start to finish may have a more accurate perspective. Regardless, when I speak of Star Trek being able to teach Marvel and DC anything, I mean Old Trek.

Batman v. Superman and Captain America: Civil War were flawed films (one more so) that got heroes beating each other up. A fun concept that I’m sure the comics do a million times better than these duds. The methods of getting good guys to fight, in my view, were painfully ham-fisted and unconvincing. The public is upset in both movies about collateral damage that the heroes caused when saving the entire world? Grow the fuck up, you all would have died. Batman wants to kill Superman because he might turn evil one day? Why not just work on systems of containment, with kryptonite, and use them if that happens? Aren’t you a good guy? Superman fights Batman because Lex Luthor will kill his mother if he doesn’t, when trying to enlist Batman’s help might be more productive? (Note that Batman finds Martha right away when their fight ends and they do talk; not sure how, but it happens.) Talking to Batman, explaining the situation, and working through the problem together may sound lame or impossible, but recall that these are both good guys. That’s probably what they would do. Superman actually tries to do this, right before the battle starts. The screenwriters make a small attempt to hold together this ridiculous house of cards, while still making sure the movie happens. Superman is interrupted by Batman’s attack. Then he’s too mad to just blurt it out at any point. “I need your help! We’re being manipulated! My mother’s in danger!” When your conflict hinges completely on two justice-minded people not having a short conversation, it’s not terribly convincing.

Civil War has the same problems: there’s a grand manipulator behind the scenes and our heroes won’t say obvious things that would prevent the conflict. They must be dumbed down. Zemo, the antagonist, wants the Avengers to fall apart, so he frames the Winter Soldier for murder. Tony Stark and allies want to bring the Winter Soldier in dead or alive, while Captain America and allies want to protect him and show that he was framed. If Cap had set up a Zoom call, he could have calmly explained the reasons why he believed Bucky was innocent; he could have informed Tony and the authorities that someone was clearly out to get the Winter Soldier, even brainwashing him after the framing to commit other violent acts. Steve Rogers’ dear friends and fellow moral beings probably would have listened. Instead, all the good guys have a big brawl at the airport (of course, no one dies in this weak-ass “Civil War”). Then Zemo reveals that the Winter Soldier murdered Tony Stark’s parents decades ago. This time Cap does try to explain. “It wasn’t him, Hydra had control of his mind!” He could have kept yelling it, but common sense must be sacrificed on the altar of the screenplay. Iron Man is now an idiot, anyway, a blind rage machine incapable of rational thought. Just like Superman. Who cares if Bucky wasn’t in control of his actions? Time to kill! So the good guy ignores the sincere words of the other good guy — his longtime friend — and they have another pointless fight.

Of course, these movies do other small things to create animosity between heroes, which is beneficial. Superman has a festering dislike of Batman’s rough justice, such as the branding of criminals. Batman is affected by the collateral damage of Superman saving the day in Man of Steel (how Lex Luthor knows Batman hates Superman, or manipulates him into hating the Kryptonian, is not explained). Tony Stark wants the government to determine when and how the Avengers act, while Steve Rogers wants to maintain independence. (The first position is a stretch for any character, as “If we can’t accept limitations we’re no different than the bad guys” is obviously untrue, given motivations, and limitations will almost certainly prevent these heroes from saving the entire world. Remember how close it came a few times? Imagine if you had to wait for the committee vote; imagine if the vote was “sit this one out.” It’s fairly absurd. But it would make a tiny bit more sense to have Captain America — the Boy Scout, the soldier — be the bootlicker following orders, not the rebellious billionaire playboy.) Still, the fisticuffs only come about because protagonists go stupid.

There are better ways to get heroes battling. If you want an evil manipulator and good guys incapable of communicating, just have one hero be mind controlled. Or, if you want to maintain agency, do what Star Trek used to do so well and create a true moral conundrum. Not “should we be regulated” or some such nonsense. A “damned if you do, damned if you don’t” scenario, with protagonists placed on either side. In the Deep Space Nine episode “Children of Time,” the crew lands on a planet that has a strange energy barrier. They discover a city of 8,000 people — their own descendants! They are in a time paradox. When the crew attempts to leave the planet, the descendants say, the energy barrier throws them 200 years into the past and their ship is damaged beyond repair in the ensuing crash. They have no choice but to settle there — leaving behind loved ones off-world and in another time, mourning their friends who died in the crash, and, most importantly, unable to return to the war that threatens the survival of Earth. The crew tries to figure out a way to escape the paradox. But they have a terrible moral choice to make. If they escape the energy barrier, they will end the existence of 8,000 people to save their own skins — the crash will never have occurred, thus no descendants. If they decide not to escape, not to avoid the crash, they will never see their loved ones again, friends will die, and the Federation may lose the war. This is a dilemma in the original sense of the word: there are no good options. Characters fall on different sides of the decision. No, Deep Space Nine isn’t dumb enough for everyone to begin punching each other in the face, but you see a fine foundation for such a thing to occur in a superhero film. You see the perspectives of both sides, and they actually make sense. You can see how, after enough time and argument and tension, good people might be willing to use violence against other good people, their comrades, to either save a civilization or win a war.

As a similar example, there’s the Voyager episode “Tuvix,” in which two members of that ship’s crew are involved in a transporter accident. The beaming combines them into a single, new individual. He has personality traits and memories of the two crewmen, but is a distinct, unique person. The shocked crew must come to terms with this event and learn to accept Tuvix. A month or two later comes the ethical dilemma: a way to reverse the fusion is developed. The two original crewmen can be restored, but Tuvix will cease to exist. Tears in his eyes, he begs for his life. What do you do? Kill one to save two? Kill a stranger to save a friend? Can’t you see Captain America standing up for the rights of a new being, while Iron Man insists that the two originals have an overriding right to life? Give good people good reason to come to blows. Such ideas and crises can be explored in the superhero realm just as easily as in Star Trek.

This is much more powerful and convincing than disagreements over — yawn — treaties and whether arm boy should die for events he had no control over.
