With Afghanistan, Biden Was in the ‘Nation-building Trap.’ And He Did Well.

You’ve done it. You have bombed, invaded, and occupied an oppressive State into a constitutional democracy, human rights and all. Now there is only one thing left to do: attempt to leave — and hope you are not snared in the nation-building trap.

Biden suffered much criticism over the chaotic events in Afghanistan in August 2021, such as the masses of fleeing Afghans crowding the airport in Kabul and clinging to U.S. military planes, the American citizens left behind, and more, all as the country fell to the Taliban. Yet Biden faced a dilemma in the original, sixteenth-century sense of the term: a choice between two terrible options. That’s the nation-building trap: if your nation-building project collapses after or as you leave, do you go back in and fight a bloody war a second time, or do you remain at home? You can 1) spend more blood, treasure, and years reestablishing the democracy and making sure the first war was not in vain, but risk being in the exact same situation down the road when you again attempt to leave. Or 2) refuse to sacrifice any more lives (including those of civilians) or resources, refrain from further war, and watch oppression return on the ruins of your project. This is a horrific choice to make, and no matter which option you would choose, there should be at least some sympathy for those who might choose the other.

Such a possibility should make us question war and nation-building, a point to which we will return. But here it is important to recognize that the August chaos was inherent in the nation-building trap. Biden had that dilemma to face, and his decision came with unavoidable attendant consequences. For example, the choice, as the Taliban advanced across Afghanistan, could be reframed as 1) send troops back in, go back to war, and prevent a huge crowd at the airport and a frantic evacuation, or 2) remain committed to withdrawal, end the war, but accept that there would be chaos as civilians tried to get out of the country. Again, dismal options.

This may seem too binary, but the timeline of events appears to support it. The withdrawal deadline was August 31; the Taliban offensive began in early May. By early July, the U.S. had left its last military base, marking the withdrawal as “effectively finished” (a detail often forgotten). Military forces remained only in places like the U.S. embassy in Kabul. In other words, from early May to early July, the Taliban made serious advances against the Afghan army, but the rapid fall of the nation occurred after the U.S. and NATO withdrawal — with some Afghan soldiers fighting valiantly, others giving up without a shot. There are countless analyses of why the much larger, U.S.-trained and -armed force collapsed so quickly. U.S. military commanders point to American errors: “U.S. military officials trained Afghan forces to be too dependent on advanced technology; they did not appreciate the extent of corruption among local leaders; and they didn’t anticipate how badly the Afghan government would be demoralized by the U.S. withdrawal.” In any event, one can look at either May-June (when U.S. forces were departing and Taliban forces were advancing) or July-August (when U.S. forces were gone and the Taliban swallowed the nation in days) as the key decision-making moment(s). Biden had to decide whether to reverse the withdrawal and send troops back in to help the Afghan forces retake lost districts (and thus avoid the chaos of a rush to the airport and U.S. citizens left behind), or hold firm to the decision to end the war (and accept the inevitability of turmoil). Many will argue he should have chosen option one, and that’s an understandable position. Even if it meant fighting for another 20 years, with all the death and maiming that comes with it, and facing the same potential scenario when trying to withdraw in 2041, some would support it. But for those who desired an end to war, it makes little sense to criticize Biden for the airport nightmare, the Taliban takeover, or American citizens being left behind (more on that below). “I supported withdrawal but not the way it was done” is almost incomprehensible. In the context of that moment, all those things were interconnected. In summer 2021, only extending and broadening the war could have prevented those events. It’s the nation-building trap — it threatens to keep you at war forever.

The idea that Biden deserves a pass on the American citizens unable to be evacuated in time may draw special ire. Yes, one may think, maybe ending the war in summer 2021 brought an inevitable Taliban takeover (one can’t force the Afghan army to fight, and maybe we shouldn’t fight a war “Afghan forces are not willing to fight themselves,” as Biden put it) and a rush to flee the nation, but surely the U.S. could have done more to get U.S. citizens (and military allies such as translators) out of Afghanistan long before the withdrawal began. This deserves some questioning as well — and as painful as it is to admit, the situation involved risky personal decisions, gambles that did not pay off. It was no secret that U.S. forces would be leaving Afghanistan in summer 2021. This was announced in late February 2020, when Trump signed a deal with the Taliban that would end hostilities and set a withdrawal date. U.S. citizens (most of them dual citizens) and allies had over a year to leave Afghanistan, and the State Department contacted U.S. citizens 19 times to alert them of the potential risks and offer to get them out, according to the president and the secretary of state. Thousands who chose to stay changed their minds as the Taliban advance continued. One needn’t be an absolutist here. It is possible some Americans fell through the cracks, or that military allies were given short shrift. And certainly, countless Afghan citizens lacked the means or finances to leave the nation. Not everyone who wished to emigrate over that year could do so. Yet given that the withdrawal date was known and U.S. citizens were given the opportunity to get out, some blame must necessarily be placed on those who wanted to stay despite the potential for danger — until, that is, the potential became actual.

Biden deserves harsh criticism, instead, for making stupid promises, for instance that there would be no chaotic withdrawal. The world is too unpredictable for that. He also deserves it for a drone strike that blew up children before the last plane departed, and for apparently lying about his generals’ push to keep 2,500 troops in the country.

That is a good segue to a few final thoughts. The first revolves around the question: “Regardless of the ethics of launching a nation-building war, is keeping 2,500 troops in the country, hypothetically forever, the moral thing to do to prevent a collapse into authoritarianism or theocracy?” Even if one opposed and condemned the invasion as immoral, once that bell has been rung it cannot be unrung, and we’re thus forced to consider the ethics of how to act in a new, ugly situation. Isn’t 2,500 troops a “small price to pay” to preserve a nascent democracy and ensure a bloody war was not for nothing? That is a tempting position, and again one can have sympathy for it even if disagreeing, favoring full retreat. The counterargument is that choosing to leave a small force may preserve the nation-building project, but it also incites terrorism against the U.S. We know that 9/11 was seen by Al-Qaeda as revenge for U.S. wars and military presences in Muslim lands, and the War on Terror has only caused more religious radicalization and deadly terrorist revenge, in an endless cycle of violence that should be obvious to anyone over age three. So here we see another dilemma: leave, risk a Taliban takeover, but (begin to) extricate yourself from the cycle of violence…or stay, protect the democracy, but invite more violence against Americans. This of course strays dangerously close to asking who is more valuable, human beings in Country X or Country Y, that old, disgusting patriotism or nationalism. But this writer detests war and nation-building and imperialism and the casualties at our own hands (our War on Terror is directly responsible for the deaths of nearly 1 million people), and supports breaking the cycle immediately. That entails total withdrawal and living with the risk of the nation-building endeavor falling apart.

None of this is to say that nation-building cannot be successful in theory or always fails in practice. The 2003 invasion of Iraq, which like that of Afghanistan I condemn bitterly, ended a dictatorship; eighteen years later, a democracy nearly broken by corruption, security problems, and the lack of enforcement of personal rights stands in its place, a flawed but modest step in the right direction. However, we cannot deny that attempting to invade and occupy a nation into a democracy carries a high risk of failure. For all the blood spilled — ours and our victims’ — the effort can easily end in disaster. (Beyond a flawed democracy and a massive Iraqi civilian body count, our invasion plunged the nation into civil war and birthed ISIS.) War and new institutions and laws hardly address the root causes of national problems that can tear a new country apart, such as religious extremism, longstanding ethnic conflict, and so on. They may in fact make such things worse. This should make us question the wisdom of nation-building. As discussed, you can “stay until the nation is ready,” which may mean generations. Then, when you leave, the new nation may still collapse, as Afghanistan did, not being as ready as you thought. Thus a senseless waste of lives and treasure. Further, why do we never take things to their logical conclusion? Why tackle one or two brutal regimes and not all the others? If we honestly wanted to use war to try to bring liberty and democracy to others, the U.S. would have to bomb and occupy nearly half the world. Actually “spreading freedom around the globe” and “staying till the job’s done” would mean wars of decades or centuries, occupations of almost entire continents, countless millions dead. Why do ordinary Americans support a small-scale project but are horrified at the thought of a large-scale one? That is a little hint that what you are doing needs to be rethought.

Biden — surprisingly, admirably steadfast in his decision despite potential personal political consequences — uttered shocking words to the United States populace: “This decision about Afghanistan is not just about Afghanistan. It’s about ending an era of major military operations to remake other countries.” Let’s hope that is true.


Hegemony and History

The Italian Marxist Antonio Gramsci, writing in the early 1930s while imprisoned by the Mussolini government, theorized that ruling classes grew entrenched through a process called cultural hegemony, the successful propagation of values and norms, which when accepted by the lower classes produced passivity and thus the continuation of domination and exploitation from above. An ideology became hegemonic when it found support from historical blocs, alliances of social groups (classes, religions, families, and so on) — meaning broad, diverse acceptance of ideas that served the interests of the bourgeoisie in a capitalist society and freed the ruling class from some of the burden of using outright force. This paper argues that Gramsci’s theory is useful for historians because its conception of “divided consciousness” offers a framework for understanding why individuals failed to act in ways that aligned with their own material interests or acted for the benefit of oppressive forces. Note that this explanatory power characterizes cultural hegemony as a whole, but it is divided consciousness that permits hegemony to function. Rather than a terminus a quo, however, divided consciousness can be seen as created, at least partially, by hegemony and as responsible for ultimate hegemonic success — a mutually reinforcing system. The individual mind and what occurs within it is the necessary starting point for understanding how domineering culture spreads and why members of social groups act in ways that puzzle later historians.

Divided (or contradictory) consciousness, according to Gramsci, was a phenomenon in which individuals believed both hegemonic ideology and contrary ideas based on their own lived experiences. Cultural hegemony pushed such ideas out of the bounds of rational discussion concerning what a decent society should look like. Historian T.J. Jackson Lears, summarizing sociologist Michael Mann, wrote that hegemony ensured “values rooted in the workers’ everyday experience lacked legitimacy… [W]orking class people tend to embrace dominant values as abstract propositions but often grow skeptical as the values are applied to their everyday lives. They endorse the idea that everyone has an equal chance of success in America but deny it when asked to compare themselves with the lawyer or businessman down the street.”[1] In other words, what individuals knew to be true from simply functioning in society was not readily applied to the nature of the overall society; some barrier, created at least in part by the process of hegemony, existed. Lears further noted the evidence from sociologists Richard Sennett and Jonathan Cobb, whose subaltern interviewees “could not escape the effect of dominant values” despite also holding contradictory ones, as “they deemed their class inferiority a sign of personal failure, even as many realized they had been constrained by class origins that they could not control.”[2] A garbage collector knew it was not his fault that he had not been taught to read properly, yet blamed himself for his position in society.[3] The result of this contradiction, Gramsci observed, was often passivity, consent to oppressive systems.[4] If one could not translate personal truths into conclusions about the operation of social systems, political action was less likely.

To understand how divided consciousness, for Gramsci, was achieved, it is necessary to consider the breadth of the instruments that propagated dominant culture. Historian Robert Gray, studying how the bourgeoisie achieved hegemony in Victorian Britain, wrote that hegemonic culture could spread not only through the state — hegemonic groups were not necessarily governing groups, though there was often overlap[5] — but through any human institutions and interactions: “the political and ideological are present in all social relations.”[6] Everything in Karl Marx’s “superstructure” could imbue individuals and historical blocs with domineering ideas: art, media, politics, religion, education, and so on. Gray wrote that British workers in the era of industrialization of course had to be pushed into “habituation” of the new and brutal wage-labor system by the workplace itself, but also through “poor law reform, the beginnings of elementary education, religious evangelism, propaganda against dangerous ‘economic heresies,’ the fostering of more acceptable expressions of working-class self help (friendly societies, co-ops, etc.), and of safe forms of ‘rational recreation.’”[7] The bourgeoisie, then, used many social avenues to manufacture consent, including legal reform that could placate workers. Some activities were deemed acceptable under the new system (joining friendly societies or trade unions) to keep more radical activities out of bounds.[8] It was also valuable to create an abstract enemy, a “social danger” for the masses to fear.[9] The message was that without an embrace of the dominant values and norms of industrial capitalism, there would be economic disaster, scarcity, loosening morals, the ruination of the family, and more.[10] The individual consciousness was therefore under assault by the dominant culture from all directions, facing heavy competition for values derived from lived experience, despite the latter’s tangibility. In macro, Gramsci’s theory of cultural hegemony, to quote historian David Arnold, “held that popular ideas had as much historical weight or energy as purely material forces” or even “greater prominence.”[11] In micro, it can be inferred, things work the same way in the individual mind, with popular ideas as powerful as personal experience, and thus the presence of divided consciousness.

The concept of contradictory consciousness helps historians answer compelling questions and solve problems. Arnold notes Gramsci’s questions: “What historically had kept the peasants [of Italy] in subordination to the dominant classes? Why had they failed to overthrow their rulers and to establish a hegemony of their own?”[12] Contextually, why wasn’t the peasantry more like the industrial proletariat — the more rebellious, presumed leader of the revolution against capitalism?[13] The passivity wrought from divided consciousness provided an answer. While there were “glimmers” of class consciousness — that is, the application of lived experience to what social systems should be, and the growth of class-centered ideas aimed at ending exploitation — the Italian peasants “largely participated in their own subordination by subscribing to hegemonic values, by accepting, admiring, and even seeking to emulate many of the attributes of the superordinate classes.”[14] Their desires, having “little internal consistency or cohesion,” even allowed the ruling class to make soldiers of peasants,[15] meaning active participation in maintaining oppressive power structures. Likewise, Lears commented on the work of political theorist Lawrence Goodwyn and the question of why the Populist movement in the late nineteenth century United States largely failed. While not claiming hegemony as the only cause, Lears argued that the democratic movement was most successful in parts of the nation with democratic traditions, where such norms were already within the bounds of acceptable discussion.[16] Where they were not, where elites had more decision-making control, the “received culture” was more popular, with domination seeming more natural and inevitable.[17] Similarly, Arnold’s historiographical review of the Indian peasantry found that greater autonomy (self-organization to pursue vital interests) of subaltern groups meant hegemony was much harder to establish, with “Gandhi [coming] closest to securing the ‘consent’ of the peasantry for middle-class ideological and political leadership,” but the bourgeoisie failing to do the same.[18] Traditions and cultural realities could limit hegemonic possibilities; it’s just as important to historians to understand why something does not work out as it is to comprehend why something does. As a final example, historian Eugene Genovese found that American slaves demonstrated both resistance to and appropriation of the culture of masters, both in the interest of survival, with appropriation inadvertently reinforcing hegemony and the dominant views and norms.[19] This can help answer questions regarding why slave rebellions took place in some contexts but not others, or even why more did not occur — though, again, acceptance of Gramscian theory does not require ruling out all causal explanations beyond cultural hegemony and divided consciousness. After all, Gramsci himself favored nuance, with coexisting consent and coercion, consciousness of class or lived experience mixing with beliefs of oppressors coming from above, and so on.

The challenge of hegemonic theory and contradictory consciousness relates to parsing out the aforementioned causes. Gray almost summed it up when he wrote, “[N]or should behavior that apparently corresponds to dominant ideology be read at face value as a direct product of ruling class influence.”[20] Here he was arguing that dominant culture was often imparted in indirect ways, not through the intentionality of the ruling class or programs of social control.[21] But one could argue: “Behavior that apparently corresponds to dominant ideology cannot be read at face value as a product of divided consciousness and hegemony.” It is a problem of interpretation, and it can be difficult for historians to parse out divided consciousness or cultural hegemony from other historical causes and show which has more explanatory value. When commenting on the failure of the Populist movement, Lears mentioned “stolen elections, race-baiting demagogues,” and other events and actors with causal value.[22] How much weight should be given to dominant ideology and how much to stolen elections? This interpretive nature can appear to weaken the usefulness of Gramsci’s model. Historians have developed potential solutions. For instance, as Lears wrote, “[O]ne way to falsify the hypothesis of hegemony is to demonstrate the existence of genuinely pluralistic debate; one way to substantiate it is to discover what was left out of public debate and to account historically for those silences.”[23] If there was public discussion of a wide range of ideas, many running counter to the interests of dominant groups, the case for hegemony is weaker; if public discussion centered around a narrow slate of ideas that served obvious interests, the case is stronger. A stolen election may be assigned less causal value, and cultural hegemony more, if public debate was restricted. However, the best evidence for hegemony may remain the psychoanalysis of individuals, as seen above, demonstrating some level of divided consciousness. Even in terms of demonstrability, contradictory consciousness is key to Gramsci’s overall theory. A stolen election may earn less causal value if such insightful individual interviews can be submitted as evidence.

In sum, for Gramscian thinkers divided consciousness is a demonstrable phenomenon that powers (and is powered by) hegemony and the acceptance of ruling class norms and beliefs. While likely not the only cause of passivity to subjugation, it offers historians an explanation as to why individuals do not act in their own best interests that can be explored, given causal weight, falsified, or verified (to degrees) in various contexts. Indeed, Gramsci’s theory is powerful in that it has much utility for historians whether true or misguided.



[1] T.J. Jackson Lears, “The Concept of Cultural Hegemony: Problems and Possibilities,” The American Historical Review 90, no. 3 (June 1985): 577.

[2] Ibid., 577-578.

[3] Ibid., 578.

[4] Ibid., 569.

[5] Robert Gray, “Bourgeois Hegemony in Victorian Britain,” in Tony Bennett, ed., Culture, Ideology and Social Process: A Reader (London: Batsford Academic and Educational, 1981), 240.

[6] Ibid., 244.

[7] Ibid.

[8] Ibid., 246.

[9] Ibid., 245.

[10] Ibid.

[11] David Arnold, “Gramsci and Peasant Subalternity in India,” The Journal of Peasant Studies 11, no. 4 (1984): 158.

[12] Ibid., 157.

[13] Ibid., 157.

[14] Ibid., 159.

[15] Ibid.

[16] Lears, “Hegemony,” 576-577.

[17] Ibid.

[18] Arnold, “India,” 172.

[19] Lears, “Hegemony,” 574.

[20] Gray, “Britain,” 246.

[21] Ibid., 245-246.

[22] Lears, “Hegemony,” 576.

[23] Lears, “Hegemony,” 586.

20% of Americans Are Former Christians

It’s relatively well-known that religion in this country is declining, with 26% of Americans now describing themselves as nonreligious (9% adopting the atheist or agnostic label, 17% saying they are “nothing in particular”). Less discussed is where these growing numbers come from and just how much “faith switching” happens here.

For example, about 20% of citizens are former Christians, one in every five people you pass on the street. Where these individuals go isn’t a foregone conclusion — at times it’s to Islam (77% of new converts used to be Christians), Hinduism, or other faiths (“Members of non-Christian religions also have grown modestly as a share of the adult population,” the Pew Research Center reports). But mostly it’s to the “none” category, which has thus risen dramatically and is the fastest-growing affiliation. In a majority-Christian country that is rapidly secularizing, all this makes sense. (For context, 34% of Americans — one in three people — have abandoned the belief system in which they were raised, this group including atheists, Christians, Buddhists, Muslims, everyone. 4% of Americans used to be nonreligious but are now people of faith.)

While Islam is able to gain new converts at about the same rate it loses members, thus keeping its numbers steady (similar to Hinduism and Judaism), Christianity loses far more adherents than it brings in, and is therefore seeing a significant decline (77% to 65% of Americans in just 10 years):

19.2% of all adults…no longer identify with Christianity. Far fewer Americans (4.2% of all adults) have converted to Christianity after having been raised in another faith or with no religious affiliation. Overall, there are more than four former Christians for every convert to Christianity.

This statistic holds true for all religions, as well: “For every person who has left the unaffiliated and now identifies with a religious group more than four people have joined the ranks of the religious ‘nones.'”

This is so even though kids raised to be unaffiliated are somewhat less likely to remain unaffiliated than children raised in several major faiths are to keep theirs! 53% of Americans raised nonreligious remain so. This is better than the 45% of mainline Protestants who stick with their beliefs, but worse than the 59% of Catholics or 65% of evangelical Protestants. (Hinduism, Islam, and Judaism again beat everyone — one shouldn’t argue that high retention rates, or big numbers, prove beliefs true, nor low ones false.) Yet it is simply the case that there are currently many more religious people to change their minds than there are skeptics to change theirs:

The low retention rate of the religiously unaffiliated may seem paradoxical, since they ultimately obtain bigger gains through religious switching than any other tradition. Despite the fact that nearly half of those raised unaffiliated wind up identifying with a religion as adults, “nones” are able to grow through religious switching because people switching into the unaffiliated category far outnumber those leaving the category.

Overall, this knowledge is valuable because the growing numbers of atheists, agnostics, and the unaffiliated are occasionally seen as coming out of nowhere, rather than out of Christianity itself. (And out of other faiths, to far lesser degrees: Muslims are 1% of the population, Jews 2%.) As if a few dangerous, free-thinking families were suddenly having drastically more children, or a massive influx of atheistic immigrants was pouring into the U.S., skewing the percentages. Rather, the 26% of Americans who are nonreligious is drawn largely from the 20% of Americans who have abandoned Christianity. The call’s coming from inside the church.


How Should History Be Taught?

Debate currently rages over how to teach history in American public schools. Should the abyss of racism receive full attention? Should we teach our children that the United States is benevolent in its wars and use of military power — did we not bring down Nazi Germany? Is the nation fundamentally good based on its history, worthy of flying the flag, or is it responsible for so many horrors that an ethical person would keep the flag in the closet or burn it in the streets? Left and Right and everyone in between have different, contradictory perspectives, but to ban and censor is not ideal. Examining the full spectrum of views will help students understand the world they inhabit and the field of history itself.

While historians once imagined objectivity was possible, they now typically understand the true nature of their work. “Through the end of the twentieth century,” Sarah Maza writes in Thinking About History, “the ideal of historical objectivity was undermined from within the historical community… The more different perspectives on history accumulated, the harder it became to believe that any historian, however honest and well-intentioned, could tell the story of the past from a position of Olympian detachment, untainted by class, gender, racial, national, and other biases.” Selecting and rejecting sources involves interpretation and subconsciously biased decisions. Historians looking at the same sources will have different interpretations of meaning, which leads to fierce debates in scholarly journals. Teachers are not value-neutral either. All this is taken for granted. “It is impossible to imagine,” Maza writes, “going back to a time when historians imagined that their task involved bowing down before ‘the sovereignty of sources.'” They understand it’s more complex than that: “The history of the American Great Plains in the nineteenth century has been told as a tale of progress, tragedy, or triumph over adversity,” depending on the sources one is looking at and how meaning is derived from them.

But this is a positive thing. It gives us a fuller picture of the past, understanding the experiences of all actors. “History is always someone’s story, layered over and likely at odds with someone else’s: to recognize this does not make our chronicles of the past less reliable, but more varied, deeper, and more truthful.” It also makes us think critically — what interpretation makes the most sense to us, given the evidence offered? Why is the evidence reliable?

If historians understand this, why shouldn’t students? Young people should be taught that while historical truth exists, any presentation of historical truth — a history book, say — was affected by human action and sentiment. This is a reality that those on the Left and Right should be able to acknowledge. Given this fact, and that both sides are after the same goal, to teach students the truth, the only sensible path forward is to offer students multiple interpretations. Read A Patriot’s History of the United States (Schweikart, Allen) and A People’s History of the United States (Zinn). There are equivalent versions of these types of texts for elementary and middle schoolers. Read about why World War II was “The Good War” in your typical textbook, alongside Horrible Histories: Woeful Second World War. Have students read history by conservatives in awe of the greatest country in the whole wide world, as well as by liberals fiercely critical of the nation and many of its people for keeping liberty and democracy exclusively for some far longer than many other countries did. They can study top-down history (great rulers, generals, and leaders drive change) and bottom-up social history (ordinary people coming together drives change). Or compare primary sources from the late nineteenth century to the early twentieth demanding or opposing women’s rights. Read the perspectives of both Native Americans and American settlers on the plains. Why not? This gives students a broader view of the past, shows them why arguments and debates over history exist, and helps them understand modern political ideologies.

Most importantly, as noted, it helps students think critically. Many a teacher has said, “I don’t want to teach students what to think, but rather how to think.” Apart from exploring the logical fallacies, which is also important, this doesn’t seem possible without exploring varying perspectives and asking which one a young person finds most convincing and why. One can’t truly practice the art of thinking without one’s views being challenged, being forced to justify the maintenance of a perspective or a deviation based on newly acquired knowledge. Further, older students can go beyond different analyses of history and play around with source theories: what standard should there be to determine if a primary source is trustworthy? Can you take your standard, apply it to the sources of these two views, and determine which is more solid by your metric? There is much critical thinking to be done, and it makes for a more interesting time for young people.

Not only does teaching history in this way reflect the professional discipline and greatly expand student knowledge and thought, it also aligns with the nature of public schools, or with what the general philosophy of public schools should be. The bent of a history classroom, or the history segment of the day in the youngest grades, is determined by the teacher, but also by the books, curricula, and standards approved or required by the district, the regulations of the state, and so forth. So liberal teachers, districts, and states go their way and conservative teachers, districts, and states go theirs. But who is the public school classroom for, exactly? It’s for everyone — which necessitates some kind of openness to a broad range of perspectives (public universities are the same way, as I’ve written elsewhere).

This may be upsetting and sensible at the same time. On the one hand, “I don’t want my kid, or other kids, hearing false, dangerous ideas from the other side.” On the other, “It would be great for my kid, and other kids, to be exposed to this perspective when it so often is excluded from the classroom.” Everyone is happy, no one is happy. Likely more the latter. First, how can anyone favor bringing materials full of falsities into a history class? Again, anyone who favors critical thinking. Make that part of the study — look at the 1619 Project and the 1776 Report together, and explore why either side finds the other in error. Second, how far do you go? What extreme views will be dignified with attention? Is one to bring in Holocaust deniers and square their arguments up against the evidence for the genocide? Personally, this writer would support that: what an incredible exercise in evaluating and comparing the quantity and quality of evidence (and “evidence”). Perhaps others will disagree. But none of this means there can’t be reasonable limits to presented views. If an interpretation or idea is too fringe, it may be a waste of time to explore it. There is finite time in a class period and in a school year. The teacher, district, and so on will have to make the (subjective) choice (no one said this was a perfect system) to leave some things out and focus on bigger divides. If Holocaust denial is still relatively rare, controversy over whether the Civil War occurred due to slavery is not.

Who, exactly, is afraid of pitting their lens of history against that of another? Probably he who is afraid his sacred interpretation will be severely undermined, she who knows her position is not strong. If you’re confident your interpretation is truthful, backed by solid evidence, you welcome all challengers. Even if another viewpoint makes students think in new ways, even pulling them away from your lens, you know your lens still imparted important knowledge and made an impression. I am the author of a book on racism used in high schools and colleges; what do I have to fear if some conservative writes a book about how things really weren’t so bad for black Kansas Citians over the past two centuries? By all means, read both books, think for yourself, decide which thesis makes the most sense to you based on the sources — or create a synthesis of your own. The imaginary conservative author should likewise have no qualms about such an arrangement.

I have thus far remained fairly even-handed, because Leftists and right-wingers can become equally outraged over very different things. But here I will wonder whether the Right would have more anxiety over a multiple-interpretation study specifically. Once a student has learned of the darkness of American history, it is often more difficult to be a full-throated, flag-worshiping patriot. This risk will drive some conservatives berserk. Is the Leftist parent equally concerned that a positive, patriotic perspective on our past alongside a Zinnian version will turn her child into someone less critical, more favorable to the State, even downplaying the darkness? I’m not sure if the Leftist is as worried about that. My intuition, having personally been on both sides of the aisle, is that the risk would be more disturbing for conservatives — the horrors still horrify despite unrelated positive happenings, but the view of the U.S. as the unequivocal good guy is quickly eroded forever. Hopefully I am wrong and that is the mere bias of a current mindset talking. Either way, this pedagogy, the great compromise, is the right thing to do, for the reasons outlined above.

In conclusion, we must teach students the truth — and Americans will never fully agree on what that is, but the closest we might come is agreement that this nation and its people have done horrific things as well as positive things. Teaching both is honest and important, and that’s what students will see when they examine different authors and documents. In my recent review of a history text, I wrote that the Left “shouldn’t shy away from acknowledging, for instance, that the U.S. Constitution was a strong step forward for representative democracy, secular government, and personal rights, despite the obvious exclusivity, compared to Europe’s systems.” Nor should one deny the genuine American interest in rescuing Europe and Asia from totalitarianism during World War II. And then there are the inventions, art, scientific discoveries, music, and many other things. The truth rests in nuance, as one might expect. James Baldwin said that American history is “more beautiful and more terrible than anything anyone has ever said about it.” (What nation does not have both horrors and wonderful things in its history? Where would philosophy be without the German greats?) I’ve at times envisioned writing a history of the U.S. through a “hypocrisy” interpretation, but it works the same under a “mixed bag” framing: religious dissenters coming to the New World for more freedom and immediately crushing religious dissenters, the men who spoke of liberty and equality who owned slaves, fighting the Nazi master race with a segregated army, supporting democracy in some cases but destroying it in others, and so on. All countries have done good and bad things.

That is a concept the youngest children — and the oldest adults — can understand.


Big Government Programs Actually Prevent Totalitarianism

There is often much screaming among conservatives that big government programs — new ones like universal healthcare, universal college education, or guaranteed work, and long-established ones like Social Security, Medicaid, and Medicare — somehow lead to dictatorship. There is, naturally, no actual evidence for this. The imagined correlation is justified with nothing beyond “that’s socialism, which always becomes totalitarianism,” an ignorance already addressed. The experience of advanced democracies around the world, and indeed the U.S. itself, suggests big government programs, run by big departments with big budgets and big staffs helping tens of millions of citizens, can happily coexist alongside elected governing bodies and presidents, constitutions, and human rights, as one would expect.

Threats to democracy come from elsewhere — but what’s interesting to consider is how conservatives have things completely backward. Big government programs — the demonstration that one’s democracy is a government “for the people,” existing to meet citizen needs and desires — are key to beating back the real threats to a republic.

In a recent interview with The Nation, Bernie Sanders touched on this:

“Why it is imperative that we address these issues today is not only because of the issues themselves—because families should not have to spend a huge proportion of their income on child care or sending their kid to college—but because we have got to address the reality that a very significant and growing number of Americans no longer have faith that their government is concerned about their needs,” says the senator. “This takes us to the whole threat of Trumpism and the attacks on democracy. If you are a worker who is working for lower wages today than you did 20 years ago, if you can’t afford to send your kid to college, etc., and if you see the very, very richest people in this country becoming phenomenally rich, you are asking yourself, ‘Who controls the government, and does the government care about my suffering and the problems of my family?’”

Sanders argues that restoring faith in government as a force for good is the most effective way to counter threats to democracy.

And he’s right. Empirical evidence suggests economic crises erode the rule of law and faith in representative democracy. Depressions are not the only force that pushes in this direction, but they are significant and at times a killing blow to democratic systems. Unemployment, low wages, a rising cost of living — hardship and poverty, in other words — drive citizens toward extreme parties and voices, including authoritarians. Such leaders are then elected to office and begin to dismantle democracy with the support of much of the population. Europe in the 1930s is the oft-cited example, but the same has been seen after the global recession beginning in 2008, with disturbing outgrowths of recent declining trust in democracy: the success of politicians with demagogic and anti-democratic bents like Trump, hysteria over fictional stolen elections that threatens to keep unelected people in office, and dangerous far-right parties making gains in Europe. The Eurozone debt and austerity crises, the COVID-induced economic turmoil, and more have produced similar concerns.

What about the reverse? If economic disaster harms devotion to real democracy and to politicians who believe in it, does the welfare state increase support for and faith in democracy? Studies also suggest this is so. Government tackling poverty through social programs increases satisfaction with democratic systems! The perception that inequality is rising and welfare isn’t doing enough to address it does the exact opposite. A helping hand increases happiness and is expected from democracies, inherently linking favorable views of republics and redistribution. If we wish to inoculate the citizenry against authoritarian candidates and anti-democratic practices within established government, shoring up loyalty to democracy through big government programs is crucial.

It is as Sanders said: the most important thing the government can do to strengthen our democracy and even heal polarization (“Maybe the Democrats putting $300 per child per month in my bank account aren’t so evil”) is simply to help people. To work for and serve all. Healthcare, education, income support, jobs…such services help those on the Right, Left, and everyone in between. This should be done whether there is economic bust or boom. People hold fast to democracy, a government of and by the people, when it is clearly a government for the people. If we lose the latter, so too the former.
