Comparative Power

The practice of reconstructing the past, with all its difficulties and incompleteness, is aided by comparative study. Historians, anthropologists, sociologists, and other researchers can learn a great deal about their favored society and culture by looking at others. This paper makes that basic point but, more significantly, distinguishes between the effectiveness of drawing meaning from documented cultural similarity and difference and that of drawing meaning from one’s own constructed cultural analogy, while acknowledging that both are valuable methods. In other words, it is argued here that the historian who documents similarities and differences between societies stands on firmer methodological ground for drawing conclusions about human cultures than does the historian who must fill gaps in a given historical record by studying other societies in close geographic and temporal proximity. Also at a disadvantage is the historian working comparatively with gaps in early documentation that are filled in later documentation. This paper is thus a comparison of comparative methods, an important exercise because such methods are often wielded due to a dearth of evidence in the archives. The historian should understand the strengths and limitations of the various approaches to this problem (here reciprocal comparison, historical analogy, and historiographic comparison).

To begin, consider reciprocal comparison and the meaning derived from such an effort, specifically from likenesses or distinctions. Historian Robert Darnton found meaning in differences in The Great Cat Massacre: And Other Episodes in French Cultural History. What knowledge, Darnton wondered in his opening chapter, could we gain of eighteenth-century French culture by looking at peasant folk tales and contrasting them with versions found elsewhere in Europe? Whereas similarities might point to shared cultural traits or norms, differences would isolate the particular mentalités of French peasants, how they viewed the world and what occupied their thoughts, in the historical tradition of the Annales School.[1] So while the English version of Tom Thumb was rather “genial,” with helpful fairies, attention to costume, and a titular character engaging in pranks, in the French version the Tom Thumb character, Poucet, was forced to survive in a “harsh, peasant world” against “bandits, wolves, and the village priest by using his wits.”[2] In a tale of a doctor cheating Death, the German version saw Death immediately kill the doctor; with a French twist, the doctor got away with his treachery for some time, becoming prosperous and living to old age — cheating paid off.[3] Indeed, French tales focused heavily on survival in a bleak and brutal world, and on this world’s particularities.
Characters with magical wishes asked for food and full bellies; they got rid of children who did not work, put up with cruel stepmothers, and encountered many beggars on the road.[4] Most folk tales mix fictional elements like ogres and magic with socio-economic realities from the place and time in which they are told, and the above themes therefore reflect the ordinary lives of French peasants: hunger, poverty, the early deaths of biological mothers, begging, and so on.[5] In comparing French versions with those of the Italians, English, and Germans, Darnton noticed unique fixations in French peasant tales and then contrasted these obsessions with the findings of social historians on the material conditions of peasant life, bringing the two together to build a compelling case for what members of the eighteenth-century French lower class thought about day to day and their attitudes toward society.

Now compare Darnton’s work to that of ethnohistorian Helen Rountree in “Powhatan Indian Women: The People Captain John Smith Barely Saw.” Rountree uses ethnographic analogy, among other tools, to reconstruct the daily lives of Powhatan women in the first years of the seventeenth century. Given that interested English colonizers had limited access to Powhatan women and observed native societies through a “cloudy lens” of patriarchal Eurocentrism, and given that the Powhatans left few records themselves, Rountree uses the evidence of daily life in nearby Eastern Woodland tribes to describe the likely experiences of Powhatan women.[6] For example: “Powhatan women, like other Woodland Indian women, probably nurse their babies for well over a year after birth, so it would make sense to keep baby and food source together” by bringing infants into the fields as the women worked.[7] Elsewhere “probably” is dropped for more confident claims: “Powhatan men and women, like those in other Eastern Woodland tribes, would have valued each other as economic partners…”[8] A lack of direct archival knowledge of Powhatan society and sentiments is shored up through archival knowledge of other native peoples living in roughly the same time and region. The meaning Rountree derives from ethnographic analogy, alongside other techniques and evidence, is that the English were wrong, looking through their cloudy lens, to believe Powhatan women suffered drudgery and domination under Powhatan men. Rather, women experienced a great deal of autonomy, as well as fellowship and variety, in their work, and were considered co-equal partners with men in the economic functioning of the village.[9]

Both Darnton and Rountree admit their methods face evidentiary challenges. Darnton writes that his examination of folktales is “distressingly imprecise in its deployment of evidence,” that the evidence is “vague,” because the tales were written down much later — exactly how they were orally transmitted at the relevant time cannot be known.[10] In other words, what if the aspect of a story one marks as characteristic of the French peasant mentalité was not actually present in the verbal telling of the tale? It is a threat to the legitimacy of the project. Rountree is careful to use “probably” and “likely” with most of her analogies; the “technique is a valid basis for making inferences if used carefully” (emphasis added), and one must watch out for imperfections in the records of other tribes.[11] For what if the historical understanding of another Eastern Woodland tribe is incorrect, and the falsity is copied over to the narrative of the Powhatan people? Rountree and Darnton acknowledge the limitations of their methods even while firmly believing they are valuable for reconstructing the past. This paper does not dispute that — however, it would be odd if all comparative methods were created equal.

Despite its challenges, reciprocal comparison rests on safer methodological ground, for it at least boasts two actually existing elements to contrast. For instance, Darnton has in his possession folktales from France and from Germany, dug up in the archives, and with them he can notice differences and thus derive meaning about how French peasants viewed the world. Such meaning may be incorrect, but it is less likely to be so with support from research on the material conditions of those who might have been telling the tales, as mentioned. Rountree, on the other hand, wields a tool that works with but one existing element. Historical, cultural, or ethnographic analogy takes what is known about other peoples and applies it to a specific group suffering from a gap in the historical record. This gap, a lack of direct evidence, is filled with an assumption — which may simply be wrong, without supporting research of the kind Darnton enjoys to help out (to have such research would make analogy unnecessary). Obviously, an incorrect assumption threatens to derail derived meaning. If the work of Powhatan women differed in some significant way from that of other Eastern Woodland tribes, unseen and undiscovered and even silenced by analogy, the case for Powhatan economic equality could weaken. Again, this is not to deny the method’s value, only to note the danger it carries compared to reciprocal comparison. Paradoxically, the inference that Powhatan society resembled nearby tribes seems as probable and reasonable as it is bold and risky.

Similarly, Michel-Rolph Trouillot, in Silencing the Past: Power and the Production of History, found meaning in absence when examining whether Henri Christophe, monarch of Haiti after its successful revolution against the French (1791-1804), was influenced by Frederick the Great of Prussia when Christophe named his new palace at Milot “Sans Souci.” Was the palace named after Frederick’s own in Potsdam, or after Colonel Sans Souci, a revolutionary rival Christophe killed? Trouillot studied the historical record and found that opportunities for early observers to mention a Potsdam-Milot connection were suspiciously passed over.[12] For example, Austro-German geographer Karl Ritter, a contemporary of Christophe, repeatedly described Christophe’s palace as “European” but failed to mention that it was inspired by Frederick’s.[13] British consul Charles Mackenzie, “who visited and described Sans Souci less than ten years after Christophe’s death, does not connect the two palaces.”[14] If the connection were true, why was a fact that was such a given for later writers not mentioned early on?[15] These archival gaps of course co-exist with Trouillot’s positive evidence (“Christophe built Sans Souci, the palace, a few yards away from — if not exactly — where he killed Sans Souci, the man”[16]), but they are used to build a case that Christophe had Colonel Sans Souci in mind when naming his palace, a detail that evidences an overall erasure of the colonel from history.[17] By contrasting the early historical record with the later one, Trouillot finds truth and silencing.

This historiographic comparison differs from Rountree’s historical analogy. Rountree fills epistemological gaps about Powhatan women with the traits of nearby, similar cultures; Trouillot weighs the gaps in early reports about Haiti’s Sans Souci palace to suggest later writers were in error and participating in historical silencing (like Darnton, he is working with two existing elements and weighing the differences). Like Rountree’s, Trouillot’s method is useful and important: the historian should always seek the earliest writings from relevant sources to develop an argument, and if surprising absences exist, there is cause to suspect that later works introduced falsities. However, this method too flirts with assumption. It assumes the unwritten is also the unthought, which is not always the case. It may be odd or unlikely that Mackenzie or Ritter would leave Potsdam unmentioned if they believed in its influence, but not impossible or unthinkable. It further assumes a representative sample — Trouillot is working with very few early documents. Would the discovery of more affect his thesis? As we see with Trouillot and Rountree, and as one might expect, a dearth in the archives forces assumptions.

While Trouillot’s conclusion is probable, he is nevertheless at greater risk of refutation than Darnton or, say, historian Kenneth Pomeranz, who also engaged in reciprocal comparison when he set China beside Europe in the centuries before 1800. Unlike the opening chapter of The Great Cat Massacre, The Great Divergence finds meaning in similarities as well as differences. Pomeranz seeks to understand why Europe, rather than China, experienced an Industrial Revolution, and must sort through many posited causal factors. For instance, did legal and institutional structures more favorable to capitalist development give Europe an edge, contributing to greater productivity and efficiency?[18] Finding similar regulatory mechanisms like interest rates and property rights, and a larger “world of surprising resemblances” before 1750, Pomeranz argued for other differences: Europe’s access to New World resources and trade, as well as to coal.[19] This indicates that Europe’s industrialization occurred not due to the superior intentions, wisdom, or industriousness of Europeans but rather due to unforeseen, fortunate happenings, or “conjunctures,” that “often worked to Western Europe’s advantage, but not necessarily because Europeans created or imposed them.”[20] Reciprocal comparison can thus break down Eurocentric perspectives by examining a broader range of historical evidence. No assumptions need be made (rather, assumptions, such as those about superior industriousness, can be excised). As obvious as it is to write, a wealth of archival evidence, rather than a lack, makes for safer methodological footing, as does working with two existing evidentiary elements, no risky suppositions necessary.

A future paper might muse further on the relationship between analogy and silencing, alluded to earlier — if Trouillot is correct that fact-based narratives can be built on silences, how much more problematic is the narrative based partly on analogy?[21] As for this work, in sum: the historian must use some caution with historical analogy, historiographic comparison, and other tools that have an empty space on one side of the equation. These methods are hugely important and often present theses of high probability. But they are by nature put at risk by archival gaps; reciprocal comparison, by its own archival nature, carries more power in its derived meanings and claims about past cultures.

For more from the author, subscribe and follow or read his books.


[1] Anna Green and Kathleen Troup, eds., The Houses of History: A Critical Reader in Twentieth-Century History and Theory, 2nd ed. (Manchester: Manchester University Press, 2016), 111.

[2] Robert Darnton, The Great Cat Massacre: And Other Episodes in French Cultural History (New York: Basic Books, 1984), 42.

[3] Ibid., 47-48.

[4] Ibid., 29-38.

[5] Ibid., 23-29.

[6] Helen C. Rountree, “Powhatan Indian Women: The People Captain John Smith Barely Saw,” Ethnohistory 45, no. 1 (Winter 1998): 1-2.

[7] Ibid., 4.

[8] Ibid., 21.

[9] Ibid., 22.

[10] Darnton, Cat Massacre, 261.

[11] Rountree, “Powhatan,” 2.

[12] Michel-Rolph Trouillot, Silencing the Past: Power and the Production of History (Boston: Beacon Press, 1995), 61-65.

[13] Ibid., 63-64.

[14] Ibid., 62.

[15] Ibid., 64.

[16] Ibid., 65.

[17] Ibid., chapters 1 and 2.

[18] Kenneth Pomeranz, The Great Divergence: China, Europe, and the Making of the Modern World Economy (Princeton: Princeton University Press, 2000), chapters 3 and 4.

[19] Ibid., 29, 279-283.

[20] Ibid., 4.

[21] Trouillot, Silencing, 26-27.

Will Capitalism Lead to the One-Country World?

In Why America Needs Socialism, I offered a long list of ways the brutalities and absurdities of capitalism necessitate a better system, one of greater democracy, worker ownership, and universal State services. The work also explored the importance of internationalism, moving away from nationalistic ideas (the simpleminded worship of one’s country) and toward an embrace of all peoples — a world with one large nation. Yet these ideas could have been more deeply connected. The need for internationalism was largely framed as a response to war, which, as shown, can be driven by capitalism but of course existed before it and thus exists independently of it. The necessity of a global nation was only briefly linked to global inequality, disastrous climate change, and other problems. The deeper connection is this: one could predict that the brutalities and absurdities of international capitalism, such as the dreadful activities of transnational corporations, will push humanity toward increased global political integration.

As a recent example of a (small) step toward political integration, look at the 2021 agreement of 136 nations to set a minimum corporate tax rate of 15% and tax multinational companies where they operate, not just where they are headquartered. This historic moment was a response to corporations avoiding taxes via havens in low-tax countries, moving headquarters, and other schemes. Or look to the 2015 Paris climate accords that set a collective goal of limiting planetary warming to 1.5-2 degrees Celsius, a response to the environmental damage wrought by human industry since the Industrial Revolution. There is a recognition that a small number of enormous companies threaten the health of all people. Since the mid-twentieth century, many international treaties have focused on the environment and labor rights (for example, outlawing forced labor and child labor, which were always highly beneficial and profitable for capitalists). The alignment of nations’ laws is a remarkable step toward unity. Apart from war and nuclear weapons, apart from the global inequality stemming from geography (such as an unlucky lack of resources) or history (such as imperialism), the effects and nature of modern capitalism alone make internationalism urgent. Capital can move about the globe, with businesses seeking places with weaker environmental regulations, minimum wages, and safety standards, spreading monopolies, avoiding taxes, and poisoning the biosphere, while an interconnected global economy falls like a house of cards during economic crises. The movement of capital and the interconnectivity of the world necessitate further, deeper forms of international cooperation.

Perhaps, whether in one hundred years or a thousand, humanity will realize that the challenges of multi-country accords — goals missed or ignored, legislatures refusing to ratify treaties, and so on — would be mitigated by a unified political body. A single human nation could address tax avoidance, climate change, and so on far more effectively and efficiently.

On the other hand, global capitalism may lead to a one-nation world in a far more direct way. Rather than the interests of capitalists spurring nations to work together to confront said interests, it may be that nations integrate to serve certain interests of global capitalism, to achieve unprecedented economic growth. The increasing integration of Europe and other regions provides some insight. The formation of the European Union’s common market eliminated taxes and customs between member countries and established a free flow of capital, goods, services, and workers, generating around €1 trillion in economic benefit annually. The EU market is the most integrated in the world, alongside the Caribbean Single Market and Economy, both earning sixes out of seven on the scale of economic integration, one step from merging entirely. Other common markets exist as well, sitting at fives on the scale and uniting national economies in Eurasia, Central America, the Arabian Gulf, and South America; many more have been proposed. There is much for capitalists to enjoy after the creation of a single market: trade increases, production costs fall, investment spikes, profits rise. Total economic and political unification may be, again, more effective and efficient still. Moving away from nations and toward worldwide cohesion could be astronomically beneficial to capitalism. Will the push toward a one-nation world come from the need to rein in capital, to serve capital, or both?


When The Beatles Sang About Killing Women

Move over, Johnny Cash and “Cocaine Blues.” Sure, “Early one mornin’ while making the rounds / I took a shot of cocaine and I shot my woman down… Shot her down because she made me slow / I thought I was her daddy but she had five more” are often the first lyrics one thinks of when considering the violent end of the toxic masculinity spectrum in white people music. (Is this not something you ponder? Confront more white folk who somehow only see these things in black music, you’ll get there.) But The Beatles took things to just as dark a place.

Enter “Run For Your Life” from their 1965 album Rubber Soul, a song as catchy as it is chilling: “You better run for your life if you can, little girl / Hide your head in the sand, little girl / Catch you with another man / That’s the end.” Jesus. It’s jarring, the cuddly “All You Need Is Love” boy band singing “Well, I’d rather see you dead, little girl / Than to be with another man” and “Let this be a sermon / I mean everything I’ve said / Baby, I’m determined / And I’d rather see you dead.” But jealous male violence in fact showed up in other Beatles songs as well, and in the real world, with the self-admitted abusive acts and attitudes of John Lennon, later regretted but no less horrific for it.

This awfulness ensured The Beatles would be viewed by many in posterity as contradictory, the proto-feminist themes and ideas of the 1960s taking root in their music alongside possessive, murderous sexism. That is, if these things are noticed at all.


With Afghanistan, Biden Was in the ‘Nation-building Trap.’ And He Did Well.

You’ve done it. You have bombed, invaded, and occupied an oppressive State into a constitutional democracy, human rights and all. Now there is only one thing left to do: attempt to leave — and hope you are not snared in the nation-building trap.

Biden suffered much criticism over the chaotic events in Afghanistan in August 2021, such as the masses of fleeing Afghans crowding the airport in Kabul and clinging to U.S. military planes, the American citizens left behind, and more, all as the country fell to the Taliban. Yet Biden faced a dilemma in the sixteenth-century sense of the term: a choice between two terrible options. That’s the nation-building trap: if your nation-building project collapses after or as you leave, do you go back in and fight a bloody war a second time, or do you remain at home? You can 1) spend more blood, treasure, and years reestablishing the democracy and making sure the first war was not in vain, but risk finding yourself in the exact same situation down the road when you again attempt to leave. Or 2) refuse to sacrifice any more lives (including those of civilians) or resources, refrain from further war, and watch oppression return upon the ruins of your project. This is a horrific choice, and no matter which option you would choose, there should be at least some sympathy for those who might choose the other.

Such a potentiality should make us question war and nation-building, a point to which we will return. But here it is important to recognize that the August chaos was inherent in the nation-building trap. Biden had that dilemma to face, and his decision came with unavoidable tangential consequences. For example, the choice, as the Taliban advanced across Afghanistan, could be reframed as 1) send troops back in, go back to war, and prevent a huge crowd at the airport and a frantic evacuation, or 2) remain committed to withdraw, end the war, but accept that there would be chaos as civilians tried to get out of the country. Again, dismal options.

This may seem too binary, but the timeline of events appears to support it. The withdrawal deadline was August 31; the Taliban offensive began in early May. By early July, the U.S. had left its last military base, marking the withdrawal as “effectively finished” (a detail often forgotten). Military forces remained only in places like the U.S. embassy in Kabul. In other words, from early May to early July the Taliban made serious advances against the Afghan army, but the rapid fall of the nation occurred after the U.S. and NATO withdrawal — with some Afghan soldiers fighting valiantly, others giving up without a shot. There are countless analyses of why the much larger, U.S.-trained and -armed force collapsed so quickly. U.S. military commanders point to errors like these: “U.S. military officials trained Afghan forces to be too dependent on advanced technology; they did not appreciate the extent of corruption among local leaders; and they didn’t anticipate how badly the Afghan government would be demoralized by the U.S. withdrawal.” In any event, one can look at either May-June (when U.S. forces were departing and Taliban forces were advancing) or July-August (when U.S. forces were gone and the Taliban swallowed the nation in days) as the key decision-making moment(s). Biden had to decide whether to reverse the withdrawal and send troops back in to help the Afghan forces retake lost districts (thus avoiding the chaos of a rush to the airport and U.S. citizens left behind), or hold firm to the decision to end the war (and accept the inevitability of turmoil). Many will argue he should have chosen option one, and that’s an understandable position. Even if it meant fighting for another 20 years, with all the death and maiming that entails, and facing the same potential scenario when attempting to withdraw in 2041, some would support it.
But for those who desired an end to war, it makes little sense to criticize Biden for the airport nightmare, the Taliban takeover, or American citizens being left behind (more on that below). “I supported withdrawal but not the way it was done” is almost incomprehensible. In the context of that moment, all those things were interconnected. In summer 2021, only extending and broadening the war could have prevented those events. It’s the nation-building trap — it threatens to keep you at war forever.

The idea that Biden deserves a pass on the American citizens unable to be evacuated in time may draw special ire. Yes, one may think, maybe ending the war in summer 2021 brought an inevitable Taliban takeover (one can’t force the Afghan army to fight, and maybe we shouldn’t fight a war “Afghan forces are not willing to fight themselves,” as Biden put it) and a rush to flee the nation, but surely the U.S. could have done more to get U.S. citizens (and military allies such as translators) out of Afghanistan long before the withdrawal began. This deserves some questioning as well — and as painful as it is to admit, the situation involved risky personal decisions, gambles that did not pay off. Truly, it was no secret that U.S. forces would be leaving Afghanistan in summer 2021. This was announced in late February 2020, when Trump signed a deal with the Taliban that would end hostilities and set a withdrawal date. U.S. citizens (most of them dual citizens) and allies had over a year to leave Afghanistan, and the State Department contacted U.S. citizens 19 times to alert them to the potential risks and offer to get them out, according to the president and the secretary of state. Thousands who chose to stay changed their minds as the Taliban advance continued. One needn’t be an absolutist here. It is possible some Americans fell through the cracks, or that military allies were given short shrift. And certainly, countless Afghan citizens had neither the means nor the finances to leave the nation. Not everyone who wished to emigrate over that year could do so. Yet given that the withdrawal date was known and U.S. citizens were given the opportunity to get out, some blame must necessarily be placed on those who chose to stay despite the potential for danger — until, that is, the potential became actual.

Biden deserves harsh criticism, instead, for making stupid promises, for instance that there would be no chaotic withdrawal. The world is too unpredictable for that. Further, for a drone strike that blew up children before the last plane departed. And for apparently lying about his generals’ push to keep 2,500 troops in the country.

That is a good segue for a few final thoughts. The first revolves around the question: “Regardless of the ethics of launching a nation-building war, is keeping 2,500 troops in the country, hypothetically forever, the moral thing to do to prevent a collapse into authoritarianism or theocracy?” Even if one opposed and condemned the invasion as immoral, once that bell has been rung it cannot be unrung, and we’re thus forced to consider the ethics of how to act in a new, ugly situation. Isn’t 2,500 troops a “small price to pay” to preserve a nascent democracy and ensure a bloody war was not for nothing? That is a tempting position, and again one can have sympathy for it even while disagreeing and favoring full retreat. The counterargument is that choosing to leave a small force may preserve the nation-building project, but it also incites terrorism against the U.S. We know that 9/11 was seen by Al-Qaeda as revenge for U.S. wars and military presences in Muslim lands, and the War on Terror has only caused more religious radicalization and deadly terrorist revenge, in an endless cycle of violence that should be obvious to anyone over age three. So here we see another dilemma: leave, risk a Taliban takeover, but (begin to) extricate yourself from the cycle of violence…or stay, protect the democracy, but invite more violence against Americans. This of course strays dangerously close to asking who is more valuable, human beings in Country X or Country Y, that old, disgusting patriotism or nationalism. But this writer detests war and nation-building and imperialism and the casualties at our own hands (our War on Terror is directly responsible for the deaths of nearly 1 million people), and supports breaking the cycle immediately. That entails total withdrawal and living with the risk of the nation-building endeavor falling apart.

None of this is to say that nation-building cannot succeed in theory or always fails in practice. The 2003 invasion of Iraq, which like that of Afghanistan I condemn bitterly, ended a dictatorship; eighteen years later a democracy nearly broken by corruption, security problems, and weak enforcement of personal rights stands in its place, a flawed but modest step in the right direction. However, we cannot deny that attempting to invade and occupy a nation into a democracy carries a high risk of failure. For all the blood spilled — ours and our victims’ — the effort can easily end in disaster. (Beyond a flawed democracy and a massive Iraqi civilian body count, our invasion plunged the nation into civil war and birthed ISIS.) War and new institutions and laws hardly address the root causes of national problems that can tear a new country apart, such as religious extremism and longstanding ethnic conflict; they may in fact make such things worse. This should make us question the wisdom of nation-building. As discussed, you can “stay until the nation is ready,” which may mean generations. Then when you leave, the new nation may still collapse, as with Afghanistan, not being as ready as you thought. Thus a senseless waste of lives and treasure. Further, why do we never take things to their logical conclusion? Why tackle one or two brutal regimes and not all the others? If we honestly wanted to use war to bring liberty and democracy to others, the U.S. would have to bomb and occupy nearly half the world. Actually “spreading freedom around the globe” and “staying till the job’s done” means wars of decades or centuries, occupations of almost entire continents, countless millions dead. Why do ordinary Americans support a small-scale project but are horrified at the thought of a large-scale one? That is a little hint that what you are doing needs to be rethought.

Biden — surprisingly, admirably steadfast in his decision despite potential personal political consequences — uttered shocking words to the United States populace: “This decision about Afghanistan is not just about Afghanistan. It’s about ending an era of major military operations to remake other countries.” Let’s hope that is true.


Hegemony and History

The Italian Marxist Antonio Gramsci, writing in the early 1930s while imprisoned by the Mussolini government, theorized that ruling classes grew entrenched through a process called cultural hegemony: the successful propagation of values and norms which, when accepted by the lower classes, produced passivity and thus the continuation of domination and exploitation from above. An ideology became hegemonic when it found support from historical blocs, alliances of social groups (classes, religions, families, and so on) — meaning broad, diverse acceptance of ideas that served the interests of the bourgeoisie in a capitalist society and freed the ruling class from some of the burden of using outright force. This paper argues that Gramsci’s theory is useful for historians because its conception of “divided consciousness” offers a framework for understanding why individuals failed to act in ways that aligned with their own material interests, or acted for the benefit of oppressive forces. Note that this observation characterizes cultural hegemony as a whole, but it is divided consciousness that permits hegemony to function. Rather than a terminus a quo, however, divided consciousness can be seen as created, at least partially, by hegemony and as responsible for ultimate hegemonic success — a mutually reinforcing system. The individual mind and what occurs within it is the necessary starting point for understanding how domineering culture spreads and why members of social groups act in ways that puzzle later historians.

Divided (or contradictory) consciousness, according to Gramsci, was a phenomenon in which individuals believed both hegemonic ideology and contrary ideas rooted in their own lived experiences, with hegemony pushing the latter out of the bounds of rational discussion concerning what a decent society should look like. Historian T.J. Jackson Lears, summarizing sociologist Michael Mann, wrote that hegemony ensured “values rooted in the workers’ everyday experience lacked legitimacy… [W]orking class people tend to embrace dominant values as abstract propositions but often grow skeptical as the values are applied to their everyday lives. They endorse the idea that everyone has an equal chance of success in America but deny it when asked to compare themselves with the lawyer or businessman down the street.”[1] In other words, what individuals knew to be true from simply functioning in society was not readily applied to the nature of the overall society; some barrier, created at least in part by the process of hegemony, existed. Lears further noted the evidence from sociologists Richard Sennett and Jonathan Cobb, whose subaltern interviewees “could not escape the effect of dominant values” despite also holding contradictory ones, as “they deemed their class inferiority a sign of personal failure, even as many realized they had been constrained by class origins that they could not control.”[2] A garbage collector knew the fact that he was not taught to read properly was not his fault, yet blamed himself for his position in society.[3] The result of this contradiction, Gramsci observed, was often passivity, consent to oppressive systems.[4] If one could not apply personal truths to the workings of social systems, political action was less likely.

To understand how divided consciousness, for Gramsci, was achieved, it is necessary to consider the breadth of the instruments that propagated dominant culture. Historian Robert Gray, studying how the bourgeoisie achieved hegemony in Victorian Britain, wrote that hegemonic culture could spread not only through the state — hegemonic groups were not necessarily governing groups, though there was often overlap[5] — but through any human institutions and interactions: “the political and ideological are present in all social relations.”[6] Everything in Karl Marx’s “superstructure” could imbue individuals and historical blocs with domineering ideas: art, media, politics, religion, education, and so on. Gray wrote that British workers in the era of industrialization of course had to be pushed into “habituation” of the new and brutal wage-labor system by the workplace itself, but also through “poor law reform, the beginnings of elementary education, religious evangelism, propaganda against dangerous ‘economic heresies,’ the fostering of more acceptable expressions of working-class self help (friendly societies, co-ops, etc.), and of safe forms of ‘rational recreation.’”[7] The bourgeoisie, then, used many social avenues to manufacture consent, including legal reform that could placate workers. Some activities were made acceptable under the new system (joining friendly societies or trade unions) to keep more radical activities out of bounds.[8] It was also valuable to create an abstract enemy, a “social danger” for the masses to fear.[9] So without an embrace of the dominant values and norms of industrial capitalism, there would be economic disaster, scarcity, loosening morals, the ruination of family, and more.[10] Consciousness was therefore under assault by the dominant culture from all directions, creating heavy competition for values derived from lived experience, despite the latter’s tangibility.
In macro, Gramsci’s theory of cultural hegemony, to quote historian David Arnold, “held that popular ideas had as much historical weight or energy as purely material forces” or even “greater prominence.”[11] In micro, it can be inferred, things work the same way in the individual mind, with popular ideas as powerful as personal experience; hence the presence of divided consciousness.

The concept of contradictory consciousness helps historians answer compelling questions and solve problems. Arnold notes Gramsci’s questions: “What historically had kept the peasants [of Italy] in subordination to the dominant classes? Why had they failed to overthrow their rulers and to establish a hegemony of their own?”[12] Contextually, why wasn’t the peasantry more like the industrial proletariat — the more rebellious, presumed leader of the revolution against capitalism?[13] The passivity wrought by divided consciousness provided an answer. While there were “glimmers” of class consciousness — that is, the application of lived experience to what social systems should be, and the growth of class-centered ideas aimed at ending exploitation — the Italian peasants “largely participated in their own subordination by subscribing to hegemonic values, by accepting, admiring, and even seeking to emulate many of the attributes of the superordinate classes.”[14] Their desires, having “little internal consistency or cohesion,” even allowed the ruling class to make soldiers of peasants,[15] meaning active participation in maintaining oppressive power structures. Likewise, Lears commented on the work of historian Lawrence Goodwyn and the question of why the Populist movement in the late-nineteenth-century United States largely failed.
While not claiming hegemony as the only cause, Lears argued that the democratic movement was most successful in parts of the nation with democratic traditions, where such norms were already within the bounds of acceptable discussion.[16] Where they were not, where elites had more decision-making control, the “received culture” was more popular, with domination seeming more natural and inevitable.[17] Similarly, Arnold’s historiographical review of the Indian peasantry found that greater autonomy (self-organization to pursue vital interests) of subaltern groups meant hegemony was much harder to establish, with “Gandhi [coming] closest to securing the ‘consent’ of the peasantry for middle-class ideological and political leadership,” but the bourgeoisie failing to do the same.[18] Traditions and cultural realities could limit hegemonic possibilities; it’s just as important to historians to understand why something does not work out as it is to comprehend why something does. As a final example, historian Eugene Genovese found that American slaves demonstrated both resistance to and appropriation of the culture of masters, both in the interest of survival, with appropriation inadvertently reinforcing hegemony and the dominant views and norms.[19] This can help answer questions regarding why slave rebellions took place in some contexts but not others, or even why more did not occur — though, again, acceptance of Gramscian theory does not require ruling out all causal explanations beyond cultural hegemony and divided consciousness. After all, Gramsci himself favored nuance, with coexisting consent and coercion, consciousness of class or lived experience mixing with beliefs of oppressors coming from above, and so on.

The challenge of hegemonic theory and contradictory consciousness relates to parsing out the aforementioned causes. Gray almost summed it up when he wrote, “[N]or should behavior that apparently corresponds to dominant ideology be read at face value as a direct product of ruling class influence.”[20] Here he was arguing that dominant culture was often imparted in indirect ways, not through the intentionality of the ruling class or programs of social control.[21] But one could argue: “Behavior that apparently corresponds to dominant ideology cannot be read at face value as a product of divided consciousness and hegemony.” It is a problem of interpretation, and it can be difficult for historians to parse out divided consciousness or cultural hegemony from other historical causes and show which has more explanatory value. When commenting on the failure of the Populist movement, Lears mentioned “stolen elections, race-baiting demagogues,” and other events and actors with causal value.[22] How much weight should be given to dominant ideology and how much to stolen elections? This interpretive difficulty can appear to weaken the usefulness of Gramsci’s model. Historians have developed potential solutions. For instance, as Lears wrote, “[O]ne way to falsify the hypothesis of hegemony is to demonstrate the existence of genuinely pluralistic debate; one way to substantiate it is to discover what was left out of public debate and to account historically for those silences.”[23] If there was public discussion of a wide range of ideas, many running counter to the interests of dominant groups, the case for hegemony is weaker; if public discussion centered on a narrow slate of ideas that served obvious interests, the case is stronger. A stolen election may be assigned less causal value, and cultural hegemony more, if public debate was restricted.
However, the best evidence for hegemony may remain the close study of individuals, as seen above, who demonstrate some level of divided consciousness. Its very demonstrability makes contradictory consciousness key to Gramsci’s overall theory. A stolen election may earn less causal value if such insightful individual interviews can be submitted as evidence.

In sum, for Gramscian thinkers divided consciousness is a demonstrable phenomenon that powers (and is powered by) hegemony and the acceptance of ruling class norms and beliefs. While likely not the only cause of passivity to subjugation, it offers historians an explanation as to why individuals do not act in their own best interests that can be explored, given causal weight, falsified, or verified (to degrees) in various contexts. Indeed, Gramsci’s theory is powerful in that it has much utility for historians whether true or misguided.



[1] T.J. Jackson Lears, “The Concept of Cultural Hegemony: Problems and Possibilities,” The American Historical Review 90, no. 3 (June 1985): 577.

[2] Ibid., 577-578.

[3] Ibid., 578.

[4] Ibid., 569.

[5] Robert Gray, “Bourgeois Hegemony in Victorian Britain,” in Tony Bennett, ed., Culture, Ideology and Social Process: A Reader (London: Batsford Academic and Educational, 1981), 240.

[6] Ibid., 244.

[7] Ibid.

[8] Ibid., 246.

[9] Ibid., 245.

[10] Ibid.

[11] David Arnold, “Gramsci and Peasant Subalternity in India,” The Journal of Peasant Studies 11, no. 4 (1984): 158.

[12] Ibid., 157.

[13] Ibid., 157.

[14] Ibid., 159.

[15] Ibid.

[16] Lears, “Hegemony,” 576-577.

[17] Ibid.

[18] Arnold, “India,” 172.

[19] Lears, “Hegemony,” 574.

[20] Gray, “Britain,” 246.

[21] Ibid., 245-246.

[22] Lears, “Hegemony,” 576.

[23] Lears, “Hegemony,” 586.