If Free Will Is False, Destiny Is True

Free will is like God: perhaps dead, its absence carrying real consequences for morality (what Nietzsche meant by “Gott ist tot” was that the Christian God wasn’t believable, and that societal shifts away from him would undermine ethics), and yet impossible to fully disprove. By free will, we mean the ability to have done differently — the notion that the control we feel over our choices, words, and deeds is real, not delusional.

The more thought devoted to free will, the less believable it becomes. Even the mixed, limited bag of scientific findings generates at least some skepticism. Two short books I found interesting take opposing sides in the debate over the relevant studies: Sam Harris’ Free Will (2012) and Alfred Mele’s Free (2014). Though over a decade old, these works collectively remain a valuable and accessible introduction. Today’s commentary is little different. Skeptics of free will

point to evidence that we can be unconsciously influenced in the choices we make by a range of factors, including ones that are not motivationally relevant; that we can come to believe that we chose to initiate a behavior that in fact was artificially induced; that people subject to certain neurological disorders will sometimes engage in purposive behaviors while sincerely believing that they are not directing them. Finally, a great deal of attention has been given to the work of neuroscientist Benjamin Libet (2002). Libet conducted some simple experiments that seemed to reveal the existence of “preparatory” brain activity (the “readiness potential”) shortly before a subject engages in an ostensibly spontaneous action. (Libet interpreted this activity as the brain’s “deciding” what to do before we are consciously settled on a course of action.) Wegner (2002) surveys all of these findings (some of which are due to his own work as a social psychologist) and argues on their basis that the experience of conscious willing is “an illusion.”

Such interpretations have been criticized, but the findings themselves — for instance, that the brain lights up (milliseconds or even full seconds) before we make certain conscious choices — are largely taken for granted. The scientists and philosophers who believe in free will, such as Mele, rightly point to the constraints of the experiments, which ask participants to do mindless tasks. As neuroscientists recently wrote in Scientific American while arguing science has not disproven free will:

The neuroscience of volition typically focuses on immediate (or proximal) and meaningless decisions (for instance, “press the button from time to time, whenever you feel like it, for no reason at all”). The decisions we care about with respect to free will and responsibility, however, are ones that are meaningful and often have longer time horizons. Perhaps many, or even most, of our day-to-day decisions — choosing when to take the next sip from your water cup or which foot to put forward — are not acts of conscious free will. But maybe some decisions are.

Observe the ground that is given here. It is seismic that what we once regarded as a conscious choice — reaching for your water to take a sip — was actually a directive of the subconscious. Your brain began firing long before you “decided” to act. You had your orders, and you followed them, unwittingly. (Note that this is not marveling over the fact that you reached for your glass without an inner monologue — “I should drink now.” Most of what we do is done without the voice inside our heads speaking. But we still assume that we decided to do whatever it was, not our subconscious.) No, science has not shown free will to be false. But it has produced cause for doubt. If my decision to stand rather than remain sitting was not really my decision, it is at least possible that more meaningful, higher-order “choices” — whether to quit a job or propose — are also guided by subconscious processes outside of one’s awareness. We will have to see what future experiments bring.

Philosophy also erodes trust in free will. First, consider the experiential. Sam Harris, on his Making Sense podcast, once suggested we try the following. Think of a movie. Go ahead, any movie will do. Do you have one? When we do this, in no sense do we choose which film arrives. One simply bubbles up from the dark. Who chose it? Well, your subconscious delivered it to you. This is merely a fun introduction to the idea that we may not be in the driver’s seat as much as we think, but it is imperfect, for the conscious self at least called out for an example. It is more valuable to simply reflect upon the instances when a random thought pops into your head. We’ve all experienced this. Have you ever thought to yourself afterwards What the fuck was that? or Where did that come from? The fact is, thoughts often come to us completely against our will. They are much like emotions — in the same way the brain inflicts anger, sadness, embarrassment, and so on upon you, many thoughts arrive uninvited and often without mercy. Sometimes we blurt them out, “speaking without thinking.” And of course you are pure animal instinct when you duck to safety from an object hurtling toward your face. Is it so strange to suppose our “choices” might be automatic and involuntary in bodies defined by such terms, where thoughts bubble up from nowhere, unwelcome emotions burn, instinct takes over actions, lungs breathe unnoticed, and the heart drums unstoppably?

More importantly, determinism seems obviously true, as plain as the nose on your face. Think of a mistake from your past. Why do you regret it today? Why, it’s because you’ve had many life experiences since then, you’ve gained wisdom or a new perspective, you’re a different person. If you had been the person you are today back then you could have avoided the misstep. But the reason you made the choice you did was because that’s who you were in that moment. This is self-evident. To have made a different choice you would have had to have been a different person. And how is that possible?

Free will is the ability to have chosen differently. To legitimately choose among options before you at every present moment — for instance, to continue reading or to stop reading. We make a choice, but there is reason to suspect this is an illusion — surely we were always going to choose whatever we did. It seems obvious that each “choice” is simply the product of every moment that came before. How could it be otherwise? Each choice is the inevitable end result of every thought, feeling, “choice,” act, life experience, genetic disposition, and so on you’ve ever had. It is the effect of countless causes. That’s what’s meant by determinism. The thesis seems difficult to deny. How can one argue that who you are in any given moment is not the creation of all preceding moments (going all the way back to conception)? How can one argue that who you fundamentally are in that moment does not determine the choice you make? This would make little sense. Every biological, environmental, and experiential factor determined who you were, and who you were could not have chosen differently — only a different you could have done that! Free will seems illusory.

This conclusion can cause consternation. Some see life as less meaningful or real, despite still being surrounded by the wonderful things that made their lives rich and full. As hinted at in the opening paragraph, people wonder where all this leaves morality. If all decisions are inevitable, are we really responsible for our actions? The killer was never not going to kill, after all. He was the product of all past things; how is it his fault? First, it must be said that the question of moral responsibility (like meaning) is often used irresponsibly: it is used to argue for the existence of free will. Free will must be true, we must believe in it, or no one will be responsible for her own actions, everyone might start killing each other! This is the fallacy argumentum ad consequentiam, believing something is true because things would be bad if it weren’t. Sorry, potential consequences don’t have anything to do with whether something is true or false.

Second, and more to the point, skepticism of free will does indeed weaken or reframe the idea of moral responsibility, perhaps stressing the need to build a more decent society, to improve the environment and experiences of all people, to change behavior. If poverty has something to do with crime, eliminate poverty. If a rapist rotting in prison is the result of his fate, not his genuinely free choices (recall that children who are sexually abused are more likely to become sexual abusers themselves; who we are is the result of all preceding realities), more mercy — improved prison conditions and rehabilitation, elimination of the death penalty and solitary confinement — may be justified. Regardless, the concerns over ethics and accountability have always seemed overdramatic. If everyone gained The Knowledge, judging free will and personal responsibility to be fictions, certain people might engage in foul words and deeds they otherwise wouldn’t have (they won’t be able to help it). But most people probably wouldn’t (they won’t be able to help it). This is because acquisition of The Knowledge would be only one cause in an ocean of causes that determine one’s choices. It might be a big one, but so are genetic disposition, a happy life, fear of consequences, and so on. You’ve read a few things in this piece that perhaps make you doubt free will a bit; do you now feel a bit closer to being able to rape or murder someone? Probably not, due to all the other factors that make you who you are. In the same way, laws and punishments, while perhaps reformed, would not disappear if everyone had The Knowledge. Even without belief in free will, we would still be vulnerable, living creatures: most people would still not want to be harmed (they won’t be able to help it) and would thus (again, inevitably) demand violent people be kept away from the general population, regardless of whether such criminals are morally responsible for their actions. As others have pointed out, we already do this. An insane person, a child, or someone who commits crimes while sleepwalking is not considered as morally responsible for misdeeds as your usual adult, but they are not exempt from law or restraint. (The overall concept of morality isn’t going anywhere either, because it is necessary to justify that desired protection from physical harm, as it always has been. Plus, to say we do not freely choose between moral and immoral possibilities is not to say such possibilities have no meaning, as if the latter don’t cause real suffering or violate holy scriptures. We would still want to teach and internalize ideas of what’s right, a powerful causal factor of a desired effect: the unavoidable “choice” to do good, avoiding real-world harms.)

If free will is false, destiny is true. Here it’s skeptics of agency that must be careful to avoid fallacy, because the positives that might come from free will’s nonexistence cannot be used as evidence or argument for such nonexistence. That will always be a temptation, because determinism is psychologically comforting. As already implied, it helps us let go of regret and dissatisfaction. Our most terrible mistakes needn’t burden us any further. You were always going to make that choice. It couldn’t have happened any other way. It’s who you were. Our present conditions, no matter how miserable, no matter what we lack, were likewise inevitable. It was always going to be this way. You can be at peace, grateful for what you have, what you inevitably received. See, determinism is also like God: so comforting we should be suspicious.

I cannot conclude with full conviction that free will is false, for while it is less believable now, it has hardly been disproven. However, though armed with a healthy suspicion, I can appreciate the new meaning that would be wrought by The Knowledge. Destiny is a beautiful idea, and here it is fully realized, in the secular world. A few Christian sects reject free will and embrace the concept of fate (see Calvinism, predestination, theological determinism, and so on), but most are mired in the quicksands of their own contradictions: as a human being I was divinely created with free will, yet, as the song goes, “God has a plan for my life.” When God intervenes in this world and saves you from a killer, he violates the free will of two people. How free are you if gods ensure your life goes just so? All that can be put aside. There are no contradictions with the destiny considered here. Old phrases that used to feel so empty to us rationalists who reject religion, astrology, and so on — “everything happens for a reason,” “if it’s meant to be,” “you’re where you’re supposed to be” — are suddenly imbued with new meaning. And that’s a delightful thing.


America Is Simply Too Absurd for Democracy to Survive

The descent continues. The facts are well known. Support for authoritarianism, closely tied to conservative ideology in an avalanche of studies, is frighteningly high among Trump supporters and Republican voters in general. Trump and his allies schemed to stay in office after losing a free and fair election in 2020, attempting to throw out and replace Biden electors, while a rightwing mob stormed the Capitol with similar intent, leaving multiple people dead. The rightwing Supreme Court ruled in 2024 that presidents are virtually immune from criminal prosecution — the law simply does not apply to them. They can order subordinates to do anything, even assassinate political rivals. Trump praises dictators and claims Americans desire one; he openly calls himself a king and positions himself, now accurately, as above the law. The madman and his cult are talking about a third term. A Republican in the U.S. House introduced a bill to allow this. At speeches, Musk and Bannon openly give Nazi salutes, with no consequence. With his executive orders on birthright citizenship, elections, and more, Trump willfully violates the Constitution, among other laws. His ICE underlings may even have worked to deport U.S. citizens, the children of the undocumented. Hundreds of U.S. citizens have been wrongfully arrested, without probable cause, due to their race and language. Foreigners here legally have been arrested with intent to deport, though charged with no crime, over their political views. There is increasing talk of stripping Americans of their citizenship. Trump, Vance, and others publicly question federal judges’ constitutional right to check presidential power. In March 2025, they willfully ignored the orders of a federal judge to terminate a deportation flight. In April, they ignored court orders to restore press access to the AP. They later disobeyed judicial rulings on allowing potential deportees to challenge removal to unfamiliar nations, and ignored stays of deportation. Republicans have made it harder to enforce contempt of court rulings and called for the impeachment of judges who attempt to block Trump’s actions. In the summer, Trump deployed Marines on U.S. soil against U.S. citizens, an illegal act. He declares emergencies that do not exist to take over police forces and send in soldiers to American cities. Several key Rubicons have been crossed, and the end of functioning democracy is increasingly easy to visualize.

Democracy is inherently fragile because it is voluntary, surviving only as long as public officials take it seriously. One must choose to obey federal court orders, accept an election loss, or follow established law because democracy is more important than holding onto power, than enacting ideology. Once those priorities are reversed, the house of cards quickly collapses, as we are witnessing. Yet the United States has several features that make it especially vulnerable to authoritarianism, whether under Trump or someone else in the future. We saw some of these in An Absurd, Fragile President Has Revealed an Absurd, Fragile American System (for instance: “A president can fire those investigating him — and replace them with allies who could shut everything down”). Everywhere we turn, we see absurdity — the great accelerant to our destruction.

The populace was top of mind after the disastrous November 2024 election that restored Trump to power. Clearly, voters cannot be relied upon to save a nation from the descent. Those familiar with history already knew this, of course, as authoritarians are often highly popular, plus polarization and the two-party trap grease the wheels (see Three Thoughts on Democracy). Still, the outcomes were horrifying. Trump lost the popular vote in 2016 by 3 million (winning power through the Electoral College, itself an anti-democratic lunacy that makes it much harder for the people to stop a tyrant), lost the popular vote in 2020 by 7 million, then won the popular vote by 2 million in 2024. The Democrats earned 6 million fewer votes in 2024 compared to the prior contest. Hispanics, young people (especially men), and other groups shifted toward Trump. Some 77 million people — a mix of true believers and the conservatives and moderates who dislike Trump but feel compelled to stop the evil Democrats — gave Trump the presidency once more, after all we’ve witnessed, all his awful words and deeds, the chaos and insanity, his pathological lying, extremist policies, demagogic tendencies, attempts to undermine democracy and the rule of law, and his extracurricular criminality (found guilty of or liable for falsifying business records, forcing his fingers into a woman’s vagina, defamation, and defrauding banks and insurance companies). People simply don’t care. Not enough to stick with the Blue candidate or abandon the Red one. That someone like this can keep winning does not bode well for the American future.

Yet the 2024 election brought into sharp relief a more profound absurdity of the populace. It’s what one might call the know-nothing voter or, more charitably, the reactive voter. On Election Day there were worrying spikes in U.S.-based Google searches of “Who is running for president?” and “Did Joe Biden drop out?” And after: “Can I change my vote?” In yet another infamous, shocking street interview on Jimmy Kimmel Live, people were asked, on the day after the election, if they were planning to vote. Respondents were unaware the election was over, and at times unaware of who competed. In a post on socials, a Hispanic man was stunned to learn, after voting for Trump in hopes of lower gas prices, that Trump favored mass deportations. Some voters indeed have regrets, seemingly not understanding what they supported. One must use caution with such things (the anecdotal, the clips selected for entertainment value, searches about the dumb searches skewing the search data), but plenty of people know nothing of politics and do not consume the news, even in a social media age that makes it difficult to avoid. But some of them still vote! How large a voting bloc they represent is impossible to know. Thousands? Millions? There’s probably some crossover between know-nothing voters and swing voters. About 5-6% of Biden’s 2020 supporters switched to Trump in 2024 (3-4% of Trump’s 2020 voters voted Democrat). In 2020, nearly 6% of Americans voted for the opposite party they had in 2016, with more switching to the Democrats. Around 13% of Trump’s voters in 2016 had backed Obama in 2012. There is an army of 8-9 million people each election who are unmoored from the parties; some in this number are probably unmoored from coherent political ideology and awareness of basic happenings. They simply react. In 2024 they raced to Trump over inflation, just as they raced to Obama in 2008 over economic turmoil. It was a fantasy to believe Biden’s dominant victory in 2020 was a repudiation of Trump himself, rather than a fear-based reaction to economic strife and COVID. The economy is basically always the top concern of voters, so the pattern is likely a jumping of ship in hard times, hoping the other party will somehow aid survival and prosperity, with no devotion to free markets or government intervention, to beliefs or ideology, to be found. This is understandable, as people are crushed by poverty and desperate to meet their personal needs, but it might spell doom for democracy. If concerns about authoritarianism and criminality cannot at some point, among moveable voters, override other concerns, or never even register due to lack of awareness, we are in grave trouble. Of course, this rogue element has the potential to save us as well, as the bewildered herd will rush away from a would-be tyrant overseeing a bad economy, but this only works as long as meaningful elections persist.
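
For the curious, the 8-9 million figure is simple back-of-envelope arithmetic. A minimal sketch, assuming a turnout of roughly 155 million presidential voters (the turnout number is a rough assumption, not a figure from this piece; the 5-6% switch rate is the one cited above):

```python
# Back-of-envelope estimate of the "unmoored" swing bloc.
# ASSUMPTION: roughly 155 million presidential voters, in line with
# recent turnout; the 5-6% switch rate comes from the text above.
turnout = 155_000_000

for switch_rate in (0.05, 0.06):
    switchers = int(turnout * switch_rate)
    print(f"{switch_rate:.0%} of {turnout:,} voters is about {switchers:,}")
# Prints roughly 7,750,000 and 9,300,000: the "army of 8-9 million."
```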

Around 90 million Americans, per usual, did not vote in 2024, another boon to a potential authoritarian. Many people are too busy trying to survive to pay attention to politics or vote; many feel it won’t make a difference in their lives. So many in this bloc do not know what’s happening either (the know-nothing non-voter), a dangerous reality.

Right after Trump was reelected, by the way, the federal charges regarding his election interference were dropped and the state case (Georgia) concerning the same crimes was postponed indefinitely, as sitting presidents are not to be prosecuted (Trump’s incoming Justice Department would have axed the federal charges anyway). What a delightful state of affairs, that winning the presidential election is a Get Out of Jail Free card, that our ability to stop a would-be tyrant through legal means is contingent upon the idiocy of voters.

We now turn to the presidential pardon, a massively obvious mistake from the beginning. Article II, Section 2 of the Constitution planted a bomb for all to see, and it was only a matter of time before it blew up democracy and the rule of law. Some founders saw clearly at the Convention of 1787:

There was little debate at the Constitutional Convention of the pardon power, though several exceptions and limitations were proposed. Edmund Randolph proposed reincorporating an exception for cases of treason, arguing that extending pardon authority to such cases “was too great a trust,” that the President “may himself be guilty,” and that the “Traytors may be his own instruments.” George Mason likewise argued that treason should be excepted for fear that the President could otherwise “frequently pardon crimes which were advised by himself” to “stop inquiry and prevent detection,” eventually “establish[ing] a monarchy, and destroy[ing] the republic.” James Wilson responded to such arguments by pointing out that if the President were himself involved in treasonous conduct, he could be impeached.

This naively underestimated the devotion to the madman we would see from his party in Congress. Yes, the House may impeach (if controlled by the opposition party), as it did twice with Trump (and before him Bill Clinton and Andrew Johnson), but the Senate will acquit, as with all these examples, and the authoritarian will remain in office. The Senate is unlikely to ever reach the 67 votes needed to convict. You’d need impossibly strong bipartisan support. A few Republicans — Mitt Romney, Liz Cheney, Adam Kinzinger — have been brave and principled enough to warn of Trump’s danger to democracy, but most in the GOP have shown nothing but slobbering fealty, racing to lick his boots.

Thus, any president bent on “destroying the republic” is free to issue pardons to allies, “his own instruments,” involved in such a plot. Whether you participate in a violent coup or an illegal political scheme to throw out election results, you will be forgiven — immediately if you were successful at installing your strongman, later on if you failed (eventually the strongman or his party will return to the White House). In January 2025, Trump issued pardons to the 1,500 rioters who ransacked the Capitol, most of whom had been convicted in court. Now, his political allies found guilty or accused of attempting to overturn the 2020 election committed state crimes, with trials in Georgia, Arizona, Wisconsin, Nevada, and more — presidents can only pardon federal crimes (that is, until Trump attempts to ignore this law as well). But any federal offenses committed by a madman’s cronies on the road to authoritarianism will be pardoned, and loyal governors and clemency boards can easily wash away the state crimes. The pardon ensures that attacks on democracy will simply go unpunished, encouraging further similar acts, perhaps one day fully successful.

The power to pardon will almost certainly not be revoked. You would again need two-thirds of the Senate, plus two-thirds of the House, to propose an amendment to the Constitution, then the approval of three-fourths of the states. (Alternatively, you’d need two-thirds of the states to propose a Constitutional Convention, then three-fourths of the states to approve the amendment.) Given the predictable loyalty to the strongman wielding the pardon, and the crazed polarization and propagandistic parallel worlds wherein Republicans frame any step Democrats take to protect democracy and the rule of law as an attack on democracy and the rule of law, this bar is too high.
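
To make that bar concrete, here is the arithmetic behind the Article V thresholds, a quick sketch using the current chamber sizes (100 senators, 435 representatives, 50 states):

```python
import math

# Article V supermajorities needed to amend the Constitution
# (e.g., to revoke the pardon power), using current chamber sizes.
SENATE, HOUSE, STATES = 100, 435, 50

thresholds = {
    "Senate (2/3 to propose)": math.ceil(2 * SENATE / 3),  # 67 senators
    "House (2/3 to propose)": math.ceil(2 * HOUSE / 3),    # 290 representatives
    "States (3/4 to ratify)": math.ceil(3 * STATES / 4),   # 38 states
}

for step, needed in thresholds.items():
    print(f"{step}: {needed}")
```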

Let us now consider the problem of a president who terminates our system of checks and balances by ignoring judicial edicts. As with the pardon (recall Ford’s 1974 absolution of Nixon for his crimes at Watergate), one can find historical examples of this problem — for instance, Jackson refusing to enforce Supreme Court orders to Georgia concerning the Cherokee in 1832 or Lincoln defying the Supreme Court and suspending habeas corpus during the Civil War. Past affronts should offer no comfort (“Well, these terrible things happened then and democracy survived!”), but should rather serve as frightening warnings, for these weaknesses at some stage will be exploited to such an extent and with such malicious purpose that what follows will be far less rosy.

If the American system had a modicum of sense, the judicial branch would have direct control of its law enforcement mechanism. Federal judges and the Supreme Court can dispatch the U.S. Marshals to arrest those who violate their orders, but the Marshals are part of Trump’s Department of Justice! The director of the Marshals is appointed by the president and answers to the attorney general, also a Trump lackey. An administration that defies the judicial branch once would simply do so again, withholding use of the Marshals. It is difficult to imagine federal judges ordering Trump taken into custody for exceeding his constitutional authority (which is not protected by the 2024 immunity ruling), let alone a rogue Marshals office or Justice Department that would actually carry this out. Now, there is some room for action. Should a judge be brave enough, she could theoretically bypass the Marshals and legally deputize others to make an arrest. However, an authoritarian would likely refuse to go, rallying the Secret Service — and superior numbers — to keep the deputies out of the White House. Given this fact, that of inevitable confrontation, perhaps it does not matter whether the courts directly control and dispatch the Marshals, but the setup has certainly created roadblocks helpful to an authoritarian. (What help, of course, can we truly expect from judges? Serious judges lay down no punishment when Trump is found guilty of business fraud, while Trump-leaning judges recklessly dismiss criminal cases against him concerning the theft of classified documents.)

The military stepping in, while also highly unlikely, is probably the only real hope for preserving democracy. Should an authoritarian attempt to stay in power when his legal term expires, refuse to follow the orders of federal courts, or pretend to change the Constitution on his own (or with a simple majority vote in Congress, because why not simply ignore the rules if it serves your purposes), one would hope that the Joint Chiefs of Staff — the heads of each military branch — would stand united in defense of the Constitution, leading a contingent of soldiers to the White House to remove the strongman and restore the democratic order, with or without violence. As long as the military remains loyal to a tyrannical commander-in-chief, there is little hope. In the end, democracy probably only survives behind the barrel of the gun.

And it surely must be the military gun. There has been much talk of civil war lately, partly because it feels good to imagine mowing down the other side, whatever side that is. Armed civilian resistance on any large scale would probably be wiped off the face of the earth immediately. The advantages possessed by the American military are astronomical. Any comparison to 1775, or even modern insurgencies in the Global South, simply does not take seriously the absurd might of our military machine. Of course, fascism falling to civilian forces is not impossible, and at some point it becomes a moral duty to fight for freedom, despite questions of efficacy. Small-scale, underground civilian violence, akin to the French Resistance against the Nazis, could have an impact. The vigilante assassination of the strongman and other officials may help slow or stop authoritarianism. Of course, it may make things worse (though at some dystopian stage one has nothing to lose). Now, if the military became divided against itself there would be opportunities. Same for the states turning on each other, as in the American Civil War (though what a mess this would be, with essentially all states defined by liberal cities and rural conservatism, with less geographic-ideological coherence than the 1860s bloodbath over slavery). Those longing for a nonviolent solution, as I do, may eventually have but one final hope. The type of nonviolent revolution I described in Why America Needs Socialism, in which tens of millions of people shut down American cities, bringing society to a halt until demands are met, even at the risk of being massacred, could prove effective. But despite recent record-setting protests of 4-6 million Americans condemning would-be kings, this would be a tall order, as Americans have no modern history of national strikes — many probably could not tell you what that means. The United States is not France or India, whose civilians know what it’s like to shut down a nation. This is a serious impediment to democracy’s survival.

Two points of clarification. First, I think it is far more likely that nothing happens, at least not for a long time. No judicial deputies, no Joint Chiefs intervention, no nonviolent revolution, no underground resistance, no civil war. Even when an authoritarian illegally remains in office or more literally rewrites the Constitution. Life, and the descent, will simply go on. That is speculative, but suggested by the failures of the current moment (and by the relative passivity, at least for long stretches of time, of other populations under tyrannical regimes throughout history). What exactly in the past 10 years engenders confidence that a bold, strong, effective response is coming a few feet further down the pit, not too far past our current position where the federal courts are ignored? True, the worse things get, the more likely dramatic action becomes. But all the talk of civil war and such probably underestimates just how dark things will need to be. Perhaps it is our grandchildren who will witness dramatic things. As a second clarification, I will simply reiterate that “the authoritarian” in this writing against whom the military and populace would act may be Trump or it may be someone in the future. Trump and his loyalists are doing immense damage to the democratic order, and have revealed frightening possibilities, but he may nevertheless leave office for good in 2029. The point of this piece is to consider the absurdities that the Trump era has highlighted and how they benefit any strongman looking to cast aside democracy and the rule of law. Trump may not oversee the full termination of our system. It may be someone else, someone worse, whether in a decade or a century. Perhaps much of the above is authorial bias, not wanting to personally witness the end, not wanting to kill or die, but I think reasonable possibilities are described nonetheless.

It is difficult to stave off pessimism, as little has been done to stop the descent. And there is so much more. (Apologies to both the dead horse and you, the exhausted reader.) Consider that in 2025, the Supreme Court ended nationwide injunctions, the ability of federal judges to quickly stop a president’s unconstitutional acts — judges can now only stop a president if a class-action lawsuit is filed. Since 2024, the Supreme Court has allowed us to pay politicians for their decisions, as long as it’s after the fact — it’s not a “bribe,” it’s a “gratuity.” We at least used to pretend to be against corruption. In 2019, the Supreme Court ruled it cannot regulate political gerrymandering, leaving such a task, disastrously, to the states. Should state courts allow politicians to choose their voters, rather than voters choosing their politicians, this will help the authoritarian’s party carve out more seats in the House and elsewhere to maintain power, or else lead to the type of redistricting war we are currently witnessing. In 2023, the Supreme Court was actually just a couple votes away from freeing state legislatures from any regulation concerning elections, meaning not even state courts could stop gerrymandering (remember the abhorrent “Independent State Legislature Theory”?). And we haven’t even gotten to Christian nationalism, closely tied to authoritarian views. In 2025, we were one vote away from publicly funded religious schools; Christian supremacists would strip women of their right to vote if given the chance.

One experiences haunting feelings of inevitability, and not solely because the rot spreads unabated. After all, no nation will last forever, no democracy will last forever. Perhaps it persists 250 years, perhaps 2,500. But not forever. What if we happen to live in that particular moment in history when that inevitability comes to pass? The temptation to accept fate, to let go of one’s craving for an end to the descending madness and thus relieve the mind of its suffering, grows quite strong. This entire piece, its headline and argument, gives in to that temptation to a large degree. Of course, one can never stop fighting, for perhaps we don’t live in that particular moment. Our actions can determine whether or not we do. And one takes some solace in the fact that democracy can be restored later. It did not last in Athens, Rome, Germany, and so on, but today the citizens of these places enjoy it anew. Poland, Brazil, Senegal, and others have rescued their democracies from the brink (many other countries failed to do so). We will see whether America can do the same despite its foolish people and systems — or we will see how long it takes to emerge from a period of tyranny.


Dying Girls and Dead Theses

It is an unenviable end, doing a vast amount of research on a particular subject and discovering your insights have already been made by others. I recently set about studying the historical trope of “the dying girl,” imagining I would write my thesis, to conclude my second master’s degree, on the topic. Unfortunately, while it was clear from primary sources that the dying girl had much to tell us about past American thought, the more scholarship I read the more it grew equally clear that the subject had been thoroughly covered. That is the way of things. Your work has to offer something new, but you cannot know if you have something new until you’ve read everything that everyone else has already written on the matter, a mammoth task, or at least read enough to come across your ideas. I also find it unnatural to try to find something new under the pressure of the clock (ringing at the end of the semester). Historical discoveries and meaningful insights cannot be rushed or forced — they may take months, years, or decades to find, and I’d rather come across them organically.

But it seemed a shame to not do something with this work, so I thought I would take some of my notes and craft a short piece.

In the nineteenth century, the archetype of “the dying girl” or “the dying woman” pervaded art and literature throughout the West, from the United States to Britain to Australia. A young female on her deathbed was the subject of a deluge of poems, short stories, novels, plays, songs, paintings, photographs, and sculptures. However manifested, the dying girl did a great deal of cultural work — as cultural theorist John Storey writes, works of art “always present a particular image of the world” and “win people to particular ways of seeing” it (Cultural Theory and Popular Culture: An Introduction). The dying girl broadcast ideas on sexuality, religion, and perhaps even industrialization, the latter representing one of my few (somewhat) original insights or theories on this matter.

The archetype arose from various causes. While historians and English scholars have marked the nineteenth century as a high point of the archetype, it was hardly new, having appeared in preceding centuries. It was perhaps an outgrowth of imagery of the Virgin Mary’s death. Compare Henry Peach Robinson’s photograph Fading Away (1858) with Death of the Virgin by Rembrandt (1639), Bruegel (1564), or Christus (1460). Jessica Straley, associate professor of English at the University of Utah, writes, “Scholars explain Victorian attitudes toward death as the result of the timely confluence of three traditions: the Gothic fixation on the mysterious and the melancholic, the Romantic veneration of nostalgia and decay, and the Evangelical view of death as a lesson for the living.” Thus, alongside an explosion of dying girls in art, “the nineteenth century uniquely transformed death into an elaborate performance: the century saw the invention of the modern cemetery, the elaboration of the funeral into an extravagant and expensive visual spectacle, and the codification of mourning attire and etiquette.”

There is likely a connection between the dying girl and tuberculosis, which killed large numbers of people in the nineteenth century. Scholars such as English professor Katherine Byrne of the University of Ulster in Northern Ireland (Tuberculosis and the Victorian Imagination, 2011) and Ashleigh Black, doctoral student of visual culture at the University of Aberdeen in Scotland (“Even in Death She Is Beautiful: Confronting Tuberculosis in Art, Literature, and Medicine,” 2022) have considered this. Tuberculosis, or consumption, was a “wasting disease” that British Victorians saw as enhancing feminine beauty, offering “pale, waxen features and a thin figure.” Red cheeks and glittering eyes were other key traits. Thus, real-world deaths, and ideas of beauty, bled into art. In depictions, dying girls were always young, fragile, and pretty. Symptoms of TB include cough, exhaustion, fever, weight loss, aches, and headaches, some of which are referenced in the American poems and short stories I examined (though what the dying girl is dying of is never worth directly mentioning, in the same way her name is usually unimportant).

A typical example of such a primary source (and it cannot be emphasized enough how uniform these writings are, right down to their identical titles, which limits mineable meanings) is Mrs. John K. Laskey’s “The Dying Girl” from Godey’s Magazine in 1843. The dying girl is “lovely,” “gentle,” and “youthful,” with “light curls on her brow.” Laskey depicts a struggle between the natural desire to live and the will of God, his invitation to paradise. With a warm and beautiful summertime, and a garden she “worshipped,” reminding her of the “thousand lovely things of earth,” the dying girl feels the “deep yearning of the soul for life,” longing to “delay” the end. She cries out: 

To die, alas, to die!
To say farewell to all my heart holds dear;
To pass from earth while yet the summer sky
Resounds with gladsome voices, sweet and clear, —
Oh! would I might yet longer tarry here! 

The dying girl eventually comes to her senses (“Father in heaven, my reason half departs!”), overcomes her fears, and accepts God’s will. 

Yes, yes! I will submit! — 
Forgive the spirit that has dared rebel! 
And, holy Father, if THOU thinkest fit, 
Take me from earth, for THOU dost all things well! —
With THEE, henceforth, I would for ever dwell!

Here you see the three key elements: the dying girl’s beauty, her noble Christian example, and her love of nature. These appear throughout my sources, whether written by men or women (typically white and middle or upper class). Of course, scholars have seen sexual connotations in the dying girl’s beauty and weakness. The archetype has been described as “a male fantasy of women’s bodily surrender.”

Magda Romanska, theatre scholar at Emerson College, explores the connection between the dying girl and sexuality in “NecrOphelia: Death, Femininity, and the Making of Modern Aesthetics” (2005). The nineteenth century, she writes, “was a period of morbid aesthetics and a peculiar and apparently inexplicable fascination with deadly eroticism.” Indeed, the “poetic and artistic imagination of the time began conceiving of the erotic as invariably ‘touched by death’ and of death as invariably touched by the erotic.” European artists, for example French sculptor Auguste Clésinger (“Woman Bitten by a Serpent”), offered in their work naked or semi-naked dying women, whose limp, sprawled forms could be read as post-orgasmic as easily as post-mortal. The dead or withering female body was inherently pornographic. The male gaze, Romanska argues, had a particular impact on artistic depictions of Shakespeare’s Ophelia, who became a central necrophilic subject: “dead, yet sexually appealing.”

Of course, the nineteenth-century writer could not get away with descriptions of the nude body or various stages of undress and hope to be published; the painter and sculptor had more freedom in this regard. The furthest one could go would be where preacher and historian Timothy Horton Ball went in his 1893 Annie B., The Dying Girl: the girl is “very fair,” a “maid” (virgin) of “faultless form” and “native grace,” her cheeks and lips a “rosy hue,” her eyes “tender,” “bright,” and “blue.” This is alongside a beautiful voice and loving nature. Her lips and skin of course grow pale from illness. Not exactly erotica, but certainly pleasing to the male gaze. Elsewhere vulnerable girls are “sad and languid, weak and faint.”

And the dying girl is indeed at times explicitly a maiden, a sexual prize. (Though not always: see Deborah Deacon, “Seduced and Dying: The Sympathetic Trope of the Fallen Woman in Early and Mid-Victorian Britain.”) Professor of English Susan K. Martin of La Trobe University (Australia) explores this in her 1995 article “Good Girls Die; Bad Girls Don’t: The Uses of the Dying Virgin in Nineteenth-century Fiction.” After pointing out that nineteenth-century Western narratives confined young middle-class women to very few roles (“she can marry; she can fall; she can die”), Martin turns to the dying virgin, finding in her an “unthreatening ideal.” By ideal, Martin means that the dying girl is successfully trapped in a “closed system of virginity.” The only way to preserve her sexual purity is to kill her. The dying girl is unthreatening because her agency and power — sexual, social, economic, political, and so on — are curtailed through illness and eventual termination. In much Australian literature, Martin observes, including works by women, female characters who are transgressive, who push against class or gender boundaries, are both punished and cured of “over-active and over-assertive” traits by illness and death — they are made powerless and passive. 

However, various interpretations, and myriad authorial anxieties, are possible: the “death of young women on the brink of sexual maturity” could also be read as escape “from repressive models of nineteenth-century female maturity, the sexual economy and gender boundaries which burden the adult female.” Martin also notes that dying girls issue commands and instructions to men that can hardly be refused (they also provide religious teachings to men, for instance in Ball’s piece). Narratively, the deathbed is a place of power: with the tale about to end and the dying girl showered with love and adulation, anything can be asked for. Further, in some instances, as with Beth and Jo in Alcott’s Little Women, one can read a dying (ideal, domesticated) young woman as enabling the more radical path of another young woman. Other scholars have noted that Poe’s dying women tend to return from the grave. Dying girl stories can thus be seen as sites of negotiation or feminist subversion, not solely as weapons of male supremacy and the gender order.

However, it must be said that the short nature of the works that appear in American magazines and newspapers does not allow for substantial narrative (and, again, there exists little variance). There is no before or after, no time for the girl to be transgressive, for other characters to walk a life path, etc. The girl is introduced on her deathbed and described physically; she comforts a loved one, reminisces about her childhood in nature, accepts God’s will and delights in seeing heaven, and dies. It is a steady drip of specific ideas into the cultural body. While we certainly see commands that cannot be refused, they tend to service the religious indoctrination. In Daniel Cooledge’s 1833 The Dying Jewess, the girl begs her father to find Christ as she has; she passes away and he does just that.

Beyond the alluring femme fragile and sexually untouched object, beyond the exemplar of the good Christian death (which was a staple of children’s literature in the nineteenth century and prior centuries, featuring more than just dying girls), only the adulation of nature appears to have received little commentary.

Samuel D. Patterson’s “The Prayer of the Dying Girl” was published in Godey’s Magazine in late 1848. The titular character fondly remembers her childhood home and pleads to be taken there to die. She recalls the green valleys, streams, mountains, and plains she would explore, the “happy days that there I spent when health and strength were mine,” when she “never knew a pang of sorrow or of pain.” The Mourner’s Chaplet: An Offering of Sympathy for Bereaved Friends from 1844, edited by John Keese, contains three works entitled “The Dying Girl,” seemingly by different authors. In the first poem, the girl lifts up the natural world left behind: “Speak of me, when the summer day is bright / With glorious sunbeams, and the golden light / Streams through the lattice of my own green bower / Let me be there, in that rejoicing hour.” Heaven is likewise described as an everlasting spring, with flowers and streams and peaceful skies. The second poem speaks of exploring the ocean shore and picking flowers in childhood. In the third work, the dying girl admires the “golden sun” that warms her “lovely” “native land,” remembering herself as a “careless merry child” who “twined me garlands of sweet wild flowers — no hot-house denizens.” This is an explicit rejection of the urban and artificial. She regards each “rock and tree as old remembered friends,” and lauds the sparkling river and soft grass. Finally, consider abolitionist and suffragist Mattie Griffith Browne’s “The Dying Girl,” prominently positioned as the first poem in her 1853 collection. This dying girl laments she will not be buried at her family home but rather in “this cold, strange land.” No further explanation is given.

Straley argues that the Victorians linked death and childhood (death is returning home and becoming a child again, returning to a state of innocence), an idea existing alongside a connection between death and the erotic (see Romanska), and between death and woman (both seen as Other, outside the norm, in male-centric societies). But the dying girl’s rejection of her current place and the craving to return to nature could reflect attitudes toward the increasing urbanization and industrialization of nineteenth-century America — in other words, these are works of the Romantic movement (think Thoreau, Emerson, Bryant). Cities grew rapidly in the early 1800s and they were not always pleasant, nor was industrial labor. Philadelphia, Boston, and New York saw large influxes of rural migrants — these are the cities with the magazines and newspapers that received and published most of the dying girl stories I surveyed. While the idea that these works contain expressions of yearning for old ways of life is interesting, it is a bit too interpretive for my tastes (like some elements of the sexual analysis) and could not carry a lengthy thesis.

Similarly, my suggestion that the dying girl archetype began to die out with the transition from Americans dying in the home to dying in the hospital in the late nineteenth century (the good Christian death was supposed to happen in the home, the hospital is a less sentimental setting, you may be less likely to be surrounded by family at the end, etc.) would be difficult to prove and quick to discuss.

It is time, then, to find a new topic.


When Did Jesus Finally Get the Name Jesus?

In my massive article The Bible is Rife With Contradictions and Changes, I introduced the fact that bible stories and verses have been changed over the centuries, at times significantly, with the following:

Christians don’t want to believe that biblical translations over time altered original stories, but one small way they obviously did was by giving characters altered names. Jesus did not consort with John and James. They were in the Middle East, not an English pub. Instead, Yeshua (ישוע) consorted with Yohhanan (יוחנן) and Ya’akov (יעקב). Hebrew and Aramaic names were translated into Greek and later into English (and other tongues), resulting in names of different pronunciation than were actually used. Mattityahu became Matthaios and finally Matthew. (No, English speakers did not independently have a name like “John” and then “translated” Yohhanan [Hebrew] or Ioannes [Greek] to the pre-existing John, as if there was some magical lingual match or a “Hey, this name sounds a bit like one of ours” situation! Study the etymology of these names. The only reason John existed in English is because over centuries the name Yohhanan, thanks to the bible, spread beyond Palestine, through other parts of Europe, and finally to the English-speaking world, changing along the way.) If something as simple as names and their pronunciations could change from actual people to written text, and then translation to translation, could other things have changed, too?

This is one of those things that is right in front of your face as a devout Christian, which I was until about 12 years ago, but you somehow never notice. Of course the bible has changed over time! To me, all this speaks to how blindly, how absolutely, we believe the bible to be true — if it says his name was John, his name was John — and how little the gears of critical thinking will turn under the stunting influence of religion.

But when exactly did “Yeshua” evolve into “Jesus”? The authors of the New Testament, written in Greek in the latter half of the first century A.D., took the Hebrew Yeshua (ישוע) and made it Iesous (Ἰησοῦς), pronounced EE-ay-soos. A change was necessary because the Greek alphabet lacked the “sh” sound; further, Greek male names ended with an “s” sound, so “ς” was added. A few hundred years later, in the fourth century, Latin translations were authored. The Latin Iesus (IESVS) was pronounced similarly to the Greek name. Overall, the Greco-Roman pronunciation ruled for some 1,500 years, a much longer life than the modern version (for now at least). However, one should note that by the 12th century, the spelling “Jesus” was used alongside “Iesus,” but they were both pronounced like the latter. See, the letter “J” began as a fancy, elongated “I” in the Middle Ages, and was applied to all kinds of words and names. Not until the 1500s did J more and more come to sound like the letter we know and love. The modern pronunciation of Jesus began and grew from there, though it took a century or two to become standard. The letter J made the jump from Latin to English in the 1600s. Some readers will recall that Shakespeare did not use the letter J (sorry, Juliet), since it was not part of the English alphabet in his day, and the 1611 King James Bible spoke of Iesus — the 1629 revision spoke of Jesus.

This is not intended to be framed as breaking news. Many know that Yeshua was this figure’s actual name, that it was a common name at the time, that “Christ” isn’t Jesus’ last name, and so on. Plenty of Christian sites will walk you through the transformation of Yeshua. Further, the evolution of a character’s name across time and languages doesn’t automatically make stories about him untrue (we know the gospels are probably fictional for many other reasons). But it is suggestive. Again, if character names can be altered by later human beings, why not the stories the characters are in? Why not whole verses? See the article cited in the first sentence of this piece for examples.

Overall, I find it quite striking to examine the etymology of “Jesus.” More tongue in cheek, there are thoughts of Darwin, with this slow evolution and a common ancestor of splintered descendants — Yeshua became both Jesus and Joshua. It’s interesting to me, for some reason, that before 500 years ago, no one had ever said the name Jesus, not how we say it; were you to time travel, English-speaking Christians wouldn’t be fully sure who you were talking about. Most importantly, there are the human fingerprints all over Jesus’ name, in the same way they are all over the tales about him. What human beings believe in, who they worship, the name they cry out to — these things can easily be man-made constructions. Why do we call him Jesus? Because of a limiting Greek alphabet, a Greek tradition regarding masculine names, and a bunch of medieval scribes who wanted to write fancy.


Announcing My 3rd Book

My latest book is now available for purchase! It is a bit different from my prior works. It is entitled Becoming Missouri State: Conversations on the Great Name Change Battle.

Missouri State University was known as Southwest Missouri State until 2005. The Bears fought for the name Missouri State for 25 years, overcoming fierce opposition from the University of Missouri and Columbia legislators, who acted like “the sky would fall” if Southwest succeeded, to quote a Mizzou graduate. This is that story.

Becoming Missouri State can be found on Amazon here. I do hope you enjoy it.
