On the Spring-Stone Debate

While debates over semantics and historical interpretation rarely yield a decisive victor, in the lively clash between historians David Spring and Lawrence Stone over social mobility into Britain’s landed elite, Spring presented the stronger case. The mid-1980s exchange centered on two questions: how to define “open” when asking how open the upper echelon was to newcomers between 1540 and 1880, and, most importantly, how open it was to newcomers from the business world. On both counts, Spring offered the more compelling view of how one should read the historical evidence and data Stone collected in his work An Open Elite?: namely, that it was reasonable to call the landed elite open to members of lower strata, including business leaders.

The debate quickly blurred the line between the two questions. In his review of An Open Elite?, Spring noted that Stone showed growth in the elite families from 1540 to 1879: beginning with forty, they saw 480 newcomers join them, though not all permanently. Further, “Stone shows that regularly one-fifth of elite families were newcomers.”[1] In his reply, Stone declined to explore the “openness” of a twenty percent entry rate because it was, he insisted, irrelevant to his purpose: he was interested only in the entry of businessmen such as merchants, speculators, financiers, and manufacturers, who did not come from the gentry, the relatively well-off stratum knocking at the gate of the landed elite. Spring “failed to distinguish between openness to new men, almost all from genteel families, who made a fortune in the law, the army, the administration or politics…and openness to access by successful men of business, mostly of low social origins.”[2]

True, Stone made clear who and what he was examining in An Open Elite?: the “self-made men,” the “upward mobility by successful men of business,” and so on, but he leaned into, rather than brushed aside or contradicted, the idea of general social immobility.[3] Observe, for instance, the positioning of: “When analysed with care…the actual volume of social mobility has turned out to be far less than might have been expected. Moreover, those who did move up were rarely successful men of business.”[4] The notion of the landed elite being closed off in general was presented first, followed by the specific concern about businessmen. Stone went beyond business many times (for instance: “the degree of mere gentry penetration up into the elite was far smaller than the earlier calculations would indicate”[5]), positing that the landed elite was closed not only to businessmen but to newcomers generally, which makes his protestations against Spring rather disingenuous. Stone insisted to Spring that an open elite specifically meant, to historians and economists, a ruling class open to businessmen, not open to all; but Stone himself opened the door to the question of whether the landed elite was accessible to everyone by answering nay in his book. The question was therefore fair game in the debate, and Spring was there to provide the more convincing answer. A group in which twenty percent of members were newcomers from below could, to most reasonable persons, be described as relatively open. Even more so with the sons of newcomers added in: the landed elite was typically one-third newcomers and sons of newcomers, as Spring pointed out. It should be noted, though, that both scholars highlighted the challenge of using quantitative data to answer such historical questions. The collection and publication of such numbers is highly important, but it hardly ends the discussion; the question of openness persists, and any answer is inherently subjective.

However, it was on the second point of contention that Spring proved most perceptive. He pointed out that while the gentry accounted for 181 entrants into the landed elite during the observed centuries, those involved in business were not far behind, with 157, according to Stone’s data. This dwarfed the seventy-two from politics and seventy from the law. As Spring wrote, Stone’s quantitative tables conflicted with his text. Stone wrote in An Open Elite? that “most of the newcomers were rising parish gentry or office-holders or lawyers, men from backgrounds not too dissimilar to those of the existing county elite. Only a small handful of very rich merchants succeeded in buying their way into the elite…”[6] Clearly, even with different backgrounds, businessmen were in fact more successful at entering the landed elite than politicians and lawyers in the three counties Stone studied. A phrase appearing a few lines further down in the book made far more sense in light of the data: businessmen comprised “only a third of all purchasers…”[7] The use of “only” was perhaps rather biased, but, more significantly, one-third aligned not with the idea of a “small handful” but with 157 new entrants: roughly a third business entrants, a bit more than a third gentry, and a bit less than a third lawyers, politicians, and so on. Spring could have stressed the absurdity, in this context, of the phrase “only a third,” but he was sure to highlight the statistic in his rejoinder, where he drove home the basic facts of Stone’s findings and reiterated that the landed elite was about as open to businessmen as to others. Here is where quantitative data truly shines in history, for you can compare numbers against each other. Whether a single number or percentage is big or small is a messy, subjective question, but whether one number is larger than another is not, and that comparison provides clarity on issues like whether businessmen had some special difficulty accessing Britain’s landed elite.
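To make the comparison concrete, here is a minimal arithmetic sketch (in Python) using the entrant counts reported above from Stone’s data; treating these four categories as the complete breakdown of newcomers is an assumption made purely for illustration.

    # Entrants into the landed elite, 1540-1879, per Stone's figures as cited above.
    # Treating these four categories as the complete breakdown is an illustrative
    # assumption, not Stone's own presentation.
    entrants = {"gentry": 181, "business": 157, "politics": 72, "law": 70}

    total = sum(entrants.values())  # 480, consistent with the newcomers noted earlier
    for group, count in entrants.items():
        share = count / total
        print(f"{group:<9} {count:>3}  ({share:.0%} of these entrants)")

On these figures, business entrants come out to roughly a third of the total, gentry to slightly more, and law and politics combined to slightly less, which is precisely the pattern Spring emphasized.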

Stone failed to respond directly to this point, a key moment that weakened his case. He instead sidetracked into issues concerning the permanence of newcomers and by-county versus global perspectives on the data, areas he had explored earlier in his response and now awkwardly grafted onto Spring’s latest argument, leaving the reader largely to infer what was being implied from Stone’s earlier comments on those issues. He noted that only twenty-five of the 157 businessmen came from the two counties distant from London, seemingly implying that Hertfordshire, the London-area county, had tipped the scales: merchants and others were not as likely to rise into the landed elite in more rural areas. What relevance that had is an open question. It seemed more a truism than an argument against Spring’s point, as London was a center for business, and so that result was perhaps to be expected. Regardless, he did not elaborate. The adjacent implication was that Spring was again seeing “everything from a global point of view which has no meaning in reality, and nothing from the point of view of the individual counties.”[8] In the debate, Stone often cautioned that counties should be examined individually, as they could be radically distinct; one should not simply look at the aggregated data. But the inherent problem with this attempted rebuttal was that Stone was using the global figures to make his overall case. He took three counties and lifted them up to represent a relatively closed elite in Britain as a whole. It would not do to now brush aside one county, or lean heavily on another, to bolster an argument. Spring, in a footnote, wrote something similar, urging Stone to avoid “making generalizations on the basis of one county. [Your] three counties were chosen as together a sample of the nation.”[9] To imply, as Stone did, that London could be ignored as some kind of anomaly contradicted his entire project.

Stone’s dodge into the permanence of entrants was likewise not a serious response to Spring’s observation that business-oriented newcomers nearly rivaled those from the gentry and far outpaced lawyers and politicians. He wrote that “of the 132 business purchasers in Hertfordshire, only 68 settled in for more than a generation…”[10] The transient nature of newcomers arose elsewhere in the debate as well. Here Stone moved the goalposts slightly: instead of counting mere entrants into the landed elite, look at who managed to remain. Only “4% out of 2246 owners” in the three counties over these 340 years were permanent newcomers from the business world.[11] The implication was that these numbers were both insignificant and peculiar to businessmen. Yet footnote five, the one attached to that statistic, undercut Stone’s point. There he admitted that Spring had correctly observed that politicians and office-holders were forced to sell their county seats, their magnificent mansions, and abandon the landed elite, as defined by Stone, at nearly the same rate as businessmen, at least in Hertfordshire. Indeed, it was odd Stone crafted this response at all, given Spring’s earlier dismantling of the issue. The significance of Stone’s rebuttal was therefore unclear. If only sixty-eight businessmen lasted more than a generation, how did that compare to lawyers, office-holders, and the gentry? Likewise, if permanent newcomers from the business world accounted for only four percent of owners, what percentages did the other groups account for? Again, Stone did not elaborate. But from his admission and from what Spring calculated, it seems unlikely Stone’s numbers, when put in context, would help his case. Even more than the aggregate-versus-county comment, this was a non-answer.

The debate would conclude with a non-answer as well. There was of course more to the discussion (it should be noted that Stone mounted an impressive defense of his selection of counties and of his inability to include more, in response to Spring’s questioning of how representative they truly were), but Spring clearly showed, using Stone’s own evidence, that the landed elite was what a reasonable person could call open to outsiders in general and to businessmen in particular, contradicting Stone’s positions on both in An Open Elite? Stone may have recognized this, given the paucity of counterpoints in his “Non-Rebuttal.” Spring, in Stone’s view, did “fail altogether to deal in specific details with the arguments used in my Reply,” and therefore “there is nothing to rebut.”[12] While it is true that Spring, in his rejoinder, did not address all of Stone’s points, he focused tightly on the main ideas discussed in the debate and in this paper. So, as further evidence that Spring constructed the better case, Stone declined to return to Spring’s specific and central arguments about his own data. He pointed instead to other research that more generally supported the idea of a closed elite. Stone may have issued a “non-rebuttal” not because Spring had ignored various points, but because he had stuck to the main ones, and there was little to be said in response.



[1] Eileen Spring and David Spring, “The English Landed Elite, 1540-1879: A Review,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 2 (Summer 1985): 152.

[2] Lawrence Stone, “Spring Back,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 2 (Summer 1985): 168.

[3] Lawrence Stone, An Open Elite? England 1540-1880, abridged edition (Oxford: Oxford University Press, 1986), 3-4.

[4] Ibid., 283.

[5] Ibid., 130.

[6] Ibid., 283.

[7] Ibid.

[8] Stone, “Spring Back,” 169.

[9] Spring, “A Review,” 154.

[10] Stone, “Spring Back,” 171.

[11] Ibid.

[12] Lawrence Stone, “A Non-Rebuttal,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 3 (Autumn 1985): 396. For Spring’s rejoinder, see Eileen Spring and David Spring, “The English Landed Elite, 1540-1879: A Rejoinder,” Albion: A Quarterly Journal Concerned with British Studies 17, no. 3 (Autumn 1985): 393-396.

Did Evolution Make It Difficult for Humans to Understand Evolution?

It’s well known that people are dreadful at comprehending and visualizing large numbers, such as a million or a billion. This is understandable in terms of our development as a species: grasping the small numbers of, say, your clan compared to a rival one you’re about to fight, or gauging amounts of resources like food and game in particular places, would aid survival (pace George Dvorsky). But there was little evolutionary reason to adeptly process a million of something, to intuitively know the difference between a million and a billion as easily as we do four versus six. A two-second difference, for instance, we get; but few intuitively sense that a million seconds is about 11 days and a billion seconds about 31 years (making for widespread shock on social media).
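For anyone who wants to check that arithmetic, here is a minimal back-of-the-envelope sketch in Python (assuming a 365.25-day year):

    # Back-of-the-envelope check: how long are a million and a billion seconds?
    SECONDS_PER_DAY = 60 * 60 * 24               # 86,400 seconds in a day
    SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25  # average year length assumed here

    million = 1_000_000
    billion = 1_000_000_000

    print(f"A million seconds is about {million / SECONDS_PER_DAY:.1f} days")    # ~11.6 days
    print(f"A billion seconds is about {billion / SECONDS_PER_YEAR:.1f} years")  # ~31.7 years

The gap between those two answers, days versus decades, is exactly the kind of difference our intuitions gloss over.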

As anthropologist Caleb Everett, who pointed out that a word for “million” did not even appear until the fourteenth century, put it: “It makes sense that we as a species would evolve capacities that are naturally good at discriminating small quantities and naturally poor at discriminating large quantities.”

Evolution, therefore, made it difficult to understand evolution, which deals with slight changes to species over vast periods of time, resulting in dramatic differences (see Yes, Evolution Has Been Proven). It took 16 million years for Canthumeryx, similar in look and size to a deer, to evolve into, among other new species, the 18-foot-tall giraffe. It took 250 million years for the first land creatures to finally have descendants that could fly. It stands to reason that such statements seem incredible to many people not only because of old religious tales, which people embrace though the evidence does not support them, but also because it’s hard to grasp how much time that actually is. Perhaps it would be easier to comprehend and visualize how small genetic changes between parents and offspring could add up, eventually producing descendants that look nothing like their ancient ancestors, if we could better comprehend and visualize the timeframes, the big numbers, in which evolution operates. Sixteen million years is a long time: long enough.

This is hardly the first time it’s been suggested that its massive timescales make evolution tough to envision and accept, but it’s interesting to think about how this fact connects to our own evolutionary history and survival needs.

Just one of those wonderful oddities of life.


Suicide is (Often?) Immoral

Suicide as an immoral act is typically a viewpoint of the religious — it’s a sin against God, “thou shalt not kill,” and so on. For those free of religion, and of course some who aren’t, ethics are commonly based on what does harm to others, not yourself or deities — under this framework, the conclusion that suicide is immoral in many circumstances is difficult to avoid.

A sensible ethical philosophy considers physical harm and psychological harm. These harms can be actual (known consequences) or potential (possible or unknown consequences). The actual harm of, say, shooting a stranger in the heart is that person’s suffering and death. The potential harm on top of that is wide-ranging: if the stranger had kids, it could be their emotional agony, for instance. The shooter simply would not know. Most suicides entail both kinds of harm.

First, most suicides will bring massive psychological harm, lasting many years, to family and friends. Were I to commit suicide, this would be a known consequence, known to me beforehand. Given my personal ethics, aligning with those described above, the act would then necessarily be unethical, would it not? This seems to hold true, in my view, even given my lifelong depression (I am no stranger to visualizations of self-termination and its aftermath, though fortunately with more morbid curiosity than seriousness to date; medication is highly useful and recommended). One can suffer and, by finding relief in nonexistence, cause suffering. As a saying goes, “Suicide doesn’t end the pain, it simply passes it to someone else.” Perhaps the more intense my mental suffering, the less unethical the act (more on this in a moment), but given that the act will cause serious pain to others whether my suffering be mild or extreme, it appears from the outset to be immoral to some degree.

Second, there are the potential harms, always trickier. Many unknowns could result from taking my own life. The potential harms could be more extreme psychological harms: a family member driven to severe depression or madness or alcoholism. (In reality, psychological harms are physical harms, since consciousness is a byproduct of brain matter, and vice versa, so stress on one affects the other.) But they could be physical in a more direct sense as well. Suicide, we know, is contagious. Taking my own life could inspire others to do the same. Not only could I be responsible for contributing, even indirectly, to the death of another person, I would also have a hand in all the actual and potential harms that result from his or her death! It’s a growing moral burden.

Of course, all ethics are situational. This is accepted by just about everyone; it’s why killing in self-defense seems less wrong than killing in cold blood, or why completely accidental killings seem less unethical than purposeful ones. Such acts can even seem ethically neutral. So there will always be circumstances that change the moral calculus. One questions whether old age alone is enough (a parent or grandparent taking their own life would surely be about as traumatic as anyone else doing so), but intense suffering from age or disease could make the act less unethical, in the same way deeper and deeper levels of depression may do the same. Again, the claim here is only less unethical. Can the act reach an ethically neutral place? The key may simply be the perceptions and emotions of others. Perhaps with worsening disease, decay, or depression, a person’s suicide would be less painful to friends and family. It would be hard to lose someone in that way, but, as we often hear when someone passes away of natural but terrible causes, “She’s not suffering anymore.” Perhaps at some point the scale is tipped, with too much agony for the individual weighing down one side and too much understanding from friends and family lifting up the other. One can certainly picture this (no one wants their loved ones to suffer, and the end of their suffering can be a relief as well as a sorrow, constituting a reduction in actual harm), and it is no doubt the reality in various cases. This writing simply posits that not all suicides will fall into that category (many are unexpected), and, while a distinguishing line may often be impossible to see or determine, the suicides outside it are morally questionable due to the ensuing harm.

If all this is nonsense, and such sympathetic understanding of intense suffering brings no less harm to loved ones, then we’re in trouble, for how else can the act escape that immoral place, at least for those operating under the moral framework that causing harm is wrong?

It should also be noted that the rare individuals without any real friends or family seem to have less moral culpability here. And perhaps admitted plans and assisted suicide diminish the immorality of the act, regardless of the extent of your suffering — if you tell your loved ones in advance you are leaving, if they are there by your side in the hospital to say goodbye, isn’t that less traumatizing and painful than a sudden, unexpected event, with your body found cold in your apartment? In these cases, however, the potential harms, while some may be diminished in likelihood alongside the actual, still abound. A news report on your case could still inspire someone else to commit suicide. One simply cannot predict the future, all the effects of your cause.

As a final thought, it’s difficult not to see some contradiction in believing in suicide prevention, encouraging those you know or those you don’t not to end their lives, and believing suicide to be ethically neutral or permissible. If it’s ethically neutral, why bother? If you don’t want someone to commit suicide, it’s because you believe they have value, whether inherent or simply to others (whether one can have inherent value without a deity is for another day). And destroying that value, bringing all that pain to others or eliminating all of the individual’s potential positive experiences and interactions, is considered wrong, undesirable. Immorality and prevention go hand-in-hand. But with folks who are suffering we let go of prevention, even advocating for assisted suicide, because only in those cases do we begin to consider suicide ethically neutral or permissible.

In sum, one finds oneself believing that if causing harm to others is wrong, and suicide causes harm to others, suicide must in some general sense be wrong — but acknowledging that there must be specific cases and circumstances where suicide is less wrong, approaching ethical neutrality, or even breaking into it.


Expanding the Supreme Court is a Terrible Idea

Expanding the Supreme Court would be disastrous. We hardly want an arms race in which the party that controls Congress and the White House expands the Court to achieve a majority. It may feel good when the Democrats do it, but it won’t when it’s the Republicans’ turn. 

The problem with the Court is that the system of unwritten rules, of the “gentlemen’s agreement,” is completely breaking down. There have been expansions and nomination fights or shenanigans before in U.S. history, but generally when a justice died or retired a Senate controlled by Party A would grudgingly approve a new justice nominated by a president of Party B — because eventually the situation would be reversed, and you wanted and expected the other party to show you the same courtesy. It was reciprocal altruism. It all seemed fair enough, because apart from a strategic retirement, it was random luck — who knew when a justice would die? 

The age of unwritten rules is over. The political climate is far too polarized and hostile to allow functionality under such a system. When Antonin Scalia died, Obama should have been able to install Merrick Garland on the Court, but Mitch McConnell and the GOP Senate infamously wouldn’t even hold a vote, much less vote Garland down, for nearly 300 days. They simply delayed until a new Republican president could install Neil Gorsuch. Democrats attempted to block this appointment, as well as Kavanaugh (replacing the retiring Kennedy) and Barrett (replacing the late Ginsburg). The Democrats criticized the Barrett nomination for occurring too close to an election, mere weeks away, the same line the GOP had used with Garland, and conservatives no doubt saw the investigation into Kavanaugh as an obstructionist hit job akin to the Garland case. But it was entirely fair for Trump to replace Kennedy and Ginsburg, just as it would have been fair for Obama to replace Scalia with Garland. That’s how it’s supposed to work. But that’s history, and now, with Democrats moving forward on expansion, things are deteriorating further.

This change has been building for a couple of decades. Gorsuch, Kavanaugh, and Barrett received just four Democratic votes combined. The justices Obama was able to install, Kagan and Sotomayor, received 14 Republican votes. George W. Bush’s Alito and Roberts received 26 Democratic votes. Clinton’s Breyer and Ginsburg received 74 Republican votes. George H.W. Bush’s nominees, Souter and Thomas, won the votes of 57 Democrats. When Ronald Reagan nominated Kennedy, more Democrats voted yes than Republicans, 51-46! Reagan’s nominees (Kennedy, Scalia, Rehnquist, O’Connor) won 159 Democratic votes, versus 199 Republican. Times have certainly changed. Partisanship has poisoned the well, and obstruction and expansion are the result.

Some people defend the new normal, correctly noting the Constitution simply allows the president to nominate and the Senate to confirm or deny. Those are the written rules, so that’s all that matters. And that’s the problem, the systemic flaw. It’s why you can obstruct and expand and break everything, make it all inoperable. And with reciprocal altruism, fairness, and bipartisanship out the window, it’s not hard to imagine things getting worse. If a party could deny a vote on a nominee for the better part of a year (shrinking the Court to eight, one notices, which can be advantageous), could it do so longer? Could it delay for years, perhaps four or eight? Why not? There are no rules against it. Years of obstruction would become years of 4-4 votes on the Court, a completely neutered branch of government, checks and balances be damned. Or, if each party packs the Court when it’s in power, we’ll have an ever-growing Court, a major problem. A judiciary that automatically aligns with the party that also controls Congress and the White House is, again, a serious weakening of checks and balances. Democrats may want a stable, liberal Court around someday to strike down right-wing initiatives coming out of Congress and the Oval Office. True, an expanding Court will hurt and help the parties equally, and parties won’t always be able to expand, but for anyone who sees value in real checks on legislative and executive power, this is a poor idea. All the same can be said for obstruction.

Here is a better idea. The Constitution should be amended to reflect the new realities of American politics. This is to preserve functionality and meaningful checks and balances, though admittedly the only way to save the latter may be to undercut them in a smaller way elsewhere. The Court should be permanently set at nine justices, doing away with expansions. Election-year appointments should be codified as permissible. The selection of a new justice should pass to a single decision-making body: the president, the Senate, the House, or a popular vote by the citizenry. True, doing away with nomination by one body and confirmation by another itself abolishes a check on power, but this may be the only way to avoid the obstruction, the tied Court, the total gridlock until a new party wins the presidency. It may be a fair tradeoff, sacrificing a smaller check to preserve a more significant one. This change could also be accompanied by much-discussed term limits for justices, say 16, 20, or 24 years. So while only one body could appoint, the appointment would not last an extraordinary length of time.


Review: ‘The Language of God’

I recently read The Language of God. Every once in a while I read something from the other side of the religious or political divide, typically the popular books that arise in conversation. This one interested me because it was written by a serious scientist, geneticist Francis Collins, head of the Human Genome Project. I wanted to see how it would differ from others I read (Lewis, Strobel, Zacharias, McDowell, Little, Haught, and so forth).

You have to give Collins credit for his full embrace of the discoveries of human science. He includes a long, enthusiastic defense of evolution, dismantles the “irreducible complexity” myth, and cites science that is largely accurate (the glaring exception being his assertion that humans are the only creatures that help each other when there is no benefit or reward for doing so, an idea ethology has entirely blown up). He also dismisses Paley’s dreadful “Watchmaker” analogy, sternly warns against the equally unwise “God of the Gaps” argument (lack of scientific knowledge = evidence for God), stands against literal interpretations of the Bible, and (properly) discourages skeptics from claiming evolution literally disproves a higher power. Some of this differs from the other writers above, and is unexpected.

Unfortunately, Collins engages in many of the same practices the other authors do: unproven or even false premises that lead to the total collapse of an argument (there’s zero evidence that, deep down, all humans have the same ideas of right and wrong, if only we would listen to the “whisper” of the Judeo-Christian deity), argument by analogy, and other logical fallacies. Incredibly, he even uses the “God of the Gaps” argument, not even 20 pages before his serious warning against it (we don’t know what came before the Big Bang, what caused it, whether multiple universes exist, whether our one universe bangs and crunches ad infinitum…therefore God is real). The existence of existence is important to think about, and perhaps we do have a higher power to thank, but our lack of scientific knowledge isn’t “evidence for belief,” as the subtitle puts it. It’s “nonevidence” for belief. It’s “God of the Gaps.” The possibility of God being fictional remains as large as ever. Overall, Collins doesn’t carry his principles over very well, seeing the weakness of analogy, “God of the Gaps,” and literal biblical interpretations but using them anyway (it is possible Genesis has untruths, but of course not the gospels). Weird, contradictory stuff.

In the end, the gist of the book is “Here are amazing discoveries of science, but you can still believe in God and that humans are discovering God’s design.” Which is fine. While trust in science forces the abandonment of literal interpretations of ancient texts (first man from dirt, first woman from rib, birds on earth before land animals, etc.), faith and science living in harmony isn’t that hard. You say “God did it that way” and move on. Evolution was God’s plan, and so forth. That’s really all the chapters build toward: Part 2, the science-y part, has three chapters, and while the origins-of-the-universe chapter builds toward the “We don’t know, therefore God” argument, the life-on-Earth and human-genome chapters conclude with no argument at all, just the suggestion that “God did it that way.” I found this unsettling. In any case, “evidence for belief” wasn’t an accurate subtitle, as expected.

Finally, I was disappointed Collins didn’t dive deeper into his conversion to the faith, a subject that always interests me. He cites just one (poor) argument from C.S. Lewis that caused him to change his mind about everything, the right and wrong proposition mentioned above. I would have liked more of his story.
