Is Relative Morality More Dangerous Than Objective Morality?

“The fool says in his heart, ‘There is no God.’ They are corrupt, their deeds are vile; there is no one who does good.”

Psalm 14:1 neatly summarizes the anti-atheist stereotype held by many people around the world, and laid the foundation, thousands of years ago, for this modern Christian belief: it says so in the bible, thus it must be true. While some people of faith trust that the nonreligious are just as moral as they are, others believe atheism makes one more likely to commit unethical acts, or even that no one can be good without God.

Having already examined how deities are not necessary to explain morality nor to justify moral decisions, and having cleared up confusion concerning objective morality versus objective truth, it seems relevant to address the idea that relative morality (humans alone deciding what is right and wrong) is so much more dangerous than objective morality (right and wrong as allegedly dictated by God and outlined in holy books).

First we will look at theists’ “relative morality in practice” argument and then move on to the theoretical or philosophical question of which is preferable, relative or objective morality. However, let us be clear from the outset that consequences have no bearing on whether something is true or false. Christians hope everyone will believe in objective morality because otherwise we’ll all kill each other and civilization will burn. Naturally, we should instead believe something is true because there’s evidence for it, not because there would be dire consequences if we did not believe (the argumentum ad consequentiam fallacy). There is, of course, no actual evidence for objective morality.

The “in practice” argument often centers on the atrocities of Hitler, Stalin, and other mass killers. “These atheists were responsible for the worst genocides in human history,” the reasoning goes, thus any morality devoid of gods is prima facie dangerous.

This falls apart for several reasons.

First, one notes the personal views of the worst despots are sometimes misconstrued. Hitler repeatedly professed his Christianity in his books and speeches, often to explicitly justify oppressing the Jews; he also publicly criticized the “atheist movement” of the Bolsheviks. Privately, however, he made clear he was an enemy of Christianity, calling it an “absurdity” based on “lies” (Bormann, Hitler’s Table Talk). “The heaviest blow that ever struck humanity was the coming of Christianity,” he said, because it led to Bolshevism. “Both are inventions of the Jew.” Christianity would be “worn away” by science, as all “myths crumble.”

However, anti-Christian is not necessarily atheist. Joseph Goebbels wrote that while Hitler “hates” Christianity, “the Fuhrer is deeply religious” (Goebbels Diaries). Hitler said in private that

An educated man retains the sense of the mysteries of nature and bows before the unknowable. An uneducated man, on the other hand, runs the risk of going over to atheism (which is a return to the state of the animal) as soon as he perceives that the State, in sheer opportunism, is making use of false ideas in the matter of religion… (Bormann)

Hitler said to companions, “Christianity is the most insane thing that a human brain in its delusion has ever brought forth, a mockery of everything divine,” suggesting a belief in higher powers.

And while some of Hitler’s policies attacked the Catholic Church and German Christianity in general, only those who stood up to the Nazis, like some church leaders and Jehovah’s Witnesses, were in danger of extermination. And Hitler also persecuted atheists, banning most atheist groups, such as the German Freethinkers League. Again, fear of the link between atheism and Bolshevism was a factor.

With no real evidence Hitler was an atheist, what of Stalin?

The Soviet dictator’s case is more straightforward. He became an atheist as a youth, while studying to become a priest (also what a young Hitler wanted to do). “They are fooling us,” he said of his teachers. “There is no god” (Yaroslavsky, Landmarks in the Life of Stalin). “God’s not unjust, he doesn’t actually exist. We’ve been deceived” (Montefiore, Young Stalin). Later, he explained that “all religion is something opposite to science,” and oversaw “anti-religious propaganda” to eradicate “religious prejudices” (Pravda interview, September 15, 1927). Such efforts were meant to “convince the peasant of the nonexistence of God” (Stalin, “The Party’s Immediate Tasks in the Countryside” speech, October 22, 1924). As implied above, Communism in the Soviet Union typically embraced science and secularism.

Stalin thought religion was “opium for the people,” an exercise in “futility” that wrought “evil” (Hoxha, With Stalin). “The introduction of religious elements into socialism,” he wrote, “is unscientific and therefore harmful for the proletariat” (Stalin, “Party News,” August 2, 1909). He favored the “struggle” against religion. He also said he did not believe in fate, calling it a “relic of mythology” (Stalin, interview with Emil Ludwig, December 13, 1931). In terms of policy, Stalin shifted from a relative tolerance of religious freedom to a reign of terror against the Russian Orthodox Church and other faith organizations in the 1920s and 1930s. Countless priests, monks, and nuns were exterminated (100,000 in 1937 and 1938 alone; Yakovlev, A Century of Violence in Soviet Russia).

We could go on, digging into the views of other tyrants. But moving to the second point: can it be reasoned that, all other factors remaining the same, Stalin would not have harmed anyone had he believed in God? That Hitler would not have, had he been a devout Christian? It is logical to posit that Stalin’s disbelief was a contributing factor to his holocaust against his own people, even the primary factor in his massacres of religious leaders. But considering what believers in God (and Christ) have been capable of throughout history, it is difficult to conclude piety would have stopped Hitler’s war, the Holocaust of Jews, Roma, and homosexuals, or Stalin’s mass murder of political enemies, kulaks (wealthy peasants), and ethnic minorities (such as the Poles). Would faith really have cured the imperial ambitions, extreme racism, fanatical patriotism, authoritarianism, lack of empathy, and power lust of these men? This is the problem with arguing that atheism was anything more than a contributing factor, at best, to some of the worst crimes of the 20th century. There are countless other examples of horrific violence committed by men who were unquestionably religious yet exhibited the same evil, and whose actions had a much stronger connection to their faiths than Stalin’s or Hitler’s actions had to their more secular views (that is, faith was the primary factor, not merely a contributing one).

The crimes of the sincerely religious are vast and unspeakable, stretching not merely a few decades but rather millennia. If we could step back and witness the graveyard of all who were killed in the name of God, what would that look like? How many millions have been oppressed, tortured, maimed, and killed because “God said so”? To please the gods? To spread the faith?

Look to the atrocities that no thinking person believes were divorced from faith. The 700-year Inquisition, the torture and mass murder of anyone who questioned Christian doctrine in Europe or refused to convert in the Americas and parts of Asia. The 400-year witch hunts of Europe and North America, the execution of women supposedly in league with and copulating with the devil. The 1,900-year campaign of terror against the Jews in Europe, the “Christ-killers.” The Crusades, bloody Christian-Muslim wars for control of the Holy Land that spanned two centuries and killed millions. The European Wars of Religion during the Reformation that lasted a century (Thirty Years’ War, Eighty Years’ War, French Wars of Religion, etc.), killing millions. And these are just the major wars and crimes against humanity of Christians from Europe! (See “When Christianity Was as Violent as Islam.”)

We could look at Arabian Islam, from the bloody conquest to establish a caliphate across the Middle East, North Africa, and Spain to the murder of infidels, from the Shia-Sunni wars to the terrorist attacks of the modern era. We could examine the appalling executions and genocide conducted by the Hebrews, according to their holy book. We could study the human sacrifices to the gods in South American and other societies. We could investigate today’s Christian-Muslim wars and the destruction of accused witches in sub-Saharan Africa. The scope of all this is vast, encompassing all people who believed in a higher power, in all cultures, throughout all human history. The crimes of 20th century tyrants were horrific, but is there really a strong case that they could not have occurred on just as large a scale had the tyrants been more religious?

You will notice that all these atrocities were more closely connected to the faiths of the perpetrators than the atrocities of Hitler and Stalin were to their anti-Christian or secular views. The Jews were not killed in the name of atheism. Hitler’s attempt to conquer Europe was not an anti-Christian campaign. Stalin wanted to destroy religion, but few would suggest that was his primary goal, ahead of eradicating capitalism, establishing Communism, and modernizing Russia into a world power. Secular beliefs may have contributed to atrocities, but unlike these other examples they were not the primary factors. If belief or non-belief need only be a contributing factor to be blamed for crimes, we could also look at religious persons who committed crimes against humanity that weren’t closely motivated by or connected to faith.

Doing so makes faith guilty of any crime committed by a person of faith. And why not? If the False Cause Fallacy can be applied to atheists it can just as easily be applied to theists! (Same with the Poisoning the Well Fallacy: these atheists were evil, so atheism is evil; these people of faith were evil, so faith is evil.)

The Ottomans committed genocide against the Armenians from 1915 to 1922, killing 1.5 million people, 75% of the Armenian population. Prime Minister Mehmed Talaat was its principal architect, and because he was a Muslim it must have been a belief in a higher power that enabled him to carry out this act. The Rwandan genocide of 1994 was not a religious conflict, but some Catholic faith leaders participated — a crime the Pope apologized for this year. Their belief in a god must be credited. Radovan Karadžić, president of Republika Srpska and a Serb, orchestrated the genocide of Muslims and Croats in 1995, during the Bosnian War. He saw his deeds as part of a “holy war” between Christianity and Islam. Would he have refrained from mass murder had he been an atheist? Would the old butcher Christopher Columbus? Would King Leopold II of Belgium? This Catholic monarch was responsible for the deaths of perhaps 10 million people in the Congo. “I die in the Catholic religion,” he wrote in his last testament, “and I ask pardon for the faults I have or may have committed.” This game can be played with anyone in human history, from the Christian kings, queens, traders, and owners who enslaved 12-20 million Africans (which killed millions; see Harman, A People’s History of the World) to the Christian presidents of the United States who intentionally bombed millions of civilians in Vietnam.

One could make the embarrassing argument that those who committed such evils were not actually believers in God (a “secret atheist theory”). Yes, it is difficult to know an historical figure’s true thoughts. But one could just as easily pretend Stalin and others were secretly believers. We have to use the evidence we have.

So you can see how the legitimacy of causal connections is highly important. One who doesn’t care about the strength of such connections could easily attribute Hitler’s crimes to his belief in a higher power! (One could then argue Hitler’s belief was far more dangerous than Stalin’s atheism, as Hitler oversaw the deaths of 11 million noncombatants, versus Stalin’s 6 million — in the decades since the fall of the Soviet Union, researchers have determined the death toll estimate typically associated with Stalin, 20 million, is grossly inaccurate.) It is illogical to blame secularism for being anything more than a contributing factor to Stalin’s and Hitler’s actions in the same way it is illogical to blame faith for being anything more than a contributing factor to the Armenian, Congolese, or other genocides committed by religious persons. There are many events in history with faith as a primary cause, like the Inquisition, but it cannot be said the Holocaust and the Russian purges were primarily caused by atheism.

Third and finally, one could refute the notion that atheists are worse people using scientific research. Children from nonreligious homes were actually found in a 2015 study to be more generous than those from religious homes. A “Good Samaritan” study found religiosity does not determine how likely people are to lend a helping hand. A study on cheating found that faith does not make one less likely to cheat. A 2014 study showed secular and religious people commit immoral acts equally. Some atheists trumpet the fact that they are underrepresented in U.S. prisons, but they shouldn’t, because American atheists are predominantly educated, middle- to upper-class whites, a group that is itself underrepresented in prisons. Similarly, some point out that nations like the United Kingdom, the Netherlands, Denmark, Sweden, the Czech Republic, Japan, and others have some of the highest rates of atheism and lowest rates of crime in the world, but this should be avoided as a False Cause Fallacy as well. These nations are likewise disproportionately wealthy and educated — low crime rates and atheism are byproducts; they likely do not have a cause-effect relationship (but at least those worried about society falling into chaos and crime as atheism spreads can rest easy).

So is the belief in relative, godless morality so much more dangerous than the belief in objective, God-given morality? In practice, it appears not. The capacity for horrific actions in secular and religious people seems equivalent. Same with kindness and other positive actions.

From a theoretical standpoint, however, there are two facts that make relative morality better. They help explain why atheists are not worse people than believers.

First, objective morality has a glaring flaw: it cannot be known. Just as one cannot prove the existence of the Christian deity, there is no way to definitively prove that Christ-ordained right and wrong exists or is the objective standard humanity is meant to follow. Why not Islamic right and wrong? Because one can’t prove which set of ethics is actually objective and god-decreed, each simply becomes one option among many and thus we have to choose among them (it’s quite relative!). Even if you believe in objective morality, there’s no way to actually know what it is. The person of (any) faith thinks he knows but might easily be wrong. “I’ve looked at her with lust in my heart, I’ve done wrong.” Well, perhaps not. It could be the higher power that actually exists doesn’t believe in thought crimes. Saying we should try to follow an objective morality, offered by a particular religion, is not particularly compelling. One cannot know for certain that a religion is true, nor that objective morality is true, nor what it says. Even within religions, the objective standards cannot be fully known — you may know not to kill, but the bible offers no guidance on many ethical issues, such as the age of consent for sex (probably a good thing, considering when it was written). Relative ethics can of course be fully known because we create them for ourselves — and we all know relative morality exists because different individuals, societies, and time periods have different values.

Second, relativity allows us the freedom to make our ethics better. I understand why people of faith see a risk in humans deciding what’s right and wrong, but religion clearly isn’t any better in terms of danger to others (if you ask me why, it’s because religion is man-made, so it all makes sense). We have gods saying all sorts of things are right: killing homosexuals, those who engage in extramarital sex, and people who work on the Sabbath (Old Testament); enslaving people and oppressing women (New Testament); waging Jihad on nonbelievers and cutting off body parts for crimes (Qur’an). Well, perhaps humans would like to base what’s wrong on what actually causes harm to others, not what insults a deity, which makes all that killing and maiming wrong and makes things like working on the Sabbath, homosexuality, and sex outside marriage (and porn, masturbation, smoking weed, etc.) ethically permissible. We have the ability to continue to improve our ethics to a point where fewer people get killed for nonviolent “crimes.” Relative morality allows us to move past the absurdities and barbarism of ancient desert tribes. We’ve been very successful at this.

Yes, it also allows us to return to barbarism, with no thoughts of angry higher beings to stop us. Faith-based appeals can prevent barbarism too (“I can’t kill, I’ll go to hell”). But at least we’re free to move in a more positive direction if we choose. Religion doesn’t really offer that. God’s word is perfect and is not to be altered or deviated from; it has been set for thousands of years. Being paralyzed by religious ethics keeps us stuck in the dark ages, from oppressive Islamic societies in the Middle East and Asia to the lingering hysteria in the United States over homosexuality, which is a very natural trait of the human species and other lifeforms. Progress on such matters requires putting aside ancient faith-based ideas of right and wrong (Americans were no longer allowed to execute homosexuals after 1786). The more humanity does so, the safer and freer each of us becomes.


Kentucky Judge Refuses to Marry Atheists

In July 2016, Kentucky judge Hollis Alexander refused to wed atheists Mandy Heath and her fiancé Jon because they requested any mention of God be excluded from the ceremony.

“I will be unable to perform your wedding ceremony,” Alexander told them. “I include God in my ceremonies and I won’t do one without him.”

Alexander, being the only judge in Trigg County able to perform a wedding ceremony, advised Heath to seek out a judge in another county. The Freedom From Religion Foundation, a leader in suits against violations of constitutional church-state separation, sent the judge a letter outlining the laws he chose to break, adding:

There is no requirement that such ceremonies be religious (any such requirement would be unconstitutional). Ms. Heath sought you out as the only secular alternative available to her under Kentucky law.

As a government employee, you have a constitutional obligation to remain neutral on religious matters while acting in your official capacity. You have no right to impose your personal religious beliefs on people seeking to be married. Governments in this nation, including the Commonwealth of Kentucky, are secular. They do not have the power to impose religion on citizens. The bottom line is that by law, there must be a secular option for people seeking to get married. In Trigg County, you are that secular option.

There is no word yet if a lawsuit will follow.

Kentucky is the state where Kim Davis worked as a county clerk; she refused to issue marriage licenses to gay couples, citing her Christian faith. Alexander also refuses to conduct weddings for LGBT Americans.


Atheists Sue Kansas City Over Payment to Baptists

On July 22, 2016, the American Atheists group and two Kansas City residents sued Kansas City Mayor Sly James and the city government for designating $65,000 in taxpayer funds for Modest Miles Ministries’ National Baptist Convention, taking place at Bartle Hall in early September.

Missouri’s Constitution forbids using taxpayer money to fund religious events and institutions: “No money shall ever be taken from the public treasury, directly or indirectly, in aid of any church, sect, or denomination of religion.” The lawsuit aims to prevent the city from handing over the funds.

“The National Baptist Convention is inherently religious — and it is clear under Missouri law and the First Amendment that Missouri taxpayers should not be paying for it,” argues Amanda Knief, legal director of American Atheists. The group’s website also notes:

Modest Miles Ministries claims in emails to the City that the funds will be used for transportation to and from the convention, making the funding purposes “secular.” That would mean, according to Modest Miles Ministries’ funding application, about 25% of the entire budget of the convention — $65,000 — is being spent on shuttles to and from the convention.

The $65,000 grant for the Baptist Convention was the second largest grant that the City gave in 2016. This was the fourth time the City has approved funding the National Baptist Convention: in 1998, the City approved $100,000 (about 32% of the convention’s total budget); in 2003, the City approved $142,000 (about 42% of the convention’s total budget); and in 2010, the City approved $77,585 (about 27% of the convention’s total budget).
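For readers who want to check the quoted figures, here is a minimal Python sketch (not part of the article, the lawsuit, or American Atheists’ materials) that back-calculates the total convention budget implied by each grant amount and the percentage quoted above. The dollar figures and percentages come from the passage; the rounding is mine.

```python
# Hypothetical back-of-the-envelope check (illustrative only): if a grant of
# $65,000 is "about 25%" of the convention's budget, the implied total budget
# is 65,000 / 0.25 = $260,000. The same arithmetic applies to each year cited above.
grants = {
    1998: (100_000, 0.32),  # (grant in dollars, quoted share of total budget)
    2003: (142_000, 0.42),
    2010: (77_585, 0.27),
    2016: (65_000, 0.25),
}

for year, (grant, share) in sorted(grants.items()):
    implied_total = grant / share
    print(f"{year}: ${grant:,} at {share:.0%} implies a total budget of roughly ${implied_total:,.0f}")
```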

The city government refused to comment to The Kansas City Star. But the paper says, “City spokesman Chris Hernandez pointed out no contract has been signed yet to spend the money. If and when that does happen, Hernandez said, the contract has language spelling out that the money would be used for secular purposes.”

The lawsuit says the Kansas City plaintiffs have a “right to be free from compelled support of religious institutions and activities,” and cites another Missouri case, “Trinity Lutheran Church of Columbia, Inc. v. Pauley, upheld by the Eighth Circuit in 2015, in which this court refused to allow public money to be spent on a Lutheran day care.”

The contract between the city and Modest Miles Ministries is due this month.


Foundations of Faith: A Comparative Analysis of Kohlberg, Erikson, and Fowler

Developmental psychologist James W. Fowler (b. 1940) posited in 1981 that the way in which men and women understand faith is determined by their construction of knowledge. One’s perception of self and one’s experiences in specific environments are more telling of how meaning is made from faith than how often one attends temple, mosque, or mass services, how well one knows church doctrine, or how much holy scripture one can recite from memory. While it is important to note Fowler writes from a Christian perspective (being professor of theology at the United Methodist-affiliated Emory University in Atlanta, as well as a Methodist minister), his vision of human faith development is not meant to be content-specific. It is meant to be applicable to all faiths, disregarding religious bodies to focus solely on an individual’s spiritual and intellectual growth. Fowler formulated “stages of faith,” drawing inspiration from the developmental theories of Erik Erikson and Lawrence Kohlberg, among others. Upon exploring Fowler’s stages, this comparative analysis will examine the ideas of Kohlberg and Erikson, analyzing how their theoretical structures influenced the formation of Fowler’s work.

According to Stephen Parker’s “Measuring Faith Development,” Fowler’s idea was that faith was formed by many interrelated and developing structures, the interaction of which pinpointed one’s stage (2006, p. 337). “Stage progression, when it occurs, involves movement toward greater complexity and comprehensiveness in each of these structural aspects” (p. 337). The structures include form of logic (one progresses toward concrete and abstract reasoning), perspective taking (one gains the ability to judge things from various viewpoints), form of moral judgement (the improvement of moral reasoning), bounds of social awareness (becoming more open to changing social groups), locus of authority (moving toward self-confidence in internal decision-making), form of world coherence (growing aware of one’s own consciousness and one’s ability to understand the world using one’s own mental power), and symbolic function (increasing understanding that symbols have multiple meanings) (p. 338). These are the bricks that build each stage of faith; as one is able to think in more complex ways, one advances up Fowler’s spiritual levels.

The stages of faith are primal faith (pre-stage), intuitive-projective faith (1), mythic-literal faith (2), synthetic-conventional faith (3), individuative-reflective faith (4), conjunctive faith (5), and universalizing faith (6). According to Fowler, during the pre-stage, an infant cannot conceptualize the idea of “God,” but learns either trust or mistrust during relations with caretakers, which provides a basis for faith development (Parker, p. 339). More will be discussed on this later. In the intuitive-projective stage, a child of preschool age will conceptualize God, though only as “a powerful creature of the imagination, not unlike Superman or Santa Claus.” During the mythic-literal stage, the child will develop “concrete operational thought,” and will view God as a judge who doles out rewards and punishments in a fair manner. In the synthetic-conventional stage, one will develop “formal operational thought”; the idea of a more personal God arises, and one begins to construct meaning from beliefs. The individuative-reflective stage at last brings about self-reflection of one’s beliefs. Parker writes, “This intense, critical reflection on one’s faith (one’s way of making meaning) requires that inconsistencies and paradoxes are vanquished, which may leave one estranged from previously valued faith groups.” As this occurs, and somewhat ironically, God is viewed as the embodiment of truth. Conjunctive faith is a stage in which one attempts to reconcile contradictions; while staying wary of them, he or she may see the nature of God as inherently unknown, a “paradox,” while still being Truth. Where certainty breaks down, acceptance of the diverse beliefs of others grows more pervasive. Fowler suggests the conjunctive stage may occur during midlife. Finally, if one can attain it, the universalizing stage is when one becomes fully inclusive of other people, faiths, and ideas. People hold “firm and clear commitments to values of universal justice and love” (p. 339).

It is important to note these stages do not represent a universal, concrete timetable for faith development. Each stage requires greater critical thinking and self-reflection (which is what makes Fowler’s model applicable to multiple faiths), and therefore not everyone will progress through them at the same rate or even attain the same level of development. Further, the model does not address those who abandon faith completely; it demonstrates only a progressive scale that suggests one either stops where one is or moves toward greater knowledge of self and one’s values, and more open-mindedness in regards to others and the nature of God Himself. For many, faith development may not be so simple, nor so linear. Regardless, Fowler’s work has had a great impact on religious bodies and developmental psychology (Parker, p. 337).

Fowler borrowed much from other theorists. Psychologist and psychoanalyst Erik Erikson (1902-1994) created a model for the psychosocial development of men and women, from which Fowler later drew inspiration. In lieu of a lengthy summary of Erikson’s (and Kohlberg’s) ideas, this comparative analysis will provide a brief overview, and focus more on the aspects that relate most closely to Fowler’s finished product. According to Erikson’s “Life Span Theory of Development,” human growth goes through eight stages, each of which features a crisis that, if successfully conquered, will result in the development of a valuable virtue, such as hope, love, or wisdom. Erikson’s crises were: trust vs. mistrust (infancy), autonomy vs. shame (toddlerhood), initiative vs. guilt (preschool), industry vs. inferiority (childhood), identity vs. role confusion (adolescence), intimacy vs. isolation (young adulthood), generativity vs. stagnation (middle adulthood), and integrity vs. despair (late adulthood) (Dunkel & Sefcek, 2009, p. 14). One’s ability to embody the more positive aspect of one of these pairs makes it likely one will do the same with the next positive aspect (p. 14).

Fowler liked Erikson’s trust vs. mistrust idea, seeing it as the very foundation of faith development. Clearly, trust becomes a critical theme as one is exposed to spiritual beliefs, the “known”-yet-unseen. Can one trust the holy book? Can one trust the priest, rabbi, or parent? It is interesting to consider how the development of trusting or distrusting relationships will affect future spiritual development. What are the results of the trust vs. mistrust conflict? Erikson felt that “for basic trust versus mistrust a marked tendency toward trust results in hope” (Dunkel & Sefcek, p. 13), which implies a lack of hope if unresponsive caretakers breed feelings of mistrust. While it was Erikson who was strictly concerned with the virtues gained from each life stage, Fowler, in adapting Erikson’s first stage, builds into his model a single stage containing a conflict. This raises questions. Can one successfully enter the intuitive-projective stage without building trusting relationships in the infant pre-stage? If so, what is the impact of mistrust in stage 1, and all the following stages? Could it mean different perspectives of God (for instance, perhaps as less fair-minded during the formation of concrete operational thought in the mythic-literal stage)? Would one likely progress through the stages more rapidly, or more slowly? Hypothetically, one less trusting might be quicker to see problems and contradictions in faith, advancing to the individuative-reflective stage sooner. Further, Erikson believed “optimal psychological health is reached when a ‘favorable ratio’ between poles is reached” (p. 13), meaning a positive trust-mistrust ratio is all that’s needed to develop hope and move through the stage. Therefore, “a ‘favorable ratio’ indicates that one can be too trusting” (p. 13). What will be the impact on faith development for someone who has grown too trusting of people? By their nature, both Erikson’s and Fowler’s stages build upon each other. For Erikson, trust made it “more likely the individual will develop along a path that includes a sense of autonomy, industry, identity, intimacy, generativity, and integrity” (p. 14). If Fowler’s model is built on the same principle of trust acquisition, what will happen to faith when the foundation is not ideal?

In reality, Fowler’s model parallels Erikson’s even more closely with regard to Erikson’s psychosocial crises. Erikson saw the individual as being pulled by two opposing forces in each stage, the favoring of the positive force leading to new virtues. On the surface, Fowler’s stages may appear simple and gradual, the progression seeming to occur naturally and expectedly, or at least without specifics on how or why individuals progress to higher levels of critical thinking and new perspectives on God. What takes one from an unexamined faith in the synthetic-conventional stage to taking a long, hard look at contradictions and controversies in the next? It cannot be simple maturation, or everyone would make it to the final stages. There must exist something that holds people back, or drives them forward. Cue Erikson and his crises. Erikson would say the individual must accept the force pushing forward and resist the one pulling backward. In his fifth stage, for instance, that which Dunkel and Sefcek deem “the most important” (p. 14), an adolescent faces the crisis of identity versus role confusion. The adolescent must form an identity in the social world, build convictions, choose who he or she will be (p. 14). Confusion, temptation, and doubt will impede progress. In Fowler’s model, a crisis certainly makes sense, only perhaps less of a ratio or continuum and more of a single event or confrontation. For example, what better way to explain the transition from the intuitive-projective stage to the mythic-literal stage than the moment when the parent tells the child Santa Claus isn’t real? That could begin the shift from imagination to logic, and with it a change in the child’s perception of God. Personally, this author sees his own transition into Fowler’s individuative-reflective stage as beginning the afternoon he read a work by the late evolutionary biologist and Harvard professor Stephen Jay Gould, who pointed out contradictions between the timeline of the Biblical story of Noah and modern archeology. Though different for each individual, such turning points provide Erikson-esque crises that explain one’s advancement through Fowler’s model.

The work of psychologist Lawrence Kohlberg (1927-1987) also inspired Fowler. Fowler’s form of moral reasoning structure was an adaptation of Kohlberg’s “Six Stages of Development in Moral Thought” (Parker, p. 338). Kohlberg theorized that as one ages, the way in which one justifies actions advances through predictable stages. His Pre-Moral stage saw children motivated to make moral decisions through fear of punishment (Type 1), followed by the desire for reward or personal gain (Type 2). Morality of Conventional Role-Conformity was spurred by the desire to avoid the disapproval of peers and to abide by social norms (Type 3), and later the wish to maintain social order by obeying laws and the authorities who enforce them (Type 4). In the Post-Conventional stage, people acknowledge that laws are social contracts agreed upon democratically for the common good, and are thus motivated to behave morally to gain community respect (Type 5). Finally, one begins to see morality as solely within him- or herself: One must be motivated by universal empathy toward others, acting morally because it is just and true, not because it is the law or socially acceptable (Type 6) (Kohlberg, 2008, pp. 9-10). It is not difficult to see how Fowler viewed the development of moral judgement as a crucial building block of the development of faith. Universal morality, like universal faith, is a byproduct of deeper critical thinking, reflection, and cognitive ability.

In that regard, it is easy to see how well Fowler’s six stages and Kohlberg’s six stages align. Both move from perceptions and beliefs borrowed from and influenced by others, and motivated by selfishness, to perceptions and beliefs formed in one’s own mind, motivated by empathy and love. They both advance toward justice for justice’s sake. One might think the stages are pleasantly compatible. What’s fascinating, however, is that Fowler believed the majority of people remained in his third stage, the synthetic-conventional (with the few who advanced usually only doing so in their later years), but Kohlberg showed in his studies with children that “more mature modes of thought (Types 4–6) increased from age 10 through 16, less mature modes (Types 1–2) decreased with age” (Kohlberg, p. 19). (With age, of course, comes factors such as “social experience and cognitive growth” (p. 18).) He saw youths who addressed moral conundrums (such as his famous Heinz Dilemma) with the Golden Rule and utilitarianism (p. 17), noting that “when Type 6 children are asked ‘What is conscience?’, they tend to answer that conscience is a choosing and self-judging function, rather than a feeling of guilt or dread” (p. 18).

Clearly, the post-conventional moral stage can emerge very early in life. While keeping in mind that Fowler’s form of moral reasoning structure may not be a perfect reproduction of Kohlberg’s ideas, it is interesting to consider the contradiction between an adolescent in the synthetic-conventional stage, an era marked by unexamined beliefs, conformity to doctrine, and identity heavily influenced by others, and a “Type 6” adolescent in the post-conventional stage of moral thinking, who uses reason, universal ethics, empathy, and justice to solve moral problems. Would not such rapid moral development lead to more rapid progression through Fowler’s model? With Type 4-6 thinking increasing so early, why do so few ever begin thinking critically about their faith and analyzing its contradictions, and why do those who do begin so late in life? Perhaps it is simply that Type 6 children are such a minority; perhaps it is they who will go on to reach the individuative-reflective stage. It would be intriguing to compare a child’s ability to answer moral dilemmas with his or her perspective on God and faith. How did the children of Kohlberg’s research view God? Surely some believed in God (and thus could be placed on Fowler’s model) and some did not. Was there a positive or negative correlation between moral decisions and faith? Were the children moving through Fowler’s stages more likely or less likely to develop higher types of moral thinking? Or was there no effect at all? Fowler, of course, might say there are too many variables in faith progression, that it requires advancement in multiple interactive structures; even if a child makes it to Kohlberg’s final stage of moral development, there are six other structures that affect one’s spiritual progress that must be taken into account.

While this comparative analysis places an emphasis on Fowler, that is not to say Erikson’s and Kohlberg’s works do not stand on their own, or that their theories somehow automatically validate his. Placing them side by side simply provides an interesting perspective that both raises and answers questions. Whether examining the moral, the psychosocial, or the spiritual, it is clear self-reflection and critical thinking are paramount to development. Kohlberg, Erikson, and Fowler were leaders in their fields because they understood and based their research on this idea. Their combined theories present a convincing case that as one grows, greater cognitive power and the confrontation of new ideas can change perspectives in positive ways, from forming one’s identity to learning love, empathy, and respect for others.


References

Dunkel, C. S., & Sefcek, J. A. (2009). Eriksonian lifespan theory and life history theory: An integration using the example of identity formation. Review of General Psychology, 13(1), 13-23.

Kohlberg, L. (2008). The development of children’s orientations toward a moral order. Human Development, 51, 8-20.

Parker, S. (2006). Measuring faith development. Journal of Psychology and Theology, 34(4), 337-348.

The Philosophy of Morality

Having explored how human morality — ideas and feelings of right and wrong — does not need a god to explain it, instead being the product of our evolutionary history and our unique societies, it is time to address a common criticism of godless morality.

It goes something like this: If morality is purely subjective, if right and wrong do not exist “beyond” or “outside” what humans determine they should be (in other words, are not set by a god), how can one justify telling someone else she has behaved in an immoral way? If a man says rape or murder is morally right, how can another justify saying he is wrong? With no empirical standard of what is ethical, ethics are simply opinions, and why would one human’s opinion have more weight or importance than another’s? Relative morality is meaningless morality.

We can first deal with the obvious point that even if a god-decreed empirical standard exists there is no way for us to know precisely what it is. We’d have to first prove (prove) which god is real and which gods are fictional, then get clarification directly from this being on issues not specifically mentioned in its holy text. So the same question of how one justifies telling another she is wrong haunts the theory of Objective Morality as well. Scriptures are often vague, open to interpretation, so even among those who believe in Objective Morality, and the same God, morals inevitably vary. Conservative and liberal Christians may have different views on right and wrong — on what God’s standards are — based on the exact same holy book! Some Christians firmly believe contraception is a sin. Others disagree. There are debates over what God really thinks about premarital sex and certain sex acts, masturbation, the age of consent, alcohol, and drugs, as well as issues ancient writers couldn’t imagine, from gun control to genetic engineering. While the range of acceptable ethical standards may be more narrow when everyone agrees that Yahweh set Objective Moral laws, individual morals are still very much opinion-based, a matter of human perspective, because such laws are often not comprehensive, clear, or even present in the scriptures. Religious persons somehow think faith-based ethics are on firmer ground and more logical than those based on the works of human philosophers or voluntarily chosen principles like doing no harm, when one cannot prove the faith is true, nor prove its Objective Morality is true, nor even fully know what that Objective Morality would entail. Shifting human values may be problematic, but so are unprovable, unknowable divine ones.

More importantly, the common criticism is an incomplete thought, failing to comprehend the premise.

The premise is indeed that morality is opinion-based. Though rooted in evolution, the society and family one happens to be born into, life experiences, psychological states, and so on, right and wrong are ultimately matters of opinion. The answer to this question (“If morals are human opinions, how can one justify condemning another person’s actions?”) is then obvious: no justification is needed at all. Opinions do not need this kind of justification.

Suppose I were to ask, “What is your favorite color?” and then demand you justify it using an empirical standard, a standard beyond yourself, beyond humanity — beyond human opinion. The very idea is absurd. The concept of a “favorite color” does not exist in any form beyond our individual selves (do you think that it too was decided by God for us humans to follow?). What sense does it make to demand that the person who expresses a favorite color also “backs it up” using some mythological benchmark not set by humans? Opinions about the prettiest color stand on their own footing — the subjective standards of man, not the objective ones of a deity.

In the precise same way, no external justification is needed to say, “What the rapist did was wrong, even if he didn’t think so.” If one states that another person behaved in an immoral way, that is a subjective viewpoint like one’s favorite color; there is no requirement that one justifies saying so using anything other than human thought and reason. Opinions, moral or otherwise, do not need to be measured or validated against standards “beyond” or “outside” humanity.

The religious may believe these things are different, because naturally an Objective Favorite Color does not exist but an Objective Morality does. That’s as impossible to prove as the deity it’s based on, but think that if you wish. Regardless, the statement “You have to justify judging others if you don’t believe in an empirical standard” makes no sense. It’s specifically because one doesn’t believe an empirical standard exists that one doesn’t need to justify judging others! If you don’t believe in an Objective Favorite Color, you do not have to justify your favorite color using that standard. If you don’t believe in Objective Morality, you do not have to justify why you think someone did something immoral using that standard. You can stick to human standards — both individual and collective, which you can use to justify your beliefs (for example, my morality — and that of many others — emphasizes minimizing physical and psychological harm, therefore rape is wrong, therefore the rapist has done wrong).

So if no justification is needed to state your opinion that a murderer has done wrong, if the very act of asking for justification is illogical because it ignores the obvious implication of the premise, what of the rest of the common criticism? If it’s all opinion, doesn’t one have to say all opinions are equal, if we look at things objectively? Any notion that Opinion A has more weight or importance than Opinion B is bunk. Is morality then meaningless?

It is true, if we view all this objectively, that Opinion A and Opinion B, whatever they may be, are indeed “equal,” “equally valid or important,” or however else you’d like to phrase it. How else could it be? If there is no deity, no Final Say, to give the thumbs up or down to moral opinions, that is simply reality. (Without an Objective Favorite Color, “My favorite is blue” and “My favorite is green” are both valid.) Now, this generally makes us uncomfortable or sick because it means that though I think the opinions and ethics of the child molester are detestable and inferior to my own there is no deity to say I am right and he is wrong, so our opinions are equally valid. But that’s not the end to the story, because while opinions are equal their real-world consequences are not.

Some moral views lead to death, physical and psychological pain, misery, terror, and so on. Others do not, or have opposite effects. These are real experiences. So while mere opinions, in and of themselves, can be said to be “equal,” we cannot say the same regarding their actual or possible effects. Some moral views are more physically and psychologically harmful than others. This is quite different than favorite colors.

See, the common criticism has it backwards. A lack of an empirical standard makes opinions meaningful, not meaningless. It’s where an empirical standard exists that opinions don’t matter. Consider an actual empirical standard: the truth (yes, atheists and liberals believe in absolute truth). Either George Washington existed or he didn’t. I say he did, another says he didn’t…one of us is incorrect. When it comes to the truth, opinions don’t matter. The objective truth is independent of our opinion. Morality is different: it is not independent of our opinions (it’s opinion-based, after all), and thus our moral views matter a great deal because some will cause more harm than others. If God exists and determined that killing a girl found to not be a virgin on her wedding night was right, your opinion about killing non-virgin girls on their wedding nights would be meaningless. It wouldn’t matter if you thought this wrong — you’d be incorrect. But if there is no deity-designed standard “beyond” humanity, your opinion is meaningful and matters a great deal because awful real-world consequences can be avoided if your moral opinion is heard and embraced.

“Well, so what?” one might ask. “Why is harm itself wrong? Who says we should consider death and pain ‘wrong’ rather than, say, life and happiness?”

The person who asks this has lost sight of linguistic meaning. What exactly does “wrong” (or “bad” or “evil” or “immoral”) mean? Well, it essentially means undesirable. To say something is wrong is to say it’s disagreeable, intolerable, unacceptable, something that should not be done, something to be avoided.

Why is harm wrong? Harm is wrong because it’s undesirable. To put it another way, asking “Why is harm wrong?” is really asking “Why is harm undesirable?” And the answer is “Because it hurts” — because we are conscious, organic creatures capable of experiencing death, pain, humiliation, grief, and so on. Now, this does not mean everyone will agree on what constitutes harm! That is the human story, after all: a vicious battle of opinions on what is harmful and what isn’t (and thus what’s wrong and what isn’t), with some ideas growing popular even while change awaits on the horizon. We even argue over whether causing harm to prevent a greater harm is right (desirable), as with killing one to save many or going to war to stop the evils of others. But the idea that harm is undesirable is universal, because each human creature has something they would not like to happen to them.

This includes those who bring pain and suffering to others or themselves. The rapist may not wish to be raped; the mullah who supports female genital mutilation may not wish to be castrated; the suicidal person may not wish to be tortured in a basement first; the masochist, who enjoys experiencing pain, may not wish to die; the serial killer may not wish to be left at the altar; the sadist, who loves inflicting pain, may not wish to be paralyzed from the neck down.

As soon as you accept the premise that each person has some form of harm he or she wants to avoid, you’ve accepted that harm is wrong — by definition. Even if our views on what is harmful (or how harmful something is) vary widely, we have a shared foundation built on the actual meanings of the terms we’re using. From this starting point, folk from all sides of an issue present their arguments (for instance, “It is wrong — undesirable — for a starving man to steal because that harms the property owner” vs. “It is right — desirable — for a starving man to steal because if he doesn’t he will die”). Though we individuals do not always do so, we often decide that what’s wrong (undesirable) for us is also wrong for others, because we evolved a capacity for empathy and are often smart enough to know a group living under rules that apply to all can actually protect and benefit us by creating a more stable, cooperative, caring society. The disagreements may be savage, but the important premise, that harm is wrong because it is undesirable, is universally accepted. Things couldn’t be any other way unless you simply wanted to throw out the meaning of words.

The path forward from there is clear, despite the insistence of some that actions need external justification even if moral opinions do not. This is merely another go at an obviously flawed idea. If no external, objective standard is needed to justify moral views, why would you need one to justify actions based on those moral views? You wouldn’t. We justify our actions based on the subjective, human ideas that are our moral views, and then try to popularize our ideas because we think we know best. It’s simply what human creatures do, whether our ideas are in the minority or majority opinion, whether they lead to death and pain or peace and kindness.

Understandably, some may see no sense in individuals objecting to or regulating the ethics of others. If there’s no higher basis for whose idea of morality is true or better, the next question is oftentimes “How is it then logical to tell someone they’re wrong and force them to live by your moral code?” In a word, self-interest. If you think your morality is better, it’s not an illogical decision to try to convince someone else or even force him to abide by it through law. Even if you know there is no external basis to make your morality objectively “better” or “truer,” it’s still a reasonable action for you to take because you see it as better or truer, and know your efforts can work — minds get changed, so do laws, so do societies. For example, I know there’s no external, objective basis for police murder being wrong, but because I personally think it is, I act. I try to change minds, support law changes. The act is a logical step after opinion formation. If I act, I may help win a world I want, one with fewer senseless killings of unarmed people. If you would prefer a world with your moral code adopted, and know acting can bring that about, it almost seems more logical to act than to not act — even if you know all moral views are equivalent — to bring about that different world! “Logical” just means “makes sense,” after all. So each individual tries to shape the world in a certain way they personally like — a rational thing to do, given individual motives, even while knowing no one is “right.” Acting in self-interest is rarely considered irrational.
